Search (20 results, page 1 of 1)

  • × theme_ss:"Semantische Interoperabilität"
  • × type_ss:"a"
  • × type_ss:"el"
  1. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.03
    0.028830929 = product of:
      0.12973918 = sum of:
        0.028470935 = weight(_text_:data in 604) [ClassicSimilarity], result of:
          0.028470935 = score(doc=604,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.24455236 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
        0.10126825 = weight(_text_:germany in 604) [ClassicSimilarity], result of:
          0.10126825 = score(doc=604,freq=2.0), product of:
            0.21956629 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.036818076 = queryNorm
            0.46121946 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
      0.22222222 = coord(2/9)
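The explain tree above follows Lucene's ClassicSimilarity arithmetic: each term weight is queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), and the final score is the sum of matching term weights scaled by the coordination factor. A minimal sketch reproducing the numbers in this tree:

```python
import math

QUERY_NORM = 0.036818076  # queryNorm from the explain tree above

def term_score(freq, idf, field_norm):
    """ClassicSimilarity per-term contribution: queryWeight * fieldWeight."""
    query_weight = idf * QUERY_NORM              # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm  # tf = sqrt(freq)
    return query_weight * field_weight

data    = term_score(2.0, 3.1620505, 0.0546875)  # weight(_text_:data)
germany = term_score(2.0, 5.963546, 0.0546875)   # weight(_text_:germany)
total   = (data + germany) * (2 / 9)             # coord(2/9): 2 of 9 clauses matched
```

Evaluating this reproduces the tree's values (data ≈ 0.0284709, germany ≈ 0.1012683, total ≈ 0.0288309).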
    
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may easily be adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the jQuery JavaScript library.
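SKOS-XL differs from plain SKOS in that labels are reified as first-class resources rather than bare literals. A minimal sketch of the kind of structure such a tool manages, emitting a Turtle fragment by hand (the URIs are illustrative, not from iQvoc):

```python
# Build a tiny SKOS-XL fragment: a concept whose preferred label is an
# skosxl:Label resource carrying the literal form. URIs are hypothetical.
def skosxl_pref_label(concept_uri, label_uri, literal, lang):
    return (
        "@prefix skosxl: <http://www.w3.org/2008/05/skos-xl#> .\n"
        f"<{concept_uri}> skosxl:prefLabel <{label_uri}> .\n"
        f"<{label_uri}> a skosxl:Label ;\n"
        f'    skosxl:literalForm "{literal}"@{lang} .\n'
    )

ttl = skosxl_pref_label(
    "http://example.org/concepts/water",
    "http://example.org/labels/water-en",
    "Water", "en")
```

Because the label is a resource of its own, provenance or editorial metadata can be attached to it, which is what makes SKOS-XL attractive for vocabulary maintenance.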
  2. Nicholson, D.: Help us make HILT's terminology services useful in your information service (2008) 0.03
    0.027166676 = product of:
      0.12225004 = sum of:
        0.076776505 = weight(_text_:readable in 3654) [ClassicSimilarity], result of:
          0.076776505 = score(doc=3654,freq=2.0), product of:
            0.2262076 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.036818076 = queryNorm
            0.33940727 = fieldWeight in 3654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3654)
        0.04547354 = weight(_text_:data in 3654) [ClassicSimilarity], result of:
          0.04547354 = score(doc=3654,freq=10.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.39059696 = fieldWeight in 3654, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3654)
      0.22222222 = coord(2/9)
    
    Abstract
    The JISC-funded HILT project is looking to make contact with staff in information services or projects interested in helping it test and refine its developing terminology services. The project is currently working to create pilot web services that will deliver machine-readable terminology and cross-terminology mappings data likely to be useful to information services wishing to extend or enhance the efficacy of their subject search or browse services. Based on SRW/U, SOAP, and SKOS, the HILT facilities, when fully operational, will permit such services to improve their own subject search and browse mechanisms by using HILT data in a fashion transparent to their users. On request, HILT will serve up machine-processable data on individual subject schemes (broader terms, narrower terms, hierarchy information, preferred and non-preferred terms, and so on) and interoperability data (usually intellectual or automated mappings between schemes, but the architecture allows for the use of other methods) - data that can be used to enhance user services. The project is also developing an associated toolkit that will help service technical staff to embed HILT-related functionality into their services. The primary aim is to serve JISC funded information services or services at JISC institutions, but information services outside the JISC domain may also find the proposed services useful and wish to participate in the test and refine process.
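The machine-processable per-scheme data described above (preferred and non-preferred terms, broader/narrower links, cross-scheme mappings) could be modeled roughly as follows; the field names are illustrative assumptions, not the actual HILT response format:

```python
# Hypothetical in-memory model of the term data a HILT-style service serves.
term_record = {
    "scheme": "DDC",
    "term": "Oceanography",
    "preferred": True,
    "non_preferred": ["Oceanology"],           # lead-in terms
    "broader": ["Earth sciences"],
    "narrower": ["Marine sediments", "Underwater exploration"],
    "mappings": {"LCSH": ["Oceanography"]},    # cross-terminology mappings
}

def hierarchy(record):
    """All hierarchy links a client could use to extend browse/search."""
    return record["broader"] + record["narrower"]
```

A consuming service would use such links transparently to widen or narrow a user's subject query, as the abstract describes.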
  3. Manguinhas, H.; Charles, V.; Isaac, A.; Miles, T.; Lima, A.; Neroulidis, A.; Ginouves, V.; Atsidis, D.; Hildebrand, M.; Brinkerink, M.; Gordea, S.: Linking subject labels in cultural heritage metadata to MIMO vocabulary using CultuurLink (2016) 0.03
    0.02695852 = product of:
      0.12131333 = sum of:
        0.034511987 = weight(_text_:data in 3107) [ClassicSimilarity], result of:
          0.034511987 = score(doc=3107,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.29644224 = fieldWeight in 3107, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3107)
        0.08680135 = weight(_text_:germany in 3107) [ClassicSimilarity], result of:
          0.08680135 = score(doc=3107,freq=2.0), product of:
            0.21956629 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.036818076 = queryNorm
            0.39533097 = fieldWeight in 3107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.046875 = fieldNorm(doc=3107)
      0.22222222 = coord(2/9)
    
    Abstract
    The Europeana Sounds project aims to increase the amount of cultural audio content in Europeana. It also strongly focuses on enriching the metadata records that are aggregated by Europeana. To provide metadata to Europeana, Data Providers are asked to convert their records from the format and model they use internally to a specific profile of the Europeana Data Model (EDM) for sound resources. These metadata include subjects, which typically use a vocabulary internal to each partner.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  4. Takhirov, N.; Aalberg, T.; Duchateau, F.; Zumer, M.: FRBR-ML: a FRBR-based framework for semantic interoperability (2012) 0.02
    0.021186067 = product of:
      0.0953373 = sum of:
        0.049321324 = weight(_text_:bibliographic in 134) [ClassicSimilarity], result of:
          0.049321324 = score(doc=134,freq=8.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.34409973 = fieldWeight in 134, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=134)
        0.04601598 = weight(_text_:data in 134) [ClassicSimilarity], result of:
          0.04601598 = score(doc=134,freq=16.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.3952563 = fieldWeight in 134, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=134)
      0.22222222 = coord(2/9)
    
    Abstract
    Metadata related to cultural items such as literature, music and movies is a valuable resource that is currently exploited in many applications and services based on semantic web technologies. A vast amount of such information has been created by memory institutions in the last decades using different standard or ad hoc schemas, and a main challenge is to make this legacy data accessible as reusable semantic data. On the one hand, this is a syntactic problem that can be solved by transforming to formats compatible with the tools and services used for semantics-aware services. On the other hand, this is a semantic problem. Simply transforming from one format to another does not automatically enable semantic interoperability, and legacy data often needs to be reinterpreted as well as transformed. The conceptual model in the Functional Requirements for Bibliographic Records (FRBR), initially developed as a conceptual framework for library standards and systems, is a major step towards a shared semantic model of the products of artistic and intellectual endeavor of mankind. The model is generally accepted as sufficiently generic to serve as a conceptual framework for a broad range of cultural heritage metadata. Unfortunately, the existing large body of legacy data makes a transition to this model difficult. For instance, most bibliographic data is still only available in various MARC-based formats, which are hard to render into reusable and meaningful semantic data. Making legacy bibliographic data accessible as semantic data is a complex problem that includes interpreting and transforming the information. In this article, we present our work on transforming and enhancing legacy bibliographic information into a representation where the structure and semantics of the FRBR model are explicit.
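The FRBRization the abstract describes can be illustrated by grouping flat, record-oriented data into the FRBR entity hierarchy. This is a toy sketch of the general idea, not the FRBR-ML algorithm itself: works are keyed by title and creator, expressions by language, and each record becomes a manifestation:

```python
from collections import defaultdict

# Flat MARC-like records (toy data, illustrative fields only).
records = [
    {"title": "Hamlet", "author": "Shakespeare", "lang": "en", "format": "print"},
    {"title": "Hamlet", "author": "Shakespeare", "lang": "en", "format": "ebook"},
    {"title": "Hamlet", "author": "Shakespeare", "lang": "de", "format": "print"},
]

# work -> expression -> list of manifestations
works = defaultdict(lambda: defaultdict(list))
for rec in records:
    work = (rec["title"], rec["author"])   # work: abstract creation
    expression = rec["lang"]               # expression: realization (here, language)
    works[work][expression].append(rec["format"])  # manifestation: embodiment
```

Three flat records collapse into one work with two expressions, of which the English one has two manifestations; the hard part in practice, as the abstract notes, is that real MARC data must be reinterpreted before such keys can be derived reliably.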
  5. Suchowolec, K.; Lang, C.; Schneider, R.: Re-designing online terminology resources for German grammar (2016) 0.02
    0.02059352 = product of:
      0.09267084 = sum of:
        0.020336384 = weight(_text_:data in 3108) [ClassicSimilarity], result of:
          0.020336384 = score(doc=3108,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.17468026 = fieldWeight in 3108, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3108)
        0.07233446 = weight(_text_:germany in 3108) [ClassicSimilarity], result of:
          0.07233446 = score(doc=3108,freq=2.0), product of:
            0.21956629 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.036818076 = queryNorm
            0.32944247 = fieldWeight in 3108, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3108)
      0.22222222 = coord(2/9)
    
    Abstract
    The compilation of terminological vocabularies plays a central role in the organization and retrieval of scientific texts. Both simple keyword lists and sophisticated modellings of relationships between terminological concepts can make a most valuable contribution to the analysis, classification, and finding of appropriate digital documents, either on the Web or within local repositories. This seems especially true for long-established scientific fields with various theoretical and historical branches, such as linguistics, where the use of terminology within documents from different origins is sometimes far from consistent. In this short paper, we report on the early stages of a project that aims at the re-design of an existing domain-specific KOS for grammatical content, grammis. In particular, we deal with the terminological part of grammis and present the state of the art of this online resource as well as the key re-design principles. Further, we propose questions regarding the ramifications of the Linked Open Data and Semantic Web approaches for our re-design decisions.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  6. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.02
    0.017972346 = product of:
      0.08087556 = sum of:
        0.02300799 = weight(_text_:data in 3109) [ClassicSimilarity], result of:
          0.02300799 = score(doc=3109,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.19762816 = fieldWeight in 3109, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=3109)
        0.057867568 = weight(_text_:germany in 3109) [ClassicSimilarity], result of:
          0.057867568 = score(doc=3109,freq=2.0), product of:
            0.21956629 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.036818076 = queryNorm
            0.26355398 = fieldWeight in 3109, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.03125 = fieldNorm(doc=3109)
      0.22222222 = coord(2/9)
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing imaged navigation in OPACs offers multiple advantages derived from rethinking the OPAC anew, since we look forward to sharing concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. iOPAC embodies efforts focused on conceptual levels as expected from librarians. Imaged interfaces are more intuitive since users do not need specific training for information retrieval, offering easier comprehension of indexing codes, larger conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, positively affecting social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. This methodology raises questions about the paradigm of the primacy of orality in information systems and paves the way to a legitimacy of multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinarity in neurosciences, linguistics and information sciences would be a desirable competency for further investigations into the nature of cognitive processes in information organization and classification while developing assistive KOS for individuals with communication problems, such as autism and deafness.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  7. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.02
    0.015549123 = product of:
      0.069971055 = sum of:
        0.048807316 = weight(_text_:data in 1967) [ClassicSimilarity], result of:
          0.048807316 = score(doc=1967,freq=8.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.4192326 = fieldWeight in 1967, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.02116374 = product of:
          0.04232748 = sum of:
            0.04232748 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.04232748 = score(doc=1967,freq=4.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  8. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.012827373 = product of:
      0.05772318 = sum of:
        0.040263984 = weight(_text_:data in 759) [ClassicSimilarity], result of:
          0.040263984 = score(doc=759,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.34584928 = fieldWeight in 759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.017459193 = product of:
          0.034918386 = sum of:
            0.034918386 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.034918386 = score(doc=759,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry- and domain-specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
  9. Carbonaro, A.; Santandrea, L.: ¬A general Semantic Web approach for data analysis on graduates statistics 0.01
    0.0063911085 = product of:
      0.057519976 = sum of:
        0.057519976 = weight(_text_:data in 5309) [ClassicSimilarity], result of:
          0.057519976 = score(doc=5309,freq=16.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.49407038 = fieldWeight in 5309, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5309)
      0.11111111 = coord(1/9)
    
    Abstract
    Currently, several datasets released in Linked Open Data format are available at national and international level, but the lack of shared strategies for defining concepts in the statistical publishing community makes it difficult to compare facts drawn from different data sources. In order to guarantee a shared representation framework for the dissemination of statistical concepts about graduates, we developed SW4AL, an ontology-based system for the graduates' surveys domain. The developed system transforms low-level data into an enriched information model and is based on the AlmaLaurea surveys, covering more than 90% of Italian graduates. SW4AL: i) semantically describes the different peculiarities of the graduates; ii) promotes the structured definition of the AlmaLaurea data and its subsequent publication in the Linked Open Data context; iii) provides for their reuse in the open data scope; iv) enables logical reasoning about knowledge representation. SW4AL establishes a common semantics for the graduates' surveys domain by proposing the creation of a SPARQL endpoint and a Web-based interface for querying and visualizing the structured data.
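A SPARQL endpoint like the one proposed would answer queries over the enriched graduate data. This is a purely illustrative query (the vocabulary URIs are hypothetical, not SW4AL's actual ontology terms):

```python
# Hypothetical SPARQL query against a SW4AL-style endpoint: count graduates
# per degree programme. All example.org terms are invented for illustration.
query = """
PREFIX ex: <http://example.org/sw4al#>
SELECT ?degree (COUNT(?grad) AS ?n)
WHERE {
  ?grad a ex:Graduate ;
        ex:degreeProgramme ?degree .
}
GROUP BY ?degree
ORDER BY DESC(?n)
"""
```

In practice such a query string would be sent to the endpoint over HTTP (e.g. via a SPARQL client library), and the shared ontology is what makes the same query meaningful across otherwise heterogeneous statistical sources.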
  10. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.01
    0.0050526154 = product of:
      0.04547354 = sum of:
        0.04547354 = weight(_text_:data in 3870) [ClassicSimilarity], result of:
          0.04547354 = score(doc=3870,freq=10.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.39059696 = fieldWeight in 3870, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3870)
      0.11111111 = coord(1/9)
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. While harvested and manipulated by hand, the work reveals issues and potential solutions when using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine digital English-language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan, upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books - some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with meaningful thematic overlay, library holding count data was also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
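The aggregation step mentioned above (merging holding counts harvested from duplicate records before overlaying them on the map) can be sketched in a few lines; the titles and counts here are invented placeholders:

```python
from collections import Counter

# Toy harvested (title, holdings-count) pairs with duplicate records.
harvested = [
    ("Guerrilla Warfare", 120),
    ("Guerrilla Warfare", 35),   # duplicate record for the same title
    ("The 9/11 Commission Report", 410),
]

totals = Counter()
for title, count in harvested:
    totals[title] += count       # aggregate duplicates into one total per title
```

The per-title totals then serve as the thematic overlay signal, distinguishing widely held books from scarce ones.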
  11. Suominen, O.; Hyvönen, N.: From MARC silos to Linked Data silos? (2017) 0.00
    0.0046964865 = product of:
      0.042268377 = sum of:
        0.042268377 = weight(_text_:data in 3732) [ClassicSimilarity], result of:
          0.042268377 = score(doc=3732,freq=6.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.3630661 = fieldWeight in 3732, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3732)
      0.11111111 = coord(1/9)
    
    Abstract
    For some time now, libraries have increasingly been publishing their bibliographic metadata openly as Linked Data. However, quite different models are used for structuring the bibliographic data. Some libraries use an FRBR-based model with several layers of entities, while others use flat, record-oriented models. This proliferation of data models makes reuse of the bibliographic data difficult. As a result, libraries have merely traded their former MARC silos for mutually incompatible Linked Data silos. It is therefore often difficult to combine and reuse datasets. Minor differences in data modelling can be handled with schema mappings, but it seems questionable whether interoperability has increased overall. The paper presents the results of a study of several published sets of bibliographic data. It also examines the different models for representing bibliographic data as RDF, as well as tools for generating such data from the MARC format. Finally, the approach taken by the National Library of Finland is discussed.
  12. Kahlawi, A.: ¬An ontology driven ESCO LOD quality enhancement (2020) 0.00
    0.0046964865 = product of:
      0.042268377 = sum of:
        0.042268377 = weight(_text_:data in 5959) [ClassicSimilarity], result of:
          0.042268377 = score(doc=5959,freq=6.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.3630661 = fieldWeight in 5959, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5959)
      0.11111111 = coord(1/9)
    
    Abstract
    The labor market is a complex system that is difficult to manage. To address this challenge, the European Union launched the ESCO project, a language that aims to describe this labor market. To support the uptake of this project, its dataset was published as linked open data (LOD). For LOD to be usable and reusable, a set of conditions has to be met. First, the LOD must be feasible and of high quality. In addition, it must provide the user with the right answers, and it has to be built according to a clear and correct structure. This study investigates the LOD of ESCO, focusing on data quality and data structure. The former is evaluated by applying a set of SPARQL queries. This provides solutions to improve its quality via a set of rules built in first-order logic. This process was conducted based on a newly proposed ESCO ontology.
  13. Kollia, I.; Tzouvaras, V.; Drosopoulos, N.; Stamou, G.: ¬A systemic approach for effective semantic access to cultural content (2012) 0.00
    0.0039137388 = product of:
      0.035223648 = sum of:
        0.035223648 = weight(_text_:data in 130) [ClassicSimilarity], result of:
          0.035223648 = score(doc=130,freq=6.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.30255508 = fieldWeight in 130, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=130)
      0.11111111 = coord(1/9)
    
    Abstract
    A large on-going activity for digitization, dissemination and preservation of cultural heritage is taking place in Europe, the United States and the world, which involves all types of cultural institutions, i.e., galleries, libraries, museums, archives, and all types of cultural content. The development of Europeana, as a single point of access to European cultural heritage, has probably been the most important result of the activities in the field to date. Semantic interoperability, linked open data, user involvement and user-generated content are key issues in these developments. This paper presents a system that provides content providers and users the ability to map, in an effective way, their own metadata schemas to common domain standards and the Europeana (ESE, EDM) data models. The system is currently in wide use by many European research projects and Europeana. Based on these mappings, semantic query answering techniques are proposed as a means of effective access to digital cultural heritage, providing users with content enrichment, linking of data based on their involvement, and facilitating content search and retrieval. An experimental study is presented, involving content from national content aggregators, as well as thematic content aggregators and Europeana, which illustrates the proposed system.
  14. Wenige, L.; Ruhland, J.: Similarity-based knowledge graph queries for recommendation retrieval (2019) 0.00
    0.0039137388 = product of:
      0.035223648 = sum of:
        0.035223648 = weight(_text_:data in 5864) [ClassicSimilarity], result of:
          0.035223648 = score(doc=5864,freq=6.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.30255508 = fieldWeight in 5864, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
      0.11111111 = coord(1/9)
    
    Abstract
    Current retrieval and recommendation approaches rely on hard-wired data models. This hinders personalized customizations to meet information needs of users in a more flexible manner. Therefore, the paper investigates how similarity-based retrieval strategies can be combined with graph queries to enable users or system providers to explore repositories in the Linked Open Data (LOD) cloud more thoroughly. For this purpose, we developed novel content-based recommendation approaches. They rely on concept annotations of Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests for openly available knowledge graphs. We have comprehensively evaluated the novel search strategies in several test cases and example application domains (i.e., travel search and multimedia retrieval). The results of the web-based online experiments showed that our approaches increase the recall and diversity of recommendations or at least provide a competitive alternative strategy of resource access when conventional methods do not provide helpful suggestions. The findings may be of use for Linked Data-enabled recommender systems (LDRS) as well as for semantic search engines that can consume LOD resources.
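The content-based retrieval idea described above can be sketched as ranking items by the overlap of their SKOS concept annotations with a seed item's annotations. This toy example uses Jaccard similarity over invented concept sets; the paper's actual approaches are SPARQL-based and richer:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of concept annotations."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical items annotated with SKOS concepts (illustrative identifiers).
annotations = {
    "item1": {"ex:travel", "ex:beach"},
    "item2": {"ex:travel", "ex:city"},
    "item3": {"ex:music"},
}

seed = annotations["item1"]
ranking = sorted(
    ((jaccard(seed, concepts), item)
     for item, concepts in annotations.items() if item != "item1"),
    reverse=True)
```

Here item2 ranks first because it shares the ex:travel concept with the seed; in a LOD setting the broader/narrower structure of the vocabulary can additionally be exploited so that near-miss concepts still contribute to similarity.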
  15. Lange, C.; Mossakowski, T.; Galinski, C.; Kutz, O.: Making heterogeneous ontologies interoperable through standardisation : a Meta Ontology Language to be standardised: Ontology Integration and Interoperability (OntoIOp) (2011) 0.00
    0.0038346653 = product of:
      0.034511987 = sum of:
        0.034511987 = weight(_text_:data in 50) [ClassicSimilarity], result of:
          0.034511987 = score(doc=50,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.29644224 = fieldWeight in 50, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=50)
      0.11111111 = coord(1/9)
    
    Abstract
    Assistive technology, especially for persons with disabilities, increasingly relies on electronic communication among users, between users and their devices, and among these devices. Making such ICT accessible and inclusive often requires remedial programming, which tends to be costly or even impossible. We therefore aim at more interoperable devices, services accessing these devices, and content delivered by these services, at the levels of (1) data and metadata, (2) data models and data modelling methods, and (3) metamodels as well as a meta ontology language. Even though ontologies are widely used to enable content interoperability, there is currently no unified framework for ontology interoperability itself. This paper outlines the design considerations underlying OntoIOp (Ontology Integration and Interoperability), a new standardisation activity in ISO/TC 37/SC 3 aiming to become an international standard and to fill this gap.
  16. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi­automatic matching­procedure for building up vocabulary crosswalks (2013) 0.00
    0.0038346653 = product of:
      0.034511987 = sum of:
        0.034511987 = weight(_text_:data in 989) [ClassicSimilarity], result of:
          0.034511987 = score(doc=989,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.29644224 = fieldWeight in 989, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=989)
      0.11111111 = coord(1/9)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated, high-quality search scenarios in distributed data environments. Offered through the web and linked with each other, they act as a central link that lets users move back and forth between different data sources available online. In the past, crosswalks between thesauri have primarily been developed manually, and in the long run the intellectual maintenance of such crosswalks incurs considerable personnel expense. The integration of automatic matching procedures, such as ontology matching tools, is therefore an obvious need. On the basis of computer-generated correspondences between the Thesaurus for Economics (STW) and the Thesaurus for the Social Sciences (TheSoz), our contribution explores approaches that combine IT-assisted tools and procedures on the one hand with external quality measurement by domain experts on the other. The techniques that emerge enable semi-automatically performed vocabulary crosswalks.
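    A semi-automatic matching step of the kind described can be sketched as lexical matching over vocabulary labels, with strong candidates accepted automatically and weaker ones routed to a domain expert for review. The toy labels and the 0.8 threshold below are illustrative assumptions, not the actual STW/TheSoz setup:

    ```python
    from difflib import SequenceMatcher

    def match_vocabularies(source, target, auto_threshold=0.8):
        """Propose crosswalk candidates by label similarity.

        Returns (accepted, for_review): pairs at or above the threshold are
        accepted automatically; weaker candidates are queued for expert review.
        """
        accepted, for_review = [], []
        for s in source:
            best = max(target,
                       key=lambda t: SequenceMatcher(None, s.lower(), t.lower()).ratio())
            score = SequenceMatcher(None, s.lower(), best.lower()).ratio()
            if score >= auto_threshold:
                accepted.append((s, best, score))
            elif score >= 0.5:
                for_review.append((s, best, score))
        return accepted, for_review

    # Toy labels standing in for two thesauri
    stw = ["Monetary policy", "Labour market"]
    thesoz = ["monetary policy", "labor market", "migration"]
    auto, review = match_vocabularies(stw, thesoz)
    ```

    Real matching tools combine several such measures (lexical, structural, instance-based); the expert-review queue is where the "external quality measurement" of the abstract comes in.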
  17. Doerr, M.: Semantic problems of thesaurus mapping (2001) 0.00
    0.0031955543 = product of:
      0.028759988 = sum of:
        0.028759988 = weight(_text_:data in 5902) [ClassicSimilarity], result of:
          0.028759988 = score(doc=5902,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.24703519 = fieldWeight in 5902, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5902)
      0.11111111 = coord(1/9)
    
    Abstract
    With networked information access to heterogeneous data sources, the problem of terminology provision and interoperability of controlled vocabulary schemes such as thesauri becomes increasingly urgent. Solutions are needed to improve the performance of full-text retrieval systems and to guide the design of controlled terminology schemes for use in structured data, including metadata. Thesauri are created in different languages, with different scope and points of view and at different levels of abstraction and detail, to accommodate access to a specific group of collections. In any wider search accessing distributed collections, the user would like to start with familiar terminology and let the system find the correspondences to other terminologies in order to retrieve equivalent results from all addressed collections. This paper investigates possible semantic differences that may hinder the unambiguous mapping and transition from one thesaurus to another. It focuses on the differences of meaning of terms and their relations as intended by their creators for indexing and querying a specific collection, in contrast to methods investigating the statistical relevance of terms for objects in a collection. It develops a notion of optimal mapping, paying particular attention to the intellectual quality of mappings between terms from different vocabularies and to problems of polysemy. Proposals are made to limit the vagueness introduced by the transition from one vocabulary to another. The paper shows ways in which thesaurus creators can improve their methodology to meet the challenges of networked access to distributed collections created under varying conditions. For system implementers, the discussion will lead to a better understanding of the complexity of the problem
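    The fallback from exact equivalence to a broader-term mapping that this line of work deals with can be sketched with SKOS-style relation names. The mini-thesauri below are invented for illustration, not taken from the paper:

    ```python
    # Hypothetical source thesaurus: each term points to its broader term (or None).
    SOURCE_BROADER = {"sloop": "sailing vessel", "sailing vessel": "vessel", "vessel": None}
    # Hypothetical target vocabulary with coarser coverage.
    TARGET_TERMS = {"sailing vessel", "vessel"}

    def map_term(term, source_broader, target_terms):
        """Map a source term into a target vocabulary.

        Prefer an exact match; otherwise walk up the broader-term chain and
        return the closest broader match, labelled with SKOS-style relation
        names (exactMatch / broadMatch).
        """
        if term in target_terms:
            return term, "exactMatch"
        current = source_broader.get(term)
        while current is not None:
            if current in target_terms:
                return current, "broadMatch"
            current = source_broader.get(current)
        return None, "noMatch"
    ```

    A broadMatch retrieves a superset of the intended results, which is often the least-bad option when the target vocabulary lacks an exact equivalent; the paper's point is that deciding when such a substitution preserves the creators' intended meaning requires intellectual, not merely statistical, judgement.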
  18. Naudet, Y.; Latour, T.; Chen, D.: ¬A Systemic approach to Interoperability formalization (2009) 0.00
    0.0027115175 = product of:
      0.024403658 = sum of:
        0.024403658 = weight(_text_:data in 2740) [ClassicSimilarity], result of:
          0.024403658 = score(doc=2740,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2096163 = fieldWeight in 2740, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2740)
      0.11111111 = coord(1/9)
    
    Content
    Cf.: http://www.nt.ntnu.no/users/skoge/prost/proceedings/ifac2008/data/papers/2889.pdf.
  19. Krötzsch, M.; Hitzler, P.; Ehrig, M.; Sure, Y.: Category theory in ontology research : concrete gain from an abstract approach (2004 (?)) 0.00
    0.0027115175 = product of:
      0.024403658 = sum of:
        0.024403658 = weight(_text_:data in 4538) [ClassicSimilarity], result of:
          0.024403658 = score(doc=4538,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2096163 = fieldWeight in 4538, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4538)
      0.11111111 = coord(1/9)
    
    Abstract
    The focus of research on representing and reasoning with knowledge has traditionally been on single specifications and appropriate inference paradigms to draw conclusions from such data. Accordingly, this is also an essential aspect of ontology research, which has received much attention in recent years. But ontologies introduce another new challenge based on the distributed nature of most of their applications, which requires relating heterogeneous ontological specifications and integrating information from multiple sources. These problems have of course been recognized, but many current approaches still lack the deep formal backgrounds on which today's reasoning paradigms are already founded. Here we propose category theory as a well-explored and very extensive mathematical foundation for modelling distributed knowledge. A particular prospect is to derive conclusions from the structure of those distributed knowledge bases, as is needed, for example, when merging ontologies
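    The categorical construction usually invoked for ontology merging is the pushout: two ontologies sharing a common interface are glued along that interface. A minimal sketch in the category of sets, assuming an injective alignment (all concept names and the alignment itself are invented for illustration):

    ```python
    def pushout(shared, f, g, onto_a, onto_b):
        """Pushout of onto_a <-f- shared -g-> onto_b as a merged set of names.

        f and g embed the shared interface into the two ontologies; the merge
        is the disjoint union with f(s) and g(s) identified for each shared s.
        """
        glued = {f[s]: g[s] for s in shared}   # identify f(s) with g(s)
        merged = set(onto_b)
        for c in onto_a:
            merged.add(glued.get(c, c))        # rename aligned A-concepts to B's names
        return merged

    shared = {"Agent"}
    f = {"Agent": "Person"}          # how the interface sits in ontology A
    g = {"Agent": "Human"}           # how the interface sits in ontology B
    onto_a = {"Person", "Student"}
    onto_b = {"Human", "Professor"}
    merged = pushout(shared, f, g, onto_a, onto_b)
    ```

    "Person" and "Human" collapse into one concept because both arise from the shared "Agent", while "Student" and "Professor" survive unchanged; the full categorical treatment extends this from bare concept sets to structured ontological theories.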
  20. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.00
    0.0016627803 = product of:
      0.014965023 = sum of:
        0.014965023 = product of:
          0.029930046 = sum of:
            0.029930046 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
              0.029930046 = score(doc=4820,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.23214069 = fieldWeight in 4820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4820)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    3.12.2016 18:39:22