Search (19 results, page 1 of 1)

  • × theme_ss:"Semantische Interoperabilität"
  • × type_ss:"el"
  • × year_i:[2010 TO 2020}
  1. Wenige, L.; Ruhland, J.: Similarity-based knowledge graph queries for recommendation retrieval (2019) 0.01
    0.011211801 = product of:
      0.044847205 = sum of:
        0.032420702 = weight(_text_:retrieval in 5864) [ClassicSimilarity], result of:
          0.032420702 = score(doc=5864,freq=8.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33420905 = fieldWeight in 5864, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
        0.012426502 = product of:
          0.024853004 = sum of:
            0.024853004 = weight(_text_:system in 5864) [ClassicSimilarity], result of:
              0.024853004 = score(doc=5864,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.24605882 = fieldWeight in 5864, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5864)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
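    The indented tree above is Lucene's ClassicSimilarity "explain" output for this hit's tf-idf score. As a minimal sketch (pure Python, reproducing the freq, docFreq, maxDocs, fieldNorm and queryNorm values shown above; this mirrors the classic Lucene tf-idf formula and is not code from the underlying search service):

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
          # tf = sqrt(freq); queryWeight = idf * queryNorm
          # fieldWeight = tf * idf * fieldNorm; score = queryWeight * fieldWeight
          tf = math.sqrt(freq)
          term_idf = idf(doc_freq, max_docs)
          return (term_idf * query_norm) * (tf * term_idf * field_norm)

      QUERY_NORM = 0.032069415  # taken from the explain tree above

      retrieval = term_score(8.0, 5836, 44218, 0.0390625, QUERY_NORM)  # ~0.03242
      system    = term_score(4.0, 5152, 44218, 0.0390625, QUERY_NORM)  # ~0.02485

      # coord factors as shown: the "system" clause is halved (coord 1/2),
      # and the whole sum is scaled by coord(2/8).
      total = (retrieval + system * 0.5) * 0.25
      print(round(total, 9))  # ~0.011211801, matching the displayed 0.01 score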
    
    Abstract
    Current retrieval and recommendation approaches rely on hard-wired data models. This hinders personalized customizations to meet information needs of users in a more flexible manner. Therefore, the paper investigates how similarity-based retrieval strategies can be combined with graph queries to enable users or system providers to explore repositories in the Linked Open Data (LOD) cloud more thoroughly. For this purpose, we developed novel content-based recommendation approaches. They rely on concept annotations of Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests for openly available knowledge graphs. We have comprehensively evaluated the novel search strategies in several test cases and example application domains (i.e., travel search and multimedia retrieval). The results of the web-based online experiments showed that our approaches increase the recall and diversity of recommendations or at least provide a competitive alternative strategy of resource access when conventional methods do not provide helpful suggestions. The findings may be of use for Linked Data-enabled recommender systems (LDRS) as well as for semantic search engines that can consume LOD resources.
    Content
    Cf.: https://www.researchgate.net/publication/333358714_Similarity-based_knowledge_graph_queries_for_recommendation_retrieval. Cf. also: http://semantic-web-journal.net/content/similarity-based-knowledge-graph-queries-recommendation-retrieval-1.
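    As a rough illustration of the kind of similarity-based graph query the abstract describes (not the authors' actual query language; the endpoint URL, the seed resource and the use of dct:subject for concept annotations are hypothetical stand-ins), a content-based recommendation can be approximated in SPARQL by ranking resources by the number of SKOS concept annotations they share with a seed resource:

      from SPARQLWrapper import SPARQLWrapper, JSON

      # Hypothetical LOD endpoint and seed resource; dct:subject is one common
      # way concept annotations are attached to resources in LOD datasets.
      endpoint = SPARQLWrapper("https://example.org/sparql")
      endpoint.setQuery("""
      PREFIX dct: <http://purl.org/dc/terms/>
      SELECT ?candidate (COUNT(?concept) AS ?shared)
      WHERE {
        <https://example.org/resource/seed> dct:subject ?concept .
        ?candidate dct:subject ?concept .
        FILTER (?candidate != <https://example.org/resource/seed>)
      }
      GROUP BY ?candidate
      ORDER BY DESC(?shared)
      LIMIT 10
      """)
      endpoint.setReturnFormat(JSON)

      for row in endpoint.query().convert()["results"]["bindings"]:
          print(row["candidate"]["value"], row["shared"]["value"])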
  2. Kollia, I.; Tzouvaras, V.; Drosopoulos, N.; Stamou, G.: ¬A systemic approach for effective semantic access to cultural content (2012) 0.01
    0.007857411 = product of:
      0.031429645 = sum of:
        0.016210351 = weight(_text_:retrieval in 130) [ClassicSimilarity], result of:
          0.016210351 = score(doc=130,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.16710453 = fieldWeight in 130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=130)
        0.015219294 = product of:
          0.030438587 = sum of:
            0.030438587 = weight(_text_:system in 130) [ClassicSimilarity], result of:
              0.030438587 = score(doc=130,freq=6.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.30135927 = fieldWeight in 130, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=130)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    A large ongoing activity for the digitization, dissemination and preservation of cultural heritage is taking place in Europe, the United States and across the world, involving all types of cultural institutions, i.e., galleries, libraries, museums and archives, and all types of cultural content. The development of Europeana, as a single point of access to European cultural heritage, has probably been the most important result of these activities so far. Semantic interoperability, linked open data, user involvement and user-generated content are key issues in these developments. This paper presents a system that provides content providers and users with the ability to map, in an effective way, their own metadata schemas to common domain standards and the Europeana (ESE, EDM) data models. The system is currently widely used by many European research projects and by Europeana. Based on these mappings, semantic query answering techniques are proposed as a means for effective access to digital cultural heritage, providing users with content enrichment and linking of data based on their involvement, and facilitating content search and retrieval. An experimental study is presented, involving content from national content aggregators as well as thematic content aggregators and Europeana, which illustrates the proposed system.
  3. Balakrishnan, U.; Voß, J.: ¬The Cocoda mapping tool (2015) 0.01
    0.005011449 = product of:
      0.020045796 = sum of:
        0.011347245 = weight(_text_:retrieval in 4205) [ClassicSimilarity], result of:
          0.011347245 = score(doc=4205,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.11697317 = fieldWeight in 4205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4205)
        0.008698551 = product of:
          0.017397102 = sum of:
            0.017397102 = weight(_text_:system in 4205) [ClassicSimilarity], result of:
              0.017397102 = score(doc=4205,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.17224117 = fieldWeight in 4205, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4205)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Since the 1990s we have seen an explosion of information, and with it an increasing need for data and information aggregation systems that store and manage information. However, most information sources apply different Knowledge Organization Systems (KOS) to describe the content of the stored data. This heterogeneous mix of KOS in different systems complicates access and the seamless sharing of information and knowledge. Concordances, also known as cross-concordances or terminology mappings, map different KOS to each other to improve information retrieval in such a heterogeneous mix of systems (Mayr 2010, Keil 2012). Mappings are also considered a valuable and essential working tool for coherent indexing with different terminologies. However, despite efforts at standardization (e.g. SKOS, ISO 25964-2, Keil 2012, Soergel 2011), there is a significant scarcity of concordances, which has led to an inability to establish uniform exchange formats as well as methods and tools for maintaining mappings and making them easily accessible. This is particularly true in the field of library classification schemes. In essence, there is a lack of infrastructure for the provision and exchange of concordances, their management and quality assessment, as well as tools that would enable semi-automatic generation of mappings. The project "coli-conc" therefore aims to address this gap by creating the necessary infrastructure. This includes the specification of a data format for the exchange of concordances (JSKOS), the specification and implementation of web APIs to query concordance databases (JSKOS-API), and a modular web application to enable uniform access to knowledge organization systems, concordances and concordance assessments (Cocoda).
    The focus of the project "coli-conc" lies in the semi-automatic creation of mappings between different KOS in general, and between two important library classification schemes in particular: the Dewey Decimal Classification (DDC) and the Regensburg classification scheme (RVK). In the year 2000, the national libraries of Germany, Austria and Switzerland adopted DDC in an endeavor to develop a nation-wide classification scheme. Historically, however, academic libraries in the German-speaking regions have been using their own home-grown systems, the most prominent and popular being the RVK. With the adoption of DDC, building concordances between DDC and RVK has become an imperative, although such concordances are still rare. The delay in building comprehensive concordances between these two systems is due to the major challenges posed by their sheer size (38,000 classes in DDC and ca. 860,000 classes in RVK), the strong disparity in their respective structures, and the variation in how concepts are perceived and represented. The challenge is compounded geometrically for any manual attempt in this direction. Although there have been efforts on automatic mappings in recent years (OAEI Library Track 2012-2014 and e.g. Pfeffer 2013), such concordances carry the risk of inaccurate mappings, and the approaches are more suitable for mapping suggestions than for the automatic generation of concordances (Lauser 2008; Reiner 2010). The project "coli-conc" will facilitate the creation, evaluation, and reuse of mappings with a public collection of concordances and a web application for mapping management. The proposed presentation will give an introduction to the tools and standards created and planned in the project "coli-conc". This includes preliminary work on DDC concordances (Balakrishnan 2013), an overview of the software concept and technical architecture (Voß 2015), and a demonstration of the Cocoda web application.
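    To make the JSKOS exchange format more concrete, here is a sketch of what a single DDC-to-RVK concordance entry might look like, serialized from Python. The field names follow our reading of the JSKOS specification (https://gbv.github.io/jskos/), and the URIs and notations are illustrative only; both should be checked against the current spec and the actual schemes:

      import json

      # Illustrative JSKOS-style mapping: one DDC class mapped to one RVK class.
      # URIs and notations are examples only, not verified concordance data.
      mapping = {
          "from": {"memberSet": [{
              "uri": "http://dewey.info/class/025.4/e23/",
              "notation": ["025.4"],
              "inScheme": [{"uri": "http://dewey.info/scheme/edition/e23/"}],
          }]},
          "to": {"memberSet": [{
              "uri": "http://rvk.uni-regensburg.de/nt/AN_93550",
              "notation": ["AN 93550"],
              "inScheme": [{"uri": "http://uri.gbv.de/terminology/rvk/"}],
          }]},
          "type": ["http://www.w3.org/2004/02/skos/core#closeMatch"],
      }

      print(json.dumps(mapping, indent=2))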
  4. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.00
    0.004999443 = product of:
      0.019997772 = sum of:
        0.012968281 = weight(_text_:retrieval in 3109) [ClassicSimilarity], result of:
          0.012968281 = score(doc=3109,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.13368362 = fieldWeight in 3109, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=3109)
        0.0070294905 = product of:
          0.014058981 = sum of:
            0.014058981 = weight(_text_:system in 3109) [ClassicSimilarity], result of:
              0.014058981 = score(doc=3109,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.13919188 = fieldWeight in 3109, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3109)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for the global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing imaged navigation in OPACs brings multiple advantages derived from rethinking the OPAC anew, since we are looking forward to sharing concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. iOPAC embodies efforts focused on conceptual levels, as expected from librarians. Imaged interfaces are more intuitive since users do not need specific training for information retrieval; they offer easier comprehension of indexing codes, larger conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, positively affecting social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. This methodology raises questions about the paradigms of the primacy of orality in information systems and paves the way to a legitimacy of multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinarity in neurosciences, linguistics and information sciences would be desirable competencies for further investigations into the nature of cognitive processes in information organization and classification, while developing assistive KOS for individuals with communication problems, such as autism and deafness.
  5. Wicaksana, I.W.S.; Wahyudi, B.: Comparison Latent Semantic and WordNet approach for semantic similarity calculation (2011) 0.00
    0.004547893 = product of:
      0.036383145 = sum of:
        0.036383145 = product of:
          0.07276629 = sum of:
            0.07276629 = weight(_text_:etc in 689) [ClassicSimilarity], result of:
              0.07276629 = score(doc=689,freq=2.0), product of:
                0.17370372 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.032069415 = queryNorm
                0.41891038 = fieldWeight in 689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=689)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    Information exchange among the many sources on the Internet is increasingly autonomous, dynamic and free. This situation gives rise to differing views of concepts among sources. For example, the word 'bank' means an economic institution in the economics domain, but in the ecology domain it is defined as the slope of a river or lake. In this paper, we evaluate latent semantic and WordNet approaches to calculating semantic similarity. The evaluation is run on concepts from different domains, with expert or human judgments as reference. The results of the evaluation can contribute to concept mapping, query rewriting, interoperability, etc.
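    A minimal sketch of the WordNet side of such a comparison, using NLTK's WordNet interface on the abstract's own 'bank' example (the latent semantic side would additionally require a trained vector space model, which is omitted here):

      # pip install nltk; then: python -c "import nltk; nltk.download('wordnet')"
      from nltk.corpus import wordnet as wn

      # The two readings of 'bank' from the abstract's example:
      river_bank = wn.synset("bank.n.01")   # sloping land beside a body of water
      money_bank = wn.synset("bank.n.02")   # financial institution

      # Wu-Palmer similarity: depth of the common ancestor relative to the synsets
      print(river_bank.wup_similarity(money_bank))   # low value: distinct senses

      # Compare with two genuinely related concepts:
      print(wn.synset("lake.n.01").wup_similarity(wn.synset("river.n.01")))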
  6. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.00
    0.0038018425 = product of:
      0.03041474 = sum of:
        0.03041474 = product of:
          0.06082948 = sum of:
            0.06082948 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.06082948 = score(doc=8365,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 6.2015 16:08:38
  7. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    0.0031246517 = product of:
      0.012498607 = sum of:
        0.008105176 = weight(_text_:retrieval in 4232) [ClassicSimilarity], result of:
          0.008105176 = score(doc=4232,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.08355226 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.0043934314 = product of:
          0.008786863 = sum of:
            0.008786863 = weight(_text_:system in 4232) [ClassicSimilarity], result of:
              0.008786863 = score(doc=4232,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.08699492 = fieldWeight in 4232, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4232)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than finding out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval. Exploratory search goes beyond such 'looking up': users seek more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought in relation with each other at will. This leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data, allows narrowing down to the desired level of detail, and then broadens out again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how the visualizations fit in the workflow and to which degree their features seemed useful for the exploration of linked data.
    When we speak about finding relationships between resources, it is necessary to dive deeper into the graph structure. The graph structure of linked data, where the semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are base components of such algorithms and ultimately define which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths, where the coherence of consecutive connections is maximized to avoid trivial and overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference when it comes to heuristically optimized minimal-cost paths. The effectiveness of paths was measured with common automatic metrics and with surveys in which users could indicate their preference among paths generated each time in a different way. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. The application to this use case is a practical example because the different aspects of exploratory search come together there. In fact, the techniques also evolved from the experiences gained while implementing the use case. Practical details about the semantic model are explained, and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype of a tool to explore scientific publications, researchers and conferences, next to some important alternatives.
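    A minimal sketch of heuristically optimized pathfinding over a linked-data-like graph with A* (a generic textbook implementation with a toy graph and a trivial heuristic; the thesis' actual edge weights and serendipity optimizations are not reproduced here):

      import heapq

      def a_star(graph, start, goal, heuristic):
          """graph: {node: [(neighbor, edge_cost), ...]}; returns cheapest path."""
          frontier = [(heuristic(start), 0.0, start, [start])]
          seen = set()
          while frontier:
              _, cost, node, path = heapq.heappop(frontier)
              if node == goal:
                  return path, cost
              if node in seen:
                  continue
              seen.add(node)
              for neighbor, edge_cost in graph.get(node, []):
                  if neighbor not in seen:
                      new_cost = cost + edge_cost
                      heapq.heappush(frontier,
                                     (new_cost + heuristic(neighbor),
                                      new_cost, neighbor, [*path, neighbor]))
          return None, float("inf")

      # Toy "linked data" graph: nodes are resources, edges weighted relations.
      graph = {
          "paper:A": [("author:X", 1.0), ("conf:ISWC", 1.5)],
          "author:X": [("paper:B", 1.0)],
          "conf:ISWC": [("paper:B", 1.0), ("paper:C", 2.0)],
          "paper:B": [("paper:C", 0.5)],
      }

      path, cost = a_star(graph, "paper:A", "paper:C", heuristic=lambda n: 0.0)
      print(path, cost)   # explains the indirect connection between A and C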
  8. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.00
    0.0023042648 = product of:
      0.018434118 = sum of:
        0.018434118 = product of:
          0.036868237 = sum of:
            0.036868237 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.036868237 = score(doc=1967,freq=4.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine whether classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (of the same type or of different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) the Chinese Library Classification. Use cases of the conceptual models in practice are also discussed.
  9. Haffner, A.: Internationalisierung der GND durch das Semantic Web (2012) 0.00
    0.0022739465 = product of:
      0.018191572 = sum of:
        0.018191572 = product of:
          0.036383145 = sum of:
            0.036383145 = weight(_text_:etc in 318) [ClassicSimilarity], result of:
              0.036383145 = score(doc=318,freq=2.0), product of:
                0.17370372 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20945519 = fieldWeight in 318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=318)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    Since April 2012, the Gemeinsame Normdatei (GND) has been the file containing the authority data used in German-speaking librarianship. Consequently, a representation of these data for publication as Linked Data in the Semantic Web must be established. Besides the actual provision of GND data in the Semantic Web, the data are to be linked with datasets already available as Linked Data (DBpedia, VIAF, etc.) and, where possible, kept compatible with them, thereby making the GND accessible to an international audience across all sectors. This document primarily describes how the GND Linked Data representation came about, and the path towards the specification of a dedicated ontology. To this end, after a short introduction to the GND, the basic principles and most important standards for publishing Linked Data in the Semantic Web are presented, so that existing vocabularies and ontologies of the library domain can then be examined. This is followed by an excursus on the general procedure for providing Linked Data, in which the oft-cited Open World Assumption is critically questioned and the problems associated with it, particularly with regard to interoperability and reusability, are exposed. To avoid interoperability problems, the recommendations of the Library Linked Data Incubator Group [LLD11] are followed.
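    A minimal sketch of the kind of Linked Data representation discussed here, built with rdflib; the GND ontology property name and the cross-dataset identifiers are illustrative assumptions, not taken from the actual GND ontology specification, and should be verified:

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import OWL

      # GND entities live under d-nb.info/gnd/; the ontology namespace and the
      # property name below are illustrative assumptions.
      GNDO = Namespace("https://d-nb.info/standards/elementset/gnd#")

      g = Graph()
      g.bind("gndo", GNDO)
      g.bind("owl", OWL)

      goethe = URIRef("https://d-nb.info/gnd/118540238")
      g.add((goethe, GNDO.preferredNameForThePerson,
             Literal("Goethe, Johann Wolfgang von")))
      # Interlinking with datasets already available as Linked Data
      # (identifiers shown are examples and should be verified):
      g.add((goethe, OWL.sameAs,
             URIRef("http://dbpedia.org/resource/Johann_Wolfgang_von_Goethe")))
      g.add((goethe, OWL.sameAs, URIRef("http://viaf.org/viaf/24602065")))

      print(g.serialize(format="turtle"))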
  10. Suchowolec, K.; Lang, C.; Schneider, R.: Re-designing online terminology resources for German grammar (2016) 0.00
    0.002026294 = product of:
      0.016210351 = sum of:
        0.016210351 = weight(_text_:retrieval in 3108) [ClassicSimilarity], result of:
          0.016210351 = score(doc=3108,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.16710453 = fieldWeight in 3108, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3108)
      0.125 = coord(1/8)
    
    Abstract
    The compilation of terminological vocabularies plays a central role in the organization and retrieval of scientific texts. Both simple keyword lists and sophisticated models of the relationships between terminological concepts can make a most valuable contribution to the analysis, classification, and discovery of appropriate digital documents, either on the Web or within local repositories. This seems especially true for long-established scientific fields with various theoretical and historical branches, such as linguistics, where the use of terminology within documents from different origins is sometimes far from consistent. In this short paper, we report on the early stages of a project that aims at the re-design of grammis, an existing domain-specific KOS for grammatical content. In particular, we deal with the terminological part of grammis and present the state of the art of this online resource as well as the key re-design principles. Further, we raise questions regarding the ramifications of the Linked Open Data and Semantic Web approaches for our re-design decisions.
  11. Stahn, L.-L.: Ontology matching for archaeological KOS (2015) 0.00
    0.0017573726 = product of:
      0.014058981 = sum of:
        0.014058981 = product of:
          0.028117962 = sum of:
            0.028117962 = weight(_text_:system in 4964) [ClassicSimilarity], result of:
              0.028117962 = score(doc=4964,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27838376 = fieldWeight in 4964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4964)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    The work outlined in this paper (based on a Bachelor's thesis) is aimed at implementing multiple archaeological KOS in the Simple Knowledge Organization System (SKOS) and creating automated alignments between them. The LOD implementation and alignment permit interoperability and a more widespread use of archaeological KOS. This feasibility study allows statements regarding the automated extension of knowledge organization at the German Archaeological Institute (DAI).
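    A minimal sketch of the simplest form of automated alignment, string-based label matching between two vocabularies (real matchers combine several such measures with structural evidence; the labels and threshold here are invented for illustration):

      from difflib import SequenceMatcher

      # Toy label sets for two archaeological vocabularies (invented examples).
      kos_a = {"a:1": "amphora", "a:2": "burial mound", "a:3": "fibula"}
      kos_b = {"b:1": "amphorae", "b:2": "grave mound", "b:3": "brooch (fibula)"}

      def align(src, tgt, threshold=0.7):
          """Propose candidate matches by normalized string similarity."""
          for sid, slabel in src.items():
              tid, tlabel = max(
                  tgt.items(),
                  key=lambda t: SequenceMatcher(None, slabel, t[1]).ratio())
              score = SequenceMatcher(None, slabel, tlabel).ratio()
              if score >= threshold:
                  yield sid, tid, round(score, 2)

      for match in align(kos_a, kos_b):
          print(match)   # e.g. ('a:1', 'b:1', 0.93) - a candidate, not a decision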
  12. Mitchell, J.S.; Panzer, M.: Dewey linked data : Making connections with old friends and new acquaintances (2012) 0.00
    0.0015533128 = product of:
      0.012426502 = sum of:
        0.012426502 = product of:
          0.024853004 = sum of:
            0.024853004 = weight(_text_:system in 305) [ClassicSimilarity], result of:
              0.024853004 = score(doc=305,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.24605882 = fieldWeight in 305, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=305)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    This paper explores the history, use cases, and future plans associated with the availability of the Dewey Decimal Classification (DDC) system as linked data. Parts of the DDC have been available as linked data since 2009. Initial efforts included the DDC Summaries (the top three levels of the DDC) in eleven languages exposed as linked data in dewey.info. In 2010, the content of dewey.info was further extended by the addition of assignable numbers and captions from the Abridged Edition 14 data files in English, Italian, and Vietnamese. During 2012, we will add assignable numbers and captions from the latest full edition database, DDC 23. In addition to the "old friends" of different Dewey language versions, institutions such as the British Library and the Deutsche Nationalbibliothek have made use of Dewey linked data in bibliographic records and authority files, and AGROVOC has linked to our data at a general level. We expect to extend our linked data network shortly to "new acquaintances" such as GeoNames, ISO 639-3 language codes, and the Mathematics Subject Classification. In particular, we will examine the linking process to GeoNames as an example of cross-domain vocabulary alignment. In addition to linking plans, we report on use cases that facilitate machine-assisted categorization and support discovery in the Semantic Web environment.
  13. Kempf, A.O.; Neubert, J.; Faden, M.: ¬The missing link : a vocabulary mapping effort in economics (2015) 0.00
    0.0015533128 = product of:
      0.012426502 = sum of:
        0.012426502 = product of:
          0.024853004 = sum of:
            0.024853004 = weight(_text_:system in 2251) [ClassicSimilarity], result of:
              0.024853004 = score(doc=2251,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.24605882 = fieldWeight in 2251, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2251)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    In economics there exists an internationally established classification system: research literature is usually classified according to the JEL classification codes, a classification system originated by the Journal of Economic Literature and published by the American Economic Association (AEA). Complementarily to keywords, which are usually assigned freely, economists widely use the JEL codes when classifying their publications. In cooperation with KU Leuven, ZBW - Leibniz Information Centre for Economics has published an unofficial multilingual version of JEL in SKOS format. In addition, there exists the STW Thesaurus for Economics, a bilingual domain-specific controlled vocabulary maintained by the German National Library of Economics (ZBW). Developed in the mid-1990s and constantly updated since then according to current terminology usage in the latest international research literature in economics, it covers all sub-fields of economics as well as business economics and business practice, containing subject headings that are clearly delimited from each other. It has been published on the web as Linked Open Data in 2009.
  14. Ledl, A.: Demonstration of the BAsel Register of Thesauri, Ontologies & Classifications (BARTOC) (2015) 0.00
    0.0013180295 = product of:
      0.010544236 = sum of:
        0.010544236 = product of:
          0.021088472 = sum of:
            0.021088472 = weight(_text_:system in 2038) [ClassicSimilarity], result of:
              0.021088472 = score(doc=2038,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20878783 = fieldWeight in 2038, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2038)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    The BAsel Register of Thesauri, Ontologies & Classifications (BARTOC, http://bartoc.org) is a bibliographic database aiming to record metadata of as many Knowledge Organization Systems as possible. It has a faceted, responsive web-design search interface in 20 EU languages. With more than 1,300 interdisciplinary items in 77 languages, BARTOC is the largest database of its kind, multilingual both by content and features, and it is still growing. This being said, the demonstration of BARTOC would be suitable for topic no. 10 [Multilingual and Interdisciplinary KOS applications and tools]. BARTOC has been developed by the University Library of Basel, Switzerland. It is rooted in the library and information science tradition of collecting bibliographic records of controlled and structured vocabularies, yet in a more contemporary manner. BARTOC is based on the open source content management system Drupal 7.
  15. Mayr, P.; Zapilko, B.; Sure, Y.: ¬Ein Mehr-Thesauri-Szenario auf Basis von SKOS und Crosskonkordanzen (2010) 0.00
    0.0013180295 = product of:
      0.010544236 = sum of:
        0.010544236 = product of:
          0.021088472 = sum of:
            0.021088472 = weight(_text_:system in 3392) [ClassicSimilarity], result of:
              0.021088472 = score(doc=3392,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20878783 = fieldWeight in 3392, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3392)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    In August 2009, SKOS ("Simple Knowledge Organization System") was published by the W3C as a new standard for web-based controlled vocabularies. SKOS serves as a data model for offering controlled vocabularies over the Web and for making them technically and semantically interoperable. In the longer term, the heterogeneous landscape of indexing vocabularies can be unified via SKOS and, above all, the contents of classical databases (in the specialist information sector) can be made accessible to Semantic Web applications, for example as Linked Open Data (LOD), and more strongly interlinked with each other. Vocabularies in SKOS format can play a relevant role here by serving as a standardized bridge vocabulary and establishing semantic links between indexed, published data. The following case study sketches a scenario with three thematically related thesauri, which are converted into SKOS format and connected at the content level via cross-concordances from the KoMoHe project. The SKOS mapping properties provide standardized relations for this purpose, corresponding to those of the cross-concordances. The thesauri involved in the case study are a) TheSoz (Thesaurus for the Social Sciences, GESIS), b) STW (Standard Thesaurus for Economics, ZBW), and c) the IBLK-Thesaurus (SWP).
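    A minimal sketch of how a cross-concordance relation between two of these thesauri can be expressed with the SKOS mapping properties, using rdflib (the concept URIs are illustrative placeholders, not actual TheSoz or STW identifiers):

      from rdflib import Graph, Namespace, URIRef
      from rdflib.namespace import SKOS

      g = Graph()
      g.bind("skos", SKOS)

      # Placeholder concept URIs standing in for a TheSoz and an STW descriptor.
      thesoz_concept = URIRef("http://lod.gesis.org/thesoz/concept/12345")
      stw_concept = URIRef("http://zbw.eu/stw/descriptor/67890")

      # A cross-concordance "equivalence" relation maps naturally onto
      # skos:exactMatch; narrower/broader relations map onto skos:narrowMatch
      # and skos:broadMatch.
      g.add((thesoz_concept, SKOS.exactMatch, stw_concept))

      print(g.serialize(format="turtle"))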
  16. EDUG's recommendations for best practice in mapping involving Dewey Decimal Classification (DDC) (2015) 0.00
    0.0013180295 = product of:
      0.010544236 = sum of:
        0.010544236 = product of:
          0.021088472 = sum of:
            0.021088472 = weight(_text_:system in 2113) [ClassicSimilarity], result of:
              0.021088472 = score(doc=2113,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20878783 = fieldWeight in 2113, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2113)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    For some years mapping has been one of the main tasks in the EDUG member countries. While the ISO standard on mapping and interoperability with other vocabularies (ISO 25964-2) gives some advice on creating mappings between a thesaurus and, e.g., a classification system, it does not deal with the Dewey Decimal Classification specifically. The EDUG members have felt a growing need to discuss and record the knowledge acquired in mapping projects where either the source or the target vocabulary is DDC. The recommendations below are the result of a seminar on mapping held in connection with the EDUG annual meeting in April 2015. The recommendations are not exhaustive and will be subject to change as EDUG members gain more experience in this field of work. We nevertheless hope that institutions planning to embark on a mapping project to or from DDC may find the guidelines helpful.
  17. Busch, D.: Organisation eines Thesaurus für die Unterstützung der mehrsprachigen Suche in einer bibliographischen Datenbank im Bereich Planen und Bauen (2016) 0.00
    0.0010983578 = product of:
      0.008786863 = sum of:
        0.008786863 = product of:
          0.017573725 = sum of:
            0.017573725 = weight(_text_:system in 3308) [ClassicSimilarity], result of:
              0.017573725 = score(doc=3308,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.17398985 = fieldWeight in 3308, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3308)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    The problem of multilingual search has been gaining importance recently, since much useful specialist information around the world is published in different languages. RSWBPlus is a bibliographic database covering the literature in the field of planning and building, containing German- and English-language metadata entries. Until recently it was difficult to find entries whose language differed from the language of the query: German-language queries, for example, returned only German-language entries, even though the database also contained potentially useful English-language entries. To solve this problem, after a survey of existing approaches, RSWBPlus was extended to support multilingual (cross-lingual) search with the help of a bilingual concept-based thesaurus. The thesaurus was built automatically from existing thesauri: the entries of the source thesauri were converted into SKOS format (Simple Knowledge Organisation System), automatically merged, and finally loaded into a target thesaurus, which is likewise maintained in SKOS. Apache Jena and MS SQL Server are used to access the target thesaurus. During multilingual search, the query terms are expanded with the corresponding translations and synonyms in German and English. The expansion of the search terms can take place both at runtime and semi-automatically. The improved retrieval system can help German-speaking users in particular to find relevant English-language entries. The use of SKOS increases the interoperability of the thesauri and simplifies both the construction of the target thesaurus and the access to its entries.
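    A minimal sketch of the runtime query expansion step described above; the thesaurus is represented here as a plain in-memory structure with invented entries, whereas the real system reads the SKOS target thesaurus via Apache Jena:

      # Toy bilingual concept-based thesaurus: each concept groups German and
      # English labels plus synonyms (entries invented for illustration).
      THESAURUS = [
          {"de": ["Beton"], "en": ["concrete"]},
          {"de": ["Wärmedämmung", "Wärmeisolierung"],
           "en": ["thermal insulation", "heat insulation"]},
      ]

      def expand(term):
          """Expand a query term with translations and synonyms."""
          term_lower = term.lower()
          expanded = {term}
          for concept in THESAURUS:
              labels = concept["de"] + concept["en"]
              if any(term_lower == label.lower() for label in labels):
                  expanded.update(labels)
          return sorted(expanded)

      print(expand("Wärmedämmung"))
      # ['Wärmedämmung', 'Wärmeisolierung', 'heat insulation', 'thermal insulation']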
  18. Slavic, A.: Mapping intricacies : UDC to DDC (2010) 0.00
    0.001013147 = product of:
      0.008105176 = sum of:
        0.008105176 = weight(_text_:retrieval in 3370) [ClassicSimilarity], result of:
          0.008105176 = score(doc=3370,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.08355226 = fieldWeight in 3370, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3370)
      0.125 = coord(1/8)
    
    Content
    Another challenge appears when, e.g., mapping Dewey class 890 Literatures of other specific languages and language families, which does not make sense in UDC, in which all languages and literatures have equal status. Standard UDC schedules do not distinguish between a selection of preferred literatures and other literatures. In principle, UDC does not allow classes entitled 'others' which do not have defined semantic content. If entities are subdivided and there is no provision for an item outside the listed subclasses, then this item is subsumed under a top class or a broader class where all unspecified or general members of that class may be expected. If specification is needed, this can be achieved by adding an alphabetical extension to the broader class. Here we have to find and list in the UDC Summary all literatures that are 'unpreferred', i.e. lumped in the 890 classes, and map them again as a many-to-one specific-to-broader match. The example below illustrates another interesting case. Classes Dewey 061 and UDC 06 cover roughly the same semantic field, but in the subdivision the Dewey Summaries list a combination of subject and place and, as an enumerative classification, provide ready-made numbers for combinations of place that are most common in an average (American?) library. This is a frequent approach in schemes created with the physical book arrangement, i.e. library shelves, in mind. UDC, designed as an indexing language for information retrieval, keeps subject and place in separate tables and allows any concept of place, such as, e.g., (7) North America, to be used in combination with any subject, as these may coincide in documents. Thus combinations such as Newspapers in North America or Organizations in North America would not be offered as ready-made combinations. There is no selection of 'preferred' or 'most needed' countries or languages or cultures in the standard UDC edition: <Table>
  19. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.00
    8.786863E-4 = product of:
      0.0070294905 = sum of:
        0.0070294905 = product of:
          0.014058981 = sum of:
            0.014058981 = weight(_text_:system in 3965) [ClassicSimilarity], result of:
              0.014058981 = score(doc=3965,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.13919188 = fieldWeight in 3965, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3965)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    This paper describes recent initiatives to make standard library metadata models and structures available to the Semantic Web, including IFLA standards such as Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and International Standard Bibliographic Description (ISBD), along with the infrastructure that supports them. The FRBR Review Group is currently developing representations of FRAD and the entity-relationship model of FRBR in resource description framework (RDF) applications, using a combination of RDF, RDF Schema (RDFS), Simple Knowledge Organisation System (SKOS) and Web Ontology Language (OWL), cross-relating both models where appropriate. The ISBD/XML Task Group is investigating the representation of ISBD in RDF. The IFLA Namespaces project is developing an administrative and technical infrastructure to support such initiatives and encourage uptake of standards by other agencies. The paper describes similar initiatives with related external standards such as RDA (Resource Description and Access), REICAT (the new Italian cataloguing rules) and the CIDOC Conceptual Reference Model (CRM). The DCMI RDA Task Group is working with the Joint Steering Committee for RDA to develop Semantic Web representations of RDA structural elements, which are aligned with FRBR and FRAD, and controlled metadata content vocabularies. REICAT is also based on FRBR, and an object-oriented version of FRBR has been integrated with CRM, which itself has an RDF representation. CRM was initially based on the metadata needs of the museum community, and is now seeking extension to the archives community, with the eventual aim of developing a model common to the main cultural information domains of archives, libraries and museums. The Vocabulary Mapping Framework (VMF) project has developed a Semantic Web tool to automatically generate mappings between the metadata models of these information communities, including publishers. The tool is based on several standards, including CRM, FRAD, FRBR, MARC21 and RDA.