Search (17 results, page 1 of 1)

  • Filter: year_i:[2010 TO 2020}
  • Filter: type_ss:"r"
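The two facets above are Solr-style filter queries (the field suffixes _i and _ss and the range syntax point to Solr). Below is a hypothetical reconstruction of the underlying request; the endpoint, core name, row count and main query string are assumptions, since they are not part of this listing, and only the two fq values come from the facet bar.

```python
import requests

# Hypothetical reconstruction of the Solr request behind this result list. The endpoint,
# core name ("catalog") and the query string are placeholders; only the two filter
# queries (fq) are taken from the facet bar above.
params = {
    "q": "...",                                       # actual query terms are not shown in this listing
    "fq": ['year_i:[2010 TO 2020}', 'type_ss:"r"'],   # '[' = inclusive lower bound, '}' = exclusive upper bound
    "rows": 20,
}
response = requests.get("http://localhost:8983/solr/catalog/select", params=params)
print(response.json()["response"]["numFound"])
```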
  1. Deokattey, S.; Sharma, S.B.K.; Kumar, G.R.; Bhanumurthy, K.: Knowledge organization research : an overview (2015) 0.02
    0.021157425 = product of:
      0.06347227 = sum of:
        0.06347227 = sum of:
          0.021921717 = weight(_text_:of in 2092) [ClassicSimilarity], result of:
            0.021921717 = score(doc=2092,freq=14.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.31997898 = fieldWeight in 2092, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2092)
          0.041550554 = weight(_text_:22 in 2092) [ClassicSimilarity], result of:
            0.041550554 = score(doc=2092,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.2708308 = fieldWeight in 2092, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2092)
      0.33333334 = coord(1/3)
    
    Abstract
     The object of this literature review is to provide a historical perspective on R&D work in the area of Knowledge Organization (KO). The overview summarizes the major areas of KO. It predominantly covers journal articles published in the core areas of KO (classification; indexing; thesauri and taxonomies; the Internet and the subject approach to information in the electronic era; and ontologies). Coverage may not be completely exhaustive, but it succinctly showcases major developments in the field. The review also serves as a source of additional reading material on KO beyond the prescribed literature.
    Date
    22. 6.2015 16:13:38
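The indented breakdowns under each entry are Lucene ClassicSimilarity explain output: each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), the per-term contributions are summed, and a coord factor scales the result. As a sanity check, the sketch below recomputes the first entry's score from the constants shown in its breakdown; nothing here is taken from outside the explain tree.

```python
import math

# Recompute the breakdown shown for result 1 (doc 2092). All constants are copied
# from the explain output above; ClassicSimilarity uses tf = sqrt(termFreq).

QUERY_NORM = 0.043811057

def term_score(freq, idf, field_norm):
    query_weight = idf * QUERY_NORM                     # "queryWeight, product of: idf, queryNorm"
    field_weight = math.sqrt(freq) * idf * field_norm   # "fieldWeight ... product of: tf, idf, fieldNorm"
    return query_weight * field_weight

score_of = term_score(freq=14.0, idf=1.5637573, field_norm=0.0546875)  # ~0.021921717
score_22 = term_score(freq=2.0,  idf=3.5018296, field_norm=0.0546875)  # ~0.041550554

# coord(1/3): one of three top-level query clauses matched this document.
total = (score_of + score_22) * (1.0 / 3.0)
print(f"{total:.9f}")  # ~0.021157425, i.e. the 0.02 shown next to the title
```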
  2. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.01
    0.009892989 = product of:
      0.029678967 = sum of:
        0.029678967 = product of:
          0.059357934 = sum of:
            0.059357934 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.059357934 = score(doc=5576,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13.12.2017 14:17:22
  3. Tober, M.; Hennig, L.; Furch, D.: SEO Ranking-Faktoren und Rang-Korrelationen 2014 : Google Deutschland (2014) 0.01
    0.007914391 = product of:
      0.023743173 = sum of:
        0.023743173 = product of:
          0.047486346 = sum of:
            0.047486346 = weight(_text_:22 in 1484) [ClassicSimilarity], result of:
              0.047486346 = score(doc=1484,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.30952093 = fieldWeight in 1484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1484)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 9.2014 14:45:22
  4. Positionspapier zur Weiterentwicklung der Bibliotheksverbünde als Teil einer überregionalen Informationsinfrastruktur (2011) 0.00
    0.0049464945 = product of:
      0.014839483 = sum of:
        0.014839483 = product of:
          0.029678967 = sum of:
            0.029678967 = weight(_text_:22 in 4291) [ClassicSimilarity], result of:
              0.029678967 = score(doc=4291,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.19345059 = fieldWeight in 4291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4291)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    7. 2.2011 19:52:22
  5. Förderung von Informationsinfrastrukturen für die Wissenschaft : Ein Positionspapier der Deutschen Forschungsgemeinschaft (2018) 0.00
    0.0049464945 = product of:
      0.014839483 = sum of:
        0.014839483 = product of:
          0.029678967 = sum of:
            0.029678967 = weight(_text_:22 in 4178) [ClassicSimilarity], result of:
              0.029678967 = score(doc=4178,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.19345059 = fieldWeight in 4178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4178)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2018 17:30:43
  6. Wehling, E.: Framing-Manual : Unser gemeinsamer freier Rundfunk ARD (2019) 0.00
    0.0049464945 = product of:
      0.014839483 = sum of:
        0.014839483 = product of:
          0.029678967 = sum of:
            0.029678967 = weight(_text_:22 in 4997) [ClassicSimilarity], result of:
              0.029678967 = score(doc=4997,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.19345059 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.2019 9:26:20
  7. Eckert, K.: The ICE-map visualization (2011) 0.00
    0.004175565 = product of:
      0.012526695 = sum of:
        0.012526695 = product of:
          0.02505339 = sum of:
            0.02505339 = weight(_text_:of in 4743) [ClassicSimilarity], result of:
              0.02505339 = score(doc=4743,freq=14.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.36569026 = fieldWeight in 4743, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4743)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     In this paper, we describe in detail the Information Content Evaluation Map (ICE-Map Visualization, formerly referred to as IC Difference Analysis). The ICE-Map Visualization is a visual data mining approach for all kinds of concept hierarchies that uses statistics about concept usage to help a user in the evaluation and maintenance of the hierarchy. It consists of a statistical framework that employs the notion of information content from information theory, as well as a visualization of the hierarchy and of the result of the statistical analysis by means of a treemap.
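The "information content" the abstract refers to is the standard information-theoretic quantity IC(c) = -log p(c). As a rough illustration only (the toy hierarchy and usage counts below are invented, and the ICE-Map paper's actual statistical framework is not reproduced here), the sketch shows how such values can be derived from concept-usage statistics aggregated down a hierarchy.

```python
import math

# Illustrative only: information content over a toy concept hierarchy with
# made-up usage counts, IC(c) = -log p(c), where p(c) aggregates the usage of
# c and all of its descendants.

children = {
    "Top": ["Science", "Arts"],
    "Science": ["Physics", "Biology"],
    "Arts": [],
    "Physics": [],
    "Biology": [],
}
usage = {"Top": 0, "Science": 5, "Arts": 10, "Physics": 30, "Biology": 15}  # hypothetical counts

def aggregated_usage(concept):
    return usage[concept] + sum(aggregated_usage(child) for child in children[concept])

total = aggregated_usage("Top")
for concept in children:
    p = aggregated_usage(concept) / total
    ic = -math.log(p) if p > 0 else float("inf")
    print(f"{concept:8s} p={p:.3f} IC={ic:.3f}")
```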
  8. Hochschule im digitalen Zeitalter : Informationskompetenz neu begreifen - Prozesse anders steuern (2012) 0.00
    0.0039571957 = product of:
      0.011871587 = sum of:
        0.011871587 = product of:
          0.023743173 = sum of:
            0.023743173 = weight(_text_:22 in 506) [ClassicSimilarity], result of:
              0.023743173 = score(doc=506,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.15476047 = fieldWeight in 506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=506)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    8.12.2012 17:22:26
  9. British Library / FAST/Dewey Review Group: Consultation on subject indexing and classification standards applied by the British Library (2015) 0.00
    0.003743066 = product of:
      0.0112291975 = sum of:
        0.0112291975 = product of:
          0.022458395 = sum of:
            0.022458395 = weight(_text_:of in 2810) [ClassicSimilarity], result of:
              0.022458395 = score(doc=2810,freq=20.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.32781258 = fieldWeight in 2810, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2810)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     A broad-based review of the subject and classification schemes used on British Library records began in late 2014. The review was undertaken in response to a number of drivers, including:
     - an increasing demand on available resources due to the rapidly expanding digital publishing arena and a continuing steady state in print publication patterns;
     - increased demands on metadata to meet changing audience expectations.
    Content
     The Library is consulting with stakeholders concerning the potential impact of these proposals. No firm decisions have yet been taken regarding either of these standards.
     FAST
     1. The British Library proposes to adopt FAST selectively to extend the scope of subject indexing of current and legacy content.
     2. The British Library proposes to implement FAST as a replacement for LCSH in all current cataloguing, subject to mitigation of the risks identified above, in particular the question of sustainability.
     DDC
     3. The British Library proposes to implement Abridged DDC selectively to extend the scope of subject indexing of current and legacy content.
  10. Gradmann, S.: Knowledge = Information in context : on the importance of semantic contextualisation in Europeana (2010) 0.00
    0.003701243 = product of:
      0.011103729 = sum of:
        0.011103729 = product of:
          0.022207458 = sum of:
            0.022207458 = weight(_text_:of in 3475) [ClassicSimilarity], result of:
              0.022207458 = score(doc=3475,freq=44.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.3241498 = fieldWeight in 3475, product of:
                  6.6332498 = tf(freq=44.0), with freq of:
                    44.0 = termFreq=44.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3475)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    "Europeana.eu is about ideas and inspiration. It links you to 6 million digital items." This is the opening statement taken from the Europeana WWW-site (http://www.europeana.eu/portal/aboutus.html), and it clearly is concerned with the mission of Europeana - without, however, being over-explicit as to the precise nature of that mission. Europeana's current logo, too, has a programmatic aspect: the slogan "Think Culture" clearly again is related to Europeana's mission and at same time seems somewhat closer to the point: 'thinking' culture evokes notions like conceptualisation, reasoning, semantics and the like. Still, all this remains fragmentary and insufficient to actually clarify the functional scope and mission of Europeana. In fact, the author of the present contribution is convinced that Europeana has too often been described in terms of sheer quantity, as a high volume aggregation of digital representations of cultural heritage objects without sufficiently stressing the functional aspects of this endeavour. This conviction motivates the present contribution on some of the essential functional aspects of Europeana making clear that such a contribution - even if its author is deeply involved in building Europeana - should not be read as an official statement of the project or of the European Commission (which it is not!) - but as the personal statement from an information science perspective! From this perspective the opening statement is that Europeana is much more than a machine for mechanical accumulation of object representations but that one of its main characteristics should be to enable the generation of knowledge pertaining to cultural artefacts. The rest of the paper is about the implications of this initial statement in terms of information science, on the way we technically prepare to implement the necessary data structures and functionality and on the novel functionality Europeana will offer based on these elements and which go well beyond the 'traditional' digital library paradigm. However, prior to exploring these areas it may be useful to recall the notion of 'knowledge' that forms the basis of this contribution and which in turn is part of the well known continuum reaching from data via information and knowledge to wisdom.
  11. Kaytoue, M.; Kuznetsov, S.O.; Assaghir, Z.; Napoli, A.: Embedding tolerance relations in concept lattices : an application in information fusion (2010) 0.00
    0.0027899165 = product of:
      0.008369749 = sum of:
        0.008369749 = product of:
          0.016739499 = sum of:
            0.016739499 = weight(_text_:of in 4843) [ClassicSimilarity], result of:
              0.016739499 = score(doc=4843,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.24433708 = fieldWeight in 4843, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4843)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     Formal Concept Analysis (FCA) is a well-founded mathematical framework used for conceptual classification and knowledge management. Given a binary table describing a relation between objects and attributes, FCA consists in building a set of concepts organized by a subsumption relation within a concept lattice. Accordingly, FCA requires transforming complex data, e.g. numbers, intervals, graphs, into binary data, leading to loss of information and poor interpretability of object classes. In this paper, we propose a pre-processing method producing binary data from complex data, taking advantage of similarity between objects. As a result, the concept lattice is composed of classes that are maximal sets of pairwise similar objects. This method is based on FCA and on a formalization of similarity as a tolerance relation (reflexive and symmetric). It applies to complex object descriptions and especially here to interval data. Moreover, it can be applied to any kind of structured data for which a similarity can be defined (sequences, graphs, etc.). Finally, an application highlights that the resulting concept lattice plays an important role in information fusion problems, as illustrated with a real-world example in agronomy.
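As a minimal illustration of the basic FCA step the abstract describes (not of the paper's own contribution, which concerns interval data and tolerance relations), the sketch below derives all formal concepts, i.e. maximal (extent, intent) pairs, from a tiny, invented binary object-attribute context.

```python
from itertools import combinations

# A minimal FCA sketch: formal concepts (extent, intent) computed from an invented
# binary context. Each concept pairs a maximal set of objects with the full set of
# attributes they share.

context = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}
attributes = {"a", "b", "c"}

def common_objects(attrs):
    """Objects that have every attribute in attrs (derivation operator on attribute sets)."""
    return {o for o, has in context.items() if attrs <= has}

def common_attributes(objs):
    """Attributes shared by every object in objs (derivation operator on object sets)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

concepts = set()
for r in range(len(attributes) + 1):
    for subset in combinations(sorted(attributes), r):
        ext = common_objects(set(subset))
        concepts.add((frozenset(ext), frozenset(common_attributes(ext))))

for ext, inten in sorted(concepts, key=lambda c: (-len(c[0]), sorted(c[1]))):
    print(sorted(ext), "<->", sorted(inten))
```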
  12. Horridge, M.; Brandt, S.: ¬A practical guide to building OWL ontologies using Protégé 4 and CO-ODE Tools (2011) 0.00
    0.0027335489 = product of:
      0.008200646 = sum of:
        0.008200646 = product of:
          0.016401293 = sum of:
            0.016401293 = weight(_text_:of in 4938) [ClassicSimilarity], result of:
              0.016401293 = score(doc=4938,freq=6.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.23940048 = fieldWeight in 4938, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4938)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This guide introduces Protégé 4 for creating OWL ontologies. Chapter 3 gives a brief overview of the OWL ontology language. Chapter 4 focuses on building an OWL-DL ontology and using a Description Logic Reasoner to check the consistency of the ontology and automatically compute the ontology class hierarchy. Chapter 7 describes some OWL constructs such as hasValue Restrictions and Enumerated classes, which aren't directly used in the main tutorial.
    Imprint
    Manchester : University of Manchester
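The guide itself works in the Protégé GUI, but constructs it mentions, such as a hasValue restriction, can also be written directly as RDF triples. The sketch below does this with rdflib; the class and property names are assumptions in the style of the pizza example the tutorial is built around, not content taken from the guide.

```python
from rdflib import Graph, BNode, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical sketch: a hasValue restriction expressed as plain RDF triples.
EX = Namespace("http://example.org/pizza#")  # invented namespace
g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

g.add((EX.MozzarellaTopping, RDF.type, OWL.Class))
g.add((EX.hasCountryOfOrigin, RDF.type, OWL.ObjectProperty))

# hasValue restriction: the class of things whose country of origin is Italy.
restriction = BNode()
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, EX.hasCountryOfOrigin))
g.add((restriction, OWL.hasValue, EX.Italy))

# A DL reasoner (used in the guide to check consistency and compute the class
# hierarchy) would classify members of this subclass accordingly.
g.add((EX.MozzarellaTopping, RDFS.subClassOf, restriction))

print(g.serialize(format="turtle"))
```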
  13. Breeding, M.: Library systems report 2019 : cycles of innovation (2019) 0.00
    0.0027335489 = product of:
      0.008200646 = sum of:
        0.008200646 = product of:
          0.016401293 = sum of:
            0.016401293 = weight(_text_:of in 5988) [ClassicSimilarity], result of:
              0.016401293 = score(doc=5988,freq=6.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.23940048 = fieldWeight in 5988, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5988)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The library technology industry, broadly speaking, shows more affinity toward utility than innovation. Library automation systems are not necessarily exciting technologies, but they are workhorse applications that must support the complex tasks of acquiring, describing, and providing access to materials and services. They represent substantial investments, and their effectiveness is tested daily in the library. But more than efficiency is at stake: These products must be aligned with the priorities of the library relative to collection management, service provision, and other functions.
  14. Knowledge graphs : new directions for knowledge representation on the Semantic Web (2019) 0.00
    0.002609728 = product of:
      0.007829184 = sum of:
        0.007829184 = product of:
          0.015658367 = sum of:
            0.015658367 = weight(_text_:of in 51) [ClassicSimilarity], result of:
              0.015658367 = score(doc=51,freq=14.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.22855641 = fieldWeight in 51, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=51)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence, calls for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped, concept of "Knowledge Graphs" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provide a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises - while often inspired by - limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: What are knowledge graphs? Which applications do we see emerging? Which open research questions still need to be addressed, and which technology gaps still need to be closed?
  15. Riva, P.; Boeuf, P. le; Zumer, M.: IFLA Library Reference Model : a conceptual model for bibliographic information (2017) 0.00
    0.0023918552 = product of:
      0.0071755657 = sum of:
        0.0071755657 = product of:
          0.014351131 = sum of:
            0.014351131 = weight(_text_:of in 5179) [ClassicSimilarity], result of:
              0.014351131 = score(doc=5179,freq=6.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20947541 = fieldWeight in 5179, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5179)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     Definition of a conceptual reference model to provide a framework for the analysis of non-administrative metadata relating to library resources. The resulting model definition was approved by the FRBR Review Group (November 2016), and then made available to the Standing Committees of the Sections on Cataloguing and Subject Analysis & Access, as well as to the ISBD Review Group, for comment in December 2016. The final document was approved by the IFLA Committee on Standards (August 2017).
  16. Schönfelder, N.: Mittelbedarf für Open Access an ausgewählten deutschen Universitäten und Forschungseinrichtungen : Transformationsrechnung (2019) 0.00
    9.863845E-4 = product of:
      0.0029591534 = sum of:
        0.0029591534 = product of:
          0.0059183068 = sum of:
            0.0059183068 = weight(_text_:of in 5427) [ClassicSimilarity], result of:
              0.0059183068 = score(doc=5427,freq=2.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.086386204 = fieldWeight in 5427, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5427)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     For five German universities and one research institute, estimates of total expenditure on APCs are derived from Web of Science publication data and compared with current subscription expenditure. The report shows that, based on the projected expenditure for publications arising from research not funded by third parties, all institutions considered here could cover these costs without difficulty from their current library acquisition budgets for journals. This presupposes, however, that third-party funders pay the APCs for publications resulting from the projects they fund, in addition to their usual research funding. If this is not the case and the research institution has to bear the APCs for all of its publications itself, the budgetary impact depends substantially on the future development of article processing charges.
  17. AG KIM Gruppe Titeldaten DINI: Empfehlungen zur RDF-Repräsentation bibliografischer Daten (2014) 0.00
    7.8910764E-4 = product of:
      0.0023673228 = sum of:
        0.0023673228 = product of:
          0.0047346456 = sum of:
            0.0047346456 = weight(_text_:of in 4668) [ClassicSimilarity], result of:
              0.0047346456 = score(doc=4668,freq=2.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.06910896 = fieldWeight in 4668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4668)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     In recent years, a large number of datasets from cultural and research institutions have been published as Linked Open Data. The German library sector has also participated actively in developments in the Linked Data field. Data that was previously available only in library catalogues can thus be opened up to other sectors and integrated into external applications in a variety of ways. A further shared goal of publishing library data as Linked Data is to enable interoperability and reuse and, in this way, to connect more closely with domains outside the library world. Linked Data services exist both for individual libraries and for the German library networks. Despite their common goal, the existing services do not speak the same language, because they are based on different data models. To ensure the interoperability of these data sources, the services should in future follow a uniform modelling approach. Against this background, a working group was founded in January 2012 in which all German library networks, the Deutsche Nationalbibliothek, and a number of other interested and committed colleagues with relevant expertise are represented. The Titeldaten group has operated since April 2012 as a subgroup of the Kompetenzzentrum Interoperable Metadaten (DINI-AG KIM). Moderation and coordination lie with the Deutsche Nationalbibliothek. The OBVSG joined the working group in December 2012, followed by the Swiss National Library in May 2013. The present recommendations are intended to contribute to a harmonisation of the RDF representations of title data in the German-speaking area and, as far as possible, to establish a quasi-standard. Internationally, too, work is under way on the challenge of transferring existing library structures into the concepts available today in the Semantic Web and of exploiting their added value. The latest international developments in the provision of bibliographic data on the Semantic Web, such as the Bibliographic Framework Transition Initiative of the Library of Congress (BIBFRAME), likewise aim to provide a model for the RDF representation of library data. The Titeldaten group is monitoring these developments and intends to contribute the experience and requirements of the German-speaking library world. In doing so, it takes up recommendations developed internationally on the one hand and, on the other, feeds impulses from the national cooperation into that work. The properties used here could, for example, serve as a basis for a mapping to BIBFRAME.
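Purely as a hypothetical illustration of what an RDF representation of a single title record can look like (using generic Dublin Core terms, not the specific property set the recommendations define), consider the following rdflib sketch.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF

# Hypothetical illustration: one bibliographic title record as RDF, using generic
# Dublin Core terms. The property choices actually recommended by the DINI-AG KIM
# Titeldaten group are defined in the document itself and are not reproduced here.

EX = Namespace("http://example.org/titles/")  # invented base URI
g = Graph()
g.bind("dcterms", DCTERMS)

record = EX["123456"]  # invented record identifier
g.add((record, RDF.type, DCTERMS.BibliographicResource))
g.add((record, DCTERMS.title, Literal("Empfehlungen zur RDF-Repräsentation bibliografischer Daten", lang="de")))
g.add((record, DCTERMS.creator, Literal("AG KIM Gruppe Titeldaten DINI")))
g.add((record, DCTERMS.issued, Literal("2014")))

print(g.serialize(format="turtle"))
```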