Search (31 results, page 1 of 2)

  • × theme_ss:"Semantische Interoperabilität"
  • × year_i:[2010 TO 2020}
  1. Social tagging in a linked data environment. Edited by Diane Rasmussen Pennington and Louise F. Spiteri. London, UK: Facet Publishing, 2018. 240 pp. £74.95 (paperback). (ISBN 9781783303380) (2019) 0.09
    0.08770707 = product of:
      0.13156061 = sum of:
        0.08476039 = weight(_text_:electronic in 101) [ClassicSimilarity], result of:
          0.08476039 = score(doc=101,freq=8.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.43194336 = fieldWeight in 101, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.0390625 = fieldNorm(doc=101)
        0.046800215 = product of:
          0.09360043 = sum of:
            0.09360043 = weight(_text_:publishing in 101) [ClassicSimilarity], result of:
              0.09360043 = score(doc=101,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38169086 = fieldWeight in 101, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=101)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    London, UK : Facet Publishing
    LCSH
    Libraries and museums / Electronic information resources
    Electronic information resources
    Subject
    Libraries and museums / Electronic information resources
    Electronic information resources
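    Note on the relevance figures: the indented blocks shown with each result are Lucene ClassicSimilarity (TF-IDF) explain output. For each matching term, fieldWeight = sqrt(termFreq) x idf x fieldNorm and queryWeight = idf x queryNorm; the term score is their product, the term scores are summed, and coord(n/m) scales the sum by the fraction of query clauses matched. A minimal sketch that reproduces the numbers for the term "electronic" in result 1 (all constants are copied from the explain tree above; Python is used here only for illustration):

      import math

      # Constants copied from the explain output for doc 101, term "electronic"
      freq       = 8.0
      idf        = 3.9095051    # ClassicSimilarity idf = ln(maxDocs / (docFreq + 1)) + 1
      query_norm = 0.05019314
      field_norm = 0.0390625

      tf           = math.sqrt(freq)              # 2.828427
      field_weight = tf * idf * field_norm        # 0.43194336 = fieldWeight in 101
      query_weight = idf * query_norm             # 0.19623034 = queryWeight
      term_score   = query_weight * field_weight  # 0.08476039 = weight(_text_:electronic ...)

      # Document score: sum of the term scores, scaled by coord(2/3)
      total = (term_score + 0.046800215) * 0.6666667   # ~0.0877 = result 1's score
      print(round(term_score, 8), round(total, 8))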
  2. Hooland, S. van; Verborgh, R.: Linked data for libraries, archives and museums : how to clean, link, and publish your metadata (2014) 0.08
    0.07577532 = product of:
      0.11366297 = sum of:
        0.06780831 = weight(_text_:electronic in 5153) [ClassicSimilarity], result of:
          0.06780831 = score(doc=5153,freq=8.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.34555468 = fieldWeight in 5153, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.03125 = fieldNorm(doc=5153)
        0.04585466 = product of:
          0.09170932 = sum of:
            0.09170932 = weight(_text_:publishing in 5153) [ClassicSimilarity], result of:
              0.09170932 = score(doc=5153,freq=6.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.37397915 = fieldWeight in 5153, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5153)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     This highly practical handbook teaches you how to unlock the value of your existing metadata through cleaning, reconciliation, enrichment and linking, and how to streamline the creation of new metadata. Libraries, archives and museums are facing up to the challenge of providing access to fast-growing collections whilst managing cuts to budgets. Key to this is the creation, linking and publishing of good-quality metadata as Linked Data that will allow their collections to be discovered, accessed and disseminated in a sustainable manner. Metadata experts Seth van Hooland and Ruben Verborgh introduce the key concepts of metadata standards and Linked Data and how they can be practically applied to existing metadata, giving readers the tools and understanding to achieve maximum results with limited resources. Readers will learn how to critically assess and use (semi-)automated methods of managing metadata through hands-on exercises within the book and on the accompanying website. Each chapter is built around a case study from institutions around the world, demonstrating how freely available tools are being successfully used in different metadata contexts. This handbook delivers the necessary conceptual and practical understanding to empower practitioners to make the right decisions when making their organisations' resources accessible on the Web. Key topics include: the value of metadata; metadata creation - architecture, data models and standards; metadata cleaning; metadata reconciliation; metadata enrichment through Linked Data and named-entity recognition; importing and exporting metadata; and ensuring a sustainable publishing model. This will be an invaluable guide for metadata practitioners and researchers within all cultural heritage contexts, from library cataloguers and archivists to museum curatorial staff. It will also be of interest to students and academics within information science and digital humanities fields. IT managers with responsibility for information systems, as well as strategy heads and budget holders at cultural heritage organisations, will find this a valuable decision-making aid. (See the reconciliation sketch after this record.)
    Imprint
    London : Facet Publishing
    LCSH
    Libraries and museums / Electronic information resources
    Archives / Electronic information resources
    Subject
    Libraries and museums / Electronic information resources
    Archives / Electronic information resources
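    Note: the reconciliation step described in this record's abstract can be approximated with a plain HTTP lookup against a public knowledge base. The sketch below uses the Wikidata wbsearchentities API purely as an illustration; the endpoint choice and the helper name are assumptions of this note, not taken from the book.

      import requests

      def reconcile(label, language="en"):
          """Return candidate (entity id, description) matches for a free-text metadata value."""
          resp = requests.get(
              "https://www.wikidata.org/w/api.php",
              params={"action": "wbsearchentities", "search": label,
                      "language": language, "format": "json"},
              timeout=10,
          )
          resp.raise_for_status()
          return [(hit["id"], hit.get("description", ""))
                  for hit in resp.json().get("search", [])]

      # e.g. reconcile("British Museum") should list the Wikidata item for the museum (Q6373) first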
  3. Gemberling, T.: Thema and FRBR's third group (2010) 0.03
    0.033904158 = product of:
      0.101712465 = sum of:
        0.101712465 = weight(_text_:electronic in 4158) [ClassicSimilarity], result of:
          0.101712465 = score(doc=4158,freq=8.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.518332 = fieldWeight in 4158, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.046875 = fieldNorm(doc=4158)
      0.33333334 = coord(1/3)
    
    Abstract
     The treatment of subjects by Functional Requirements for Bibliographic Records (FRBR) has attracted less attention than some of its other aspects, but there seems to be a general consensus that it needs work. While some have proposed elaborating its subject categories-concepts, objects, events, and places-to increase their semantic complexity, a working group of the International Federation of Library Associations and Institutions (IFLA) has recently made a promising proposal that essentially bypasses those categories in favor of one entity, thema. This article gives an overview of the proposal and discusses its relevance to another difficult problem, ambiguities in the establishment of headings for buildings. Use of dynamic links from subject-based finding aids to records for electronic resources in the OPAC is suggested as one method for bypassing the OPAC search interface, thus making the library's electronic resources more accessible. This method simplifies maintenance of links to electronic resources and aids instruction by providing a single, consistent access point to them. Results of a usage study from before and after this project was completed show a consistent, often dramatic increase in use of the library's electronic resources.
  4. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.02
    0.021840101 = product of:
      0.0655203 = sum of:
        0.0655203 = product of:
          0.1310406 = sum of:
            0.1310406 = weight(_text_:publishing in 604) [ClassicSimilarity], result of:
              0.1310406 = score(doc=604,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.5343672 = fieldWeight in 604, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=604)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may be easily adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the jQuery JavaScript library.
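    Note: iQvoc itself is a Ruby on Rails application, but the kind of SKOS-XL data such a tool maintains and publishes can be sketched in a few lines with rdflib (Python). The vocabulary base URI and the concept below are invented placeholders; only the SKOS and SKOS-XL namespaces are standard.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      SKOSXL = Namespace("http://www.w3.org/2008/05/skos-xl#")
      EX = Namespace("http://example.org/vocab/")   # placeholder vocabulary base URI

      g = Graph()
      g.bind("skos", SKOS)
      g.bind("skosxl", SKOSXL)

      concept = EX["waterPollution"]
      label = EX["waterPollution-label-de"]

      g.add((concept, RDF.type, SKOS.Concept))
      g.add((label, RDF.type, SKOSXL.Label))                 # SKOS-XL reifies labels as resources
      g.add((label, SKOSXL.literalForm, Literal("Gewässerverschmutzung", lang="de")))
      g.add((concept, SKOSXL.prefLabel, label))

      print(g.serialize(format="turtle"))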
  5. Latif, A.: Understanding linked open data : for linked data discovery, consumption, triplification and application development (2011) 0.02
    0.018720087 = product of:
      0.056160256 = sum of:
        0.056160256 = product of:
          0.11232051 = sum of:
            0.11232051 = weight(_text_:publishing in 128) [ClassicSimilarity], result of:
              0.11232051 = score(doc=128,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.45802903 = fieldWeight in 128, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.046875 = fieldNorm(doc=128)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     The Linked Open Data initiative has played a vital role in the realization of the Semantic Web at a global scale by publishing and interlinking diverse data sources on the Web. Access to this huge amount of Linked Data presents exciting benefits and opportunities. However, the inherent complexity of understanding Linked Data and the lack of potential use cases and applications that can consume it hinder its full exploitation by non-expert web users and developers. This book aims to address these core limitations of Linked Open Data and contributes by presenting: (i) a conceptual model for a fundamental understanding of the Linked Open Data sphere, (ii) a Linked Data application to search, consume and aggregate various Linked Data resources, (iii) a semantification and interlinking technique for the conversion of legacy data, and (iv) potential application areas of Linked Open Data. (See the content-negotiation sketch after this record.)
    Imprint
    Saarbrücken : LAP Lambert Academic Publishing
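    Note: the "Linked Data discovery and consumption" covered by this book typically starts with dereferencing a resource URI via HTTP content negotiation. A minimal sketch, with DBpedia chosen here only as a well-known public example:

      import requests
      from rdflib import Graph

      # Dereference a Linked Data URI and ask for an RDF serialization
      uri = "http://dbpedia.org/resource/Semantic_Web"
      resp = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=15)
      resp.raise_for_status()

      g = Graph()
      g.parse(data=resp.text, format="turtle")
      print(len(g), "triples retrieved about", uri)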
  6. Schleipen, M.: Adaptivität und semantische Interoperabilität von Manufacturing Execution Systemen (MES) (2013) 0.02
    0.017649466 = product of:
      0.052948397 = sum of:
        0.052948397 = product of:
          0.10589679 = sum of:
            0.10589679 = weight(_text_:publishing in 1276) [ClassicSimilarity], result of:
              0.10589679 = score(doc=1276,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.4318339 = fieldWeight in 1276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1276)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Karlsruhe : KIT Scientific Publishing
  7. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.02
    0.017649466 = product of:
      0.052948397 = sum of:
        0.052948397 = product of:
          0.10589679 = sum of:
            0.10589679 = weight(_text_:publishing in 3926) [ClassicSimilarity], result of:
              0.10589679 = score(doc=3926,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.4318339 = fieldWeight in 3926, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3926)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Cham : Springer International Publishing
  8. Lange, C.; Mossakowski, T.; Galinski, C.; Kutz, O.: Making heterogeneous ontologies interoperable through standardisation : a Meta Ontology Language to be standardised: Ontology Integration and Interoperability (OntoIOp) (2011) 0.02
    0.016952079 = product of:
      0.050856233 = sum of:
        0.050856233 = weight(_text_:electronic in 50) [ClassicSimilarity], result of:
          0.050856233 = score(doc=50,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.259166 = fieldWeight in 50, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.046875 = fieldNorm(doc=50)
      0.33333334 = coord(1/3)
    
    Abstract
     Assistive technology, especially for persons with disabilities, increasingly relies on electronic communication among users, between users and their devices, and among these devices. Making such ICT accessible and inclusive often requires remedial programming, which tends to be costly or even impossible. We therefore aim at more interoperable devices, services accessing these devices, and content delivered by these services, at the levels of (1) data and metadata, (2) data models and data modelling methods, and (3) metamodels, as well as a meta ontology language. Even though ontologies are widely used to enable content interoperability, there is currently no unified framework for ontology interoperability itself. This paper outlines the design considerations underlying OntoIOp (Ontology Integration and Interoperability), a new standardisation activity in ISO/TC 37/SC 3 intended to become an international standard, which aims at filling this gap.
  9. Baca, M.; Gill, M.: Encoding multilingual knowledge systems in the digital age : the Getty vocabularies (2015) 0.02
    0.016952079 = product of:
      0.050856233 = sum of:
        0.050856233 = weight(_text_:electronic in 2203) [ClassicSimilarity], result of:
          0.050856233 = score(doc=2203,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.259166 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper gives an overview of the history, development, and structure of the electronic thesauri produced and maintained by the Getty Research Institute (GRI). We describe the evolution of the Art & Architecture Thesaurus (AAT®), the Getty Thesaurus of Geographic Names (TGN®), and the Union List of Artist Names (ULAN®) as multilingual, cross-cultural knowledge organization systems (KOS); the factors that make them unique; and their potential, when expressed as Linked Open Data (LOD) to play a key role in the Semantic Web.
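    Note: the Getty vocabularies described above are published as Linked Open Data with a public SPARQL endpoint (http://vocab.getty.edu/sparql). The query below is a small illustrative sketch of one way to retrieve AAT concepts and labels; the exact graph shape (for example, whether plain skos:prefLabel or SKOS-XL labels are exposed) should be checked against the GVP documentation.

      import requests

      query = """
      PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
      SELECT ?concept ?label WHERE {
        ?concept skos:inScheme <http://vocab.getty.edu/aat/> ;
                 skos:prefLabel ?label .
      } LIMIT 5
      """

      resp = requests.get(
          "http://vocab.getty.edu/sparql",
          params={"query": query},
          headers={"Accept": "application/sparql-results+json"},
          timeout=30,
      )
      for row in resp.json()["results"]["bindings"]:
          print(row["concept"]["value"], row["label"]["value"])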
  10. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.02
    0.01586778 = product of:
      0.047603343 = sum of:
        0.047603343 = product of:
          0.095206685 = sum of:
            0.095206685 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.095206685 = score(doc=8365,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2015 16:08:38
  11. Ioannou, E.; Nejdl, W.; Niederée, C.; Velegrakis, Y.: Embracing uncertainty in entity linking (2012) 0.02
    0.015600072 = product of:
      0.046800215 = sum of:
        0.046800215 = product of:
          0.09360043 = sum of:
            0.09360043 = weight(_text_:publishing in 433) [ClassicSimilarity], result of:
              0.09360043 = score(doc=433,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38169086 = fieldWeight in 433, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=433)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     The modern Web has grown from a publishing place of well-structured data and HTML pages for companies and experienced users into a vibrant publishing and data-exchange community in which everyone can participate, both as a data consumer and as a data producer. Unavoidably, the data available on the Web has become highly heterogeneous, ranging from highly structured and semistructured to highly unstructured user-generated content, reflecting different perspectives and structuring principles. The full potential of such data can only be realized by combining information from multiple sources. For instance, the knowledge that is typically embedded in monolithic applications can be outsourced and thus used in other applications as well. Numerous systems nowadays already actively utilize existing content from various sources such as WordNet or Wikipedia. Some well-known examples of such systems include DBpedia, Freebase, Spock, and DBLife. A major challenge in combining and querying information from multiple heterogeneous sources is entity linkage, i.e., the ability to detect whether two pieces of information correspond to the same real-world object. This chapter introduces a novel approach for addressing the entity linkage problem for heterogeneous, uncertain, and volatile data.
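    Note: the entity-linkage task described above (deciding whether two pieces of information refer to the same real-world object) is often bootstrapped with simple string-similarity heuristics before richer probabilistic evidence is brought in. The sketch below is a generic illustration with an invented threshold and record layout, not the chapter's own method:

      from difflib import SequenceMatcher

      def same_entity(rec_a, rec_b, threshold=0.85):
          """Naive linkage heuristic: compare normalized names by string-similarity ratio."""
          name_a = rec_a["name"].strip().lower()
          name_b = rec_b["name"].strip().lower()
          return SequenceMatcher(None, name_a, name_b).ratio() >= threshold

      a = {"name": "Tim Berners-Lee", "source": "DBpedia"}
      b = {"name": "Berners-Lee, Tim", "source": "library catalogue"}
      print(same_entity(a, b))  # False: token order defeats the naive ratio, which is why
                                # real linkers normalize, block, and combine richer evidence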
  12. Neumaier, S.: Data integration for open data on the Web (2017) 0.02
    0.015600072 = product of:
      0.046800215 = sum of:
        0.046800215 = product of:
          0.09360043 = sum of:
            0.09360043 = weight(_text_:publishing in 3923) [ClassicSimilarity], result of:
              0.09360043 = score(doc=3923,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38169086 = fieldWeight in 3923, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3923)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     In this lecture we will discuss and introduce the challenges of integrating openly available Web data and how to solve them. Firstly, while we will address this topic from the viewpoint of Semantic Web research, not all data is readily available as RDF or Linked Data, so we will give an introduction to the different data formats prevalent on the Web, namely, standard formats for publishing and exchanging tabular, tree-shaped, and graph data. Secondly, not all Open Data is really completely open, so we will discuss and address issues around licences and terms of usage associated with Open Data, as well as documentation of data provenance. Thirdly, we will discuss issues connected with the (meta-)data quality of Open Data on the Web and how Semantic Web techniques and vocabularies can be used to describe and remedy them. Fourth, we will address issues of searchability and integration of Open Data and discuss to what extent semantic search can help to overcome these. We close by briefly summarizing further issues not covered explicitly herein, such as multilinguality, temporal aspects (archiving, evolution, temporal querying), and how or whether OWL and RDFS reasoning on top of integrated open data could help. (See the tabular-to-RDF sketch after this record.)
    Imprint
    Cham : Springer International Publishing
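    Note: the "tabular, tree-shaped, and graph data" distinction in this record's abstract is easiest to see in code. Below is a minimal sketch converting a tabular (CSV) snippet into RDF triples with rdflib; the namespace and sample rows are invented for the example:

      import csv, io
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDFS

      EX = Namespace("http://example.org/dataset/")   # placeholder namespace

      csv_data = "id,label,population\n1,Vienna,1900000\n2,Graz,290000\n"

      g = Graph()
      for row in csv.DictReader(io.StringIO(csv_data)):
          city = EX["city/" + row["id"]]
          g.add((city, RDFS.label, Literal(row["label"])))
          g.add((city, EX.population, Literal(int(row["population"]))))   # typed as xsd:integer

      print(g.serialize(format="turtle"))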
  13. Celli, F. et al.: Enabling multilingual search through controlled vocabularies : the AGRIS approach (2016) 0.01
    0.011334131 = product of:
      0.03400239 = sum of:
        0.03400239 = product of:
          0.06800478 = sum of:
            0.06800478 = weight(_text_:22 in 3278) [ClassicSimilarity], result of:
              0.06800478 = score(doc=3278,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38690117 = fieldWeight in 3278, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3278)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  14. Kless, D.; Lindenthal, J.; Milton, S.; Kazmierczak, E.: Interoperability of knowledge organization systems with and through ontologies (2011) 0.01
    0.011030916 = product of:
      0.03309275 = sum of:
        0.03309275 = product of:
          0.0661855 = sum of:
            0.0661855 = weight(_text_:publishing in 4814) [ClassicSimilarity], result of:
              0.0661855 = score(doc=4814,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.26989618 = fieldWeight in 4814, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4814)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     Ontologies are increasingly seen as a new type of knowledge organization system (KOS) alongside traditional ones such as classification schemes or thesauri. Consequently, there are efforts to compare them with and map them to other KOS. This paper argues that only ontologies for reality representation are useful subjects of such comparisons and mappings. These ontologies are difficult to distinguish from "data modelling" types of ontologies, since both can be represented in the popular Web Ontology Language (OWL). Data modelling ontologies such as the Simple Knowledge Organization System (SKOS) are useful instruments for establishing interoperability between KOS in the sense of publishing and accessing data and data models in a uniform way, as well as for relating them to each other. Discriminating between these two understandings of ontologies particularly supports comparisons and mappings between traditional KOS and ontologies. In practice, such efforts are still impeded by the absence of standards or guidelines for vocabulary control in ontologies. Moreover, this paper emphasizes that methods for constructing and evaluating reality-representation ontologies can be useful for re-engineering traditional KOS. This makes them more interoperable in the sense of being combinable, but also more useful in the sense of improving search-expansion results and being reusable for different purposes.
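    Note: relating a traditional KOS to an ontology "in the sense of publishing and accessing data and data models in a uniform way", as the abstract puts it, is commonly expressed with SKOS mapping properties. A minimal sketch follows; both concept URIs are invented placeholders.

      from rdflib import Graph, Namespace
      from rdflib.namespace import SKOS

      THES = Namespace("http://example.org/thesaurus/")   # placeholder: a traditional thesaurus
      ONTO = Namespace("http://example.org/ontology/")    # placeholder: a reality-representation ontology

      g = Graph()
      g.bind("skos", SKOS)
      g.add((THES["waterBody"], SKOS.exactMatch, ONTO["WaterBody"]))   # same meaning in both systems
      g.add((THES["pond"], SKOS.broadMatch, ONTO["WaterBody"]))        # narrower concept mapped to a broader one

      print(g.serialize(format="turtle"))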
  15. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.01
    0.011030916 = product of:
      0.03309275 = sum of:
        0.03309275 = product of:
          0.0661855 = sum of:
            0.0661855 = weight(_text_:publishing in 3934) [ClassicSimilarity], result of:
              0.0661855 = score(doc=3934,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.26989618 = fieldWeight in 3934, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3934)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Cham : Springer International Publishing
  16. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    0.009617329 = product of:
      0.028851984 = sum of:
        0.028851984 = product of:
          0.05770397 = sum of:
            0.05770397 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
              0.05770397 = score(doc=4379,freq=4.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.32829654 = fieldWeight in 4379, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     On 29 and 30 October 2009 the second international UDC seminar, "Classification at a Crossroad", took place at the Royal Library in The Hague. As with the first conference of this kind in 2007, it was organised by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. With 22 papers from 14 different countries the programme covered a broad range of topics, with the United Kingdom most strongly represented with five contributions. On both conference days the focus was set by the opening talks, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  17. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.01
    0.009617329 = product of:
      0.028851984 = sum of:
        0.028851984 = product of:
          0.05770397 = sum of:
            0.05770397 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.05770397 = score(doc=1967,freq=4.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  18. Sakr, S.; Wylot, M.; Mutharaju, R.; Le-Phuoc, D.; Fundulaki, I.: Linked data : storing, querying, and reasoning (2018) 0.01
    0.008824733 = product of:
      0.026474198 = sum of:
        0.026474198 = product of:
          0.052948397 = sum of:
            0.052948397 = weight(_text_:publishing in 5329) [ClassicSimilarity], result of:
              0.052948397 = score(doc=5329,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.21591695 = fieldWeight in 5329, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5329)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Cham : Springer International Publishing
  19. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.01
    0.00801444 = product of:
      0.024043318 = sum of:
        0.024043318 = product of:
          0.048086636 = sum of:
            0.048086636 = weight(_text_:22 in 1962) [ClassicSimilarity], result of:
              0.048086636 = score(doc=1962,freq=4.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.27358043 = fieldWeight in 1962, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1962)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  20. Petras, V.: Heterogenitätsbehandlung und Terminology Mapping durch Crosskonkordanzen : eine Fallstudie (2010) 0.01
    0.00793389 = product of:
      0.023801671 = sum of:
        0.023801671 = product of:
          0.047603343 = sum of:
            0.047603343 = weight(_text_:22 in 3730) [ClassicSimilarity], result of:
              0.047603343 = score(doc=3730,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.2708308 = fieldWeight in 3730, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3730)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly

Languages

  • e 23
  • d 7

Types