Search (5 results, page 1 of 1)

  • author_ss:"Schreiber, G."
  • type_ss:"a"
  1. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: A method for converting thesauri to RDF/OWL (2004)
    
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
    Type
    a
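The paper's four conversion steps are not reproduced in this abstract; as a rough illustration of the kind of output the final representation step produces, a toy thesaurus record can be rendered as SKOS-style Turtle. The record fields, identifiers, and URIs below are invented for illustration and are not taken from the paper:

```python
# Toy sketch: render one thesaurus record as SKOS Turtle.
# The record structure and URIs are invented; the paper's actual method
# involves four steps with guidelines for syntax and semantics decisions.

BASE = "http://example.org/thesaurus/"

def to_skos_turtle(record):
    """Render one thesaurus record as SKOS Turtle triples."""
    lines = [
        "@prefix skos: <http://www.w3.org/2004/02/skos/core#> .",
        "",
        f"<{BASE}{record['id']}> a skos:Concept ;",
        f'    skos:prefLabel "{record["label"]}"@en ;',
    ]
    for broader in record.get("broader", []):
        lines.append(f"    skos:broader <{BASE}{broader}> ;")
    if "scope_note" in record:
        lines.append(f'    skos:scopeNote "{record["scope_note"]}"@en ;')
    # Turn the trailing " ;" of the last property into " ." to close the subject.
    lines[-1] = lines[-1][:-2] + " ."
    return "\n".join(lines)

record = {
    "id": "C0025118",
    "label": "Medical Subject Headings",
    "broader": ["C0042153"],
    "scope_note": "Controlled vocabulary used for indexing biomedical literature.",
}
print(to_skos_turtle(record))
```

Real conversions of MeSH or WordNet involve many more modeling decisions (e.g. whether a term becomes a concept or a label), which is exactly what the paper's guidelines address.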
  2. Boer, V. de; Wielemaker, J.; Gent, J. van; Hildebrand, M.; Isaac, A.; Ossenbruggen, J. van; Schreiber, G.: Supporting linked data production for cultural heritage institutes : the Amsterdam Museum case study (2012)
    
    Abstract
    Within the cultural heritage field, proprietary metadata and vocabularies are being transformed into public Linked Data. These efforts have mostly been at the level of large-scale aggregators such as Europeana where the original data is abstracted to a common format and schema. Although this approach ensures a level of consistency and interoperability, the richness of the original data is lost in the process. In this paper, we present a transparent and interactive methodology for ingesting, converting and linking cultural heritage metadata into Linked Data. The methodology is designed to maintain the richness and detail of the original metadata. We introduce the XMLRDF conversion tool and describe how it is integrated in the ClioPatria semantic web toolkit. The methodology and the tools have been validated by converting the Amsterdam Museum metadata to a Linked Data version. In this way, the Amsterdam Museum became the first 'small' cultural heritage institution with a node in the Linked Data cloud.
    Type
    a
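The XMLRDF tool and its ClioPatria integration are Prolog-based and not shown in this abstract; the core idea of a rule-driven XML-to-RDF conversion can be caricatured in a few lines. The element names, predicate URIs, and mapping table below are invented, and the real tool supports far richer rewrite rules:

```python
# Caricature of a rule-driven XML-to-RDF conversion in the spirit of the
# paper's methodology. Element names, predicate URIs, and the mapping
# rules are invented for illustration only.
import xml.etree.ElementTree as ET

# Mapping rules: XML element tag -> RDF predicate (hypothetical choices)
RULES = {
    "title": "http://purl.org/dc/terms/title",
    "creator": "http://purl.org/dc/terms/creator",
    "date": "http://purl.org/dc/terms/date",
}

def xml_to_triples(xml_text, base="http://example.org/object/"):
    """Yield (subject, predicate, object) triples for one museum record."""
    root = ET.fromstring(xml_text)
    subject = base + root.attrib["id"]
    for child in root:
        pred = RULES.get(child.tag)
        if pred and child.text:
            yield (subject, pred, child.text.strip())

record = """<object id="AM-1234">
  <title>Portrait of a Regent</title>
  <creator>Unknown</creator>
  <date>1650</date>
</object>"""

for s, p, o in xml_to_triples(record):
    print(s, p, o)
```

The point of the paper's methodology is that such mappings stay transparent and interactive, so the richness of the original metadata survives the conversion rather than being flattened to an aggregator schema.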
  3. Schreiber, G.: Issues in publishing and aligning Web vocabularies (2011)
    
    Abstract
    Knowledge organization systems (KOS), such as vocabularies, thesauri and subject headings, contain a wealth of knowledge, collected by dedicated experts over long periods of time. These knowledge sources are potentially of high value to Web applications. To make this possible we need methods to publish these systems and subsequently clarify their relationships, also called "alignments". In this talk Guus discusses methodological issues in publishing and aligning classification systems on the Web. With regard to publication of Web vocabularies he explains the basic principles for building a SKOS version of a vocabulary and illustrates this with examples. In particular, he discusses how one should prevent information loss, i.e. how to construct a SKOS version that contains all information present in the original vocabulary model. The talk also examines the role of RDF and OWL in this process. Web vocabularies derive much of their added value from the links they can provide to other vocabularies. He explains the process of vocabulary alignment, including the choice of alignment technique. Particular attention is paid to an evaluation of the process: how can one assess the quality of the resulting alignment? Human evaluators often play an important role here. Guus concludes by showing some examples of how aligned Web vocabularies can add value to applications.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic and E. Civallero
    Type
    a
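The talk discusses the choice of alignment technique without prescribing one; as a deliberately simplistic stand-in, lexical alignment proposes skos:exactMatch candidates wherever normalized labels agree. The vocabularies, labels, and URIs below are invented, and as the talk stresses, such candidates still need human evaluation:

```python
# Simplistic lexical alignment between two small vocabularies, as a
# stand-in for the alignment techniques discussed in the talk.
# Labels and URIs are invented; real alignment uses far more evidence
# than normalized labels, and results are typically reviewed by humans.

def normalize(label):
    """Crude label normalization: lowercase, keep only letters and digits."""
    return "".join(ch for ch in label.lower() if ch.isalnum())

def align(vocab_a, vocab_b):
    """Propose skos:exactMatch candidates where normalized labels agree."""
    index = {normalize(label): uri for uri, label in vocab_b.items()}
    matches = []
    for uri, label in vocab_a.items():
        hit = index.get(normalize(label))
        if hit:
            matches.append((uri, "skos:exactMatch", hit))
    return matches

vocab_a = {"http://example.org/a/1": "Thesauri",
           "http://example.org/a/2": "Subject headings"}
vocab_b = {"http://example.org/b/9": "Subject Headings",
           "http://example.org/b/7": "Ontologies"}
for triple in align(vocab_a, vocab_b):
    print(triple)
```

Evaluating the quality of the resulting alignment, the question the talk closes on, is precisely where a purely lexical method like this falls short.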
  4. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004)
    
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting WordNet, AAT and MeSH to RDF(S) and OWL.
    Type
    a
  5. Speel, P.-H.; Schreiber, G.; Van Joolingen, W.; Van Heijst, G.; Beijer, G.: Conceptual modeling for knowledge-based systems (2002)
    
    Abstract
    In this article, we present knowledge-based system (KBS) development as a specific form of software engineering. What makes conceptual modeling for KBSs unique? STRENGTHS: Knowledge-based system development is a very complex process. The approach in this article is to clearly separate conceptual modeling from software implementation, which makes the process of KBS development feasible and manageable in a business environment. In addition, the knowledge model resulting from conceptual modeling is a deliverable in itself. For example, in knowledge management, knowledge mapping is a popular area in which graphical, high-level, business-focused knowledge models are delivered. Another strength is ease of maintenance: separating models and program code makes the process of updating more flexible. Last but not least, methodological approaches to KBS development have matured, which provides a professional basis. Knowledge engineers start working in a similar way, and as a result work can be exchanged and project planning becomes more manageable. WEAKNESSES: The separation of conceptual modeling and KBS software implementation takes more time initially. However, reuse of models and code may speed up the separate processes considerably. In addition, the separate phases need different expertise: knowledge engineers with specific analytical skills should be assigned to conceptual modeling, whereas software engineers with specific programming skills should be assigned to software implementation. Finally, the various methodological approaches lack mature support tools. OPPORTUNITIES: Reuse of various knowledge models as well as program code may bring several advantages, in improved quality of the KBSs as well as in speed to market. Separate development of conceptual vocabularies (also called dictionaries or ontologies), corporate memories, and generic domain models in an explicit form may form the basis of effective management of business-critical knowledge domains, which leads to sustainable competitive advantage.
    Type
    a