Search (11 results, page 1 of 1)

  • author_ss:"Schreiber, G."
  1. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Hollink, L.; Huang, Z.; Kersen, J. van; Niet, M. de; Omelayenko, B.; Ossenbruggen, J. van; Siebes, R.; Taekema, J.; Wielemaker, J.; Wielinga, B.: MultimediaN E-Culture demonstrator (2006) 0.03
    0.02872987 = product of:
      0.043094806 = sum of:
        0.0039819763 = weight(_text_:a in 4648) [ClassicSimilarity], result of:
          0.0039819763 = score(doc=4648,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.07643694 = fieldWeight in 4648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4648)
        0.03911283 = product of:
          0.07822566 = sum of:
            0.07822566 = weight(_text_:de in 4648) [ClassicSimilarity], result of:
              0.07822566 = score(doc=4648,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.4028896 = fieldWeight in 4648, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4648)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
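    The tree above is Lucene's ClassicSimilarity explain output: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf(freq) × idf × fieldNorm with tf(freq) = √freq, and coord factors scale for the fraction of query clauses matched. A minimal sketch that reproduces the 0.03 score of result 1 from the constants shown (values copied from the tree; the function and variable names are mine):

```python
import math

# Constants copied from the explain tree for doc 4648.
QUERY_NORM = 0.045180224

def term_score(freq, idf, field_norm, query_norm=QUERY_NORM):
    """ClassicSimilarity per-term contribution: queryWeight * fieldWeight."""
    query_weight = idf * query_norm                     # e.g. 1.153047 * queryNorm = 0.05209492
    field_weight = math.sqrt(freq) * idf * field_norm   # tf(freq) = sqrt(freq)
    return query_weight * field_weight

w_a  = term_score(freq=2.0, idf=1.153047, field_norm=0.046875)        # ~0.0039819763
w_de = term_score(freq=4.0, idf=4.297489, field_norm=0.046875) * 0.5  # * coord(1/2)

score = (w_a + w_de) * (2.0 / 3.0)   # coord(2/3): two of three query clauses matched
print(f"{score:.8f}")                # ~0.02872987, i.e. the 0.03 shown for result 1
```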
    
  2. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Omelayenko, B.; Ossenbruggen, J. van; Wielemaker, J.; Wielinga, B.; Tordai, A.; Aroyo, L.: Semantic annotation and search of cultural-heritage collections : the MultimediaN E-Culture demonstrator (2008) 0.02
    0.023035955 = product of:
      0.03455393 = sum of:
        0.006896985 = weight(_text_:a in 4646) [ClassicSimilarity], result of:
          0.006896985 = score(doc=4646,freq=6.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.13239266 = fieldWeight in 4646, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4646)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 4646) [ClassicSimilarity], result of:
              0.055313893 = score(doc=4646,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 4646, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4646)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this article we describe a Semantic Web application for semantic annotation and search in large virtual collections of cultural-heritage objects, indexed with multiple vocabularies. During the annotation phase we harvest, enrich and align collection metadata and vocabularies. The semantic-search facilities support keyword-based queries of the graph (currently 20M triples), resulting in semantically grouped result clusters, all representing potential semantic matches of the original query. We show two sample search scenarios. The annotation and search software is open source and is already being used by third parties. All software is based on established Web standards, in particular HTML/XML, CSS, RDF/OWL, SPARQL and JavaScript.
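    The keyword search described here matches query terms against literals in the RDF graph and then groups the hits by how they relate to the query. A self-contained sketch of that matching step, using rdflib on a tiny stand-in graph (the data, namespace and query are hypothetical, not the demonstrator's actual schema or SPARQL):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS, SKOS

EX = Namespace("http://example.org/")  # hypothetical namespace

# Tiny stand-in graph; the demonstrator queries ~20M triples from many collections.
g = Graph()
g.add((EX.obj1, RDFS.label, Literal("Self-portrait by Rembrandt")))
g.add((EX.obj1, EX.creator, EX.rembrandt))
g.add((EX.rembrandt, SKOS.prefLabel, Literal("Rembrandt van Rijn")))

# Keyword matching: find every literal containing the keyword, keeping the
# subject and property so hits can later be grouped into semantic clusters.
q = """
SELECT ?s ?p ?lit WHERE {
  ?s ?p ?lit .
  FILTER(isLiteral(?lit) && CONTAINS(LCASE(STR(?lit)), "rembrandt"))
}
"""
for s, p, lit in g.query(q):
    print(s, p, lit)
```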
  3. Boer, V. de; Wielemaker, J.; Gent, J. van; Hildebrand, M.; Isaac, A.; Ossenbruggen, J. van; Schreiber, G.: Supporting linked data production for cultural heritage institutes : the Amsterdam Museum case study (2012) 0.02
    0.021217927 = product of:
      0.03182689 = sum of:
        0.008779433 = weight(_text_:a in 265) [ClassicSimilarity], result of:
          0.008779433 = score(doc=265,freq=14.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.1685276 = fieldWeight in 265, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=265)
        0.023047457 = product of:
          0.046094913 = sum of:
            0.046094913 = weight(_text_:de in 265) [ClassicSimilarity], result of:
              0.046094913 = score(doc=265,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.23740499 = fieldWeight in 265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=265)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Within the cultural heritage field, proprietary metadata and vocabularies are being transformed into public Linked Data. These efforts have mostly been at the level of large-scale aggregators such as Europeana where the original data is abstracted to a common format and schema. Although this approach ensures a level of consistency and interoperability, the richness of the original data is lost in the process. In this paper, we present a transparent and interactive methodology for ingesting, converting and linking cultural heritage metadata into Linked Data. The methodology is designed to maintain the richness and detail of the original metadata. We introduce the XMLRDF conversion tool and describe how it is integrated in the ClioPatria semantic web toolkit. The methodology and the tools have been validated by converting the Amsterdam Museum metadata to a Linked Data version. In this way, the Amsterdam Museum became the first 'small' cultural heritage institution with a node in the Linked Data cloud.
    Type
    a
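    The XMLRDF tool itself is Prolog-based and ships with ClioPatria, so the sketch below only illustrates the general ingest-and-convert step the paper describes: read an XML metadata record and emit RDF triples that keep its fields. The record layout, namespace and property choices are hypothetical, not the Amsterdam Museum's actual schema.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC, RDF

xml_record = """<record id="AM-1234">
  <title>Portrait of a regent</title>
  <creator>Unknown</creator>
</record>"""

EX = Namespace("http://example.org/collection/")  # hypothetical namespace
g = Graph()
g.bind("dc", DC)

elem = ET.fromstring(xml_record)
subject = EX[elem.get("id")]                       # one resource per record
g.add((subject, RDF.type, EX.Object))
g.add((subject, DC.title, Literal(elem.findtext("title"))))
g.add((subject, DC.creator, Literal(elem.findtext("creator"))))

print(g.serialize(format="turtle"))
```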
  4. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: A method to convert thesauri to SKOS (2006) 0.00
    0.0035117732 = product of:
      0.010535319 = sum of:
        0.010535319 = weight(_text_:a in 4642) [ClassicSimilarity], result of:
          0.010535319 = score(doc=4642,freq=14.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20223314 = fieldWeight in 4642, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4642)
      0.33333334 = coord(1/3)
    
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for conversion of thesauri to the SKOS RDF/OWL schema, which is a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
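    The heart of such a conversion is the mapping of thesaurus constructs (preferred and non-preferred terms, BT/NT/RT relations) onto skos:prefLabel, skos:altLabel, skos:broader, skos:narrower and skos:related. A minimal rdflib sketch for one made-up term (namespace and identifiers are hypothetical):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/thesaurus/")  # hypothetical namespace
g = Graph()
g.bind("skos", SKOS)

term = EX["T123"]
g.add((term, RDF.type, SKOS.Concept))
g.add((term, SKOS.prefLabel, Literal("Cultural heritage", lang="en")))
g.add((term, SKOS.altLabel, Literal("Heritage, cultural", lang="en")))
g.add((term, SKOS.broader, EX["T100"]))   # BT (broader term)
g.add((term, SKOS.narrower, EX["T124"]))  # NT (narrower term)
g.add((term, SKOS.related, EX["T456"]))   # RT (related term)

print(g.serialize(format="turtle"))
```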
  5. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.00
    0.00325127 = product of:
      0.009753809 = sum of:
        0.009753809 = weight(_text_:a in 4641) [ClassicSimilarity], result of:
          0.009753809 = score(doc=4641,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18723148 = fieldWeight in 4641, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers with a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.
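    As an illustration of the conceptual model sketched here (synsets as instances of part-of-speech-specific synset classes, with lexical relations such as hyponymy modelled as properties between them), a hedged rdflib example; the URI scheme and property names are placeholders, not the W3C draft's actual ones:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

WN = Namespace("http://example.org/wordnet/")  # placeholder namespace
g = Graph()

g.add((WN.NounSynset, RDFS.subClassOf, WN.Synset))  # class hierarchy per part of speech

cat = WN["synset-cat-noun-1"]
g.add((cat, RDF.type, WN.NounSynset))
g.add((cat, RDFS.label, Literal("cat", lang="en")))
g.add((cat, WN.hyponymOf, WN["synset-feline-noun-1"]))  # lexical relation as a property

print(g.serialize(format="turtle"))
```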
  6. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: A method for converting thesauri to RDF/OWL (2004) 0.00
    0.0030970925 = product of:
      0.009291277 = sum of:
        0.009291277 = weight(_text_:a in 4644) [ClassicSimilarity], result of:
          0.009291277 = score(doc=4644,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 4644, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
    Type
    a
  7. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008) 0.00
    0.00296799 = product of:
      0.00890397 = sum of:
        0.00890397 = weight(_text_:a in 4645) [ClassicSimilarity], result of:
          0.00890397 = score(doc=4645,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.1709182 = fieldWeight in 4645, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4645)
      0.33333334 = coord(1/3)
    
    Abstract
    Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application which uses the alignment will perform well. In this paper, we contribute to the current ontology alignment evaluation practices by proposing two alternative evaluation methods that take into account some characteristics of a usage scenario without doing a full-fledged end-to-end evaluation. We compare different evaluation approaches in three case studies, focussing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to draw conclusions about the use of different evaluation approaches in different settings.
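    Method (2), comparison against a reference alignment, is usually summarized as precision and recall over the correspondence pairs. A minimal sketch with made-up correspondences (all identifiers are hypothetical):

```python
def precision_recall(found, reference):
    """Score an alignment (set of (source, target) pairs) against a reference alignment."""
    correct = found & reference
    precision = len(correct) / len(found) if found else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    return precision, recall

found = {("ex:Painting", "aat:1"), ("ex:Sculpture", "aat:2"), ("ex:Print", "aat:3")}
gold  = {("ex:Painting", "aat:1"), ("ex:Sculpture", "aat:2"), ("ex:Drawing", "aat:4")}

p, r = precision_recall(found, gold)
print(f"precision={p:.2f} recall={r:.2f}")   # precision=0.67 recall=0.67
```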
  8. Schreiber, G.: Issues in publishing and aligning Web vocabularies (2011) 0.00
    0.0027093915 = product of:
      0.008128175 = sum of:
        0.008128175 = weight(_text_:a in 4809) [ClassicSimilarity], result of:
          0.008128175 = score(doc=4809,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15602624 = fieldWeight in 4809, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4809)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge organization systems (KOS), such as vocabularies, thesauri and subject headings, contain a wealth of knowledge, collected by dedicated experts over long periods of time. These knowledge sources are potentially of high value to Web applications. To make this possible we need methods to publish these systems and subsequently clarify their relationships, also called "alignments". In this talk Guus discusses methodological issues in publishing and aligning classification systems on the Web. With regard to the publication of Web vocabularies he explains the basic principles for building a SKOS version of a vocabulary and illustrates this with examples. In particular, he discusses how one should prevent information loss, i.e. construct a SKOS version that contains all information contained in the original vocabulary model. The talk also examines the role of RDF and OWL in this process. Web vocabularies derive much of their added value from the links they can provide to other vocabularies. He explains the process of vocabulary alignment, including the choice of alignment technique. Particular attention is paid to an evaluation of the process: how can one assess the quality of the resulting alignment? Human evaluators often play an important role in this process. Guus concludes by showing some examples of how aligned Web vocabularies can be used to create added value for applications.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic and E. Civallero
    Type
    a
  9. Speel, P.-H.; Schreiber, G.; Van Joolingen, W.; Van Heijst, G.; Beijer, G.: Conceptual modeling for knowledge-based systems (2002) 0.00
    0.002654651 = product of:
      0.007963953 = sum of:
        0.007963953 = weight(_text_:a in 4254) [ClassicSimilarity], result of:
          0.007963953 = score(doc=4254,freq=18.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 4254, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4254)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, we presented knowledge-based system (KBS) development as a specific way of software engineering. What makes conceptual modeling for KBSs unique?
    STRENGTHS: Knowledge-based system development is a very complex process. The approach in this article is to clearly separate conceptual modeling from software implementation, which makes the process of KBS development feasible and manageable in a business environment. In addition, the knowledge model resulting from conceptual modeling is a deliverable in itself. For example, in knowledge management, knowledge mapping is a popular area in which graphical, high-level, business-focused knowledge models are delivered. Another strength is the ease of maintenance. Separating models and program code makes the process of updating more flexible. Last but not least, methodological approaches to KBS development have matured, which provides a professional basis. Knowledge engineers start working in a similar way, and as a result exchange of work is possible and project planning is more manageable.
    WEAKNESSES: The separation of conceptual modeling and KBS software implementation will take more time initially. However, reuse of models and code may speed up the separate processes considerably. In addition, the separate phases need different expertise; knowledge engineers with specific analytical skills should be assigned to conceptual modeling, whereas software engineers with specific programming skills should be assigned to software implementation. Finally, the various methodological approaches lack mature support tools.
    OPPORTUNITIES: Reuse of various knowledge models as well as program code may bring several advantages, in improved quality of the KBSs as well as in speed to market. Separate development of conceptual vocabularies (also called dictionaries or ontologies), corporate memories, and generic domain models in an explicit form may form the basis of effective management of business-critical knowledge domains, which leads to sustainable competitive advantage.
    Type
    a
  10. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.00
    0.002654651 = product of:
      0.007963953 = sum of:
        0.007963953 = weight(_text_:a in 4640) [ClassicSimilarity], result of:
          0.007963953 = score(doc=4640,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 4640, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4640)
      0.33333334 = coord(1/3)
    
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting WordNet, AAT and MeSH to RDF(S) and OWL.
    Type
    a
  11. Schreiber, G.: Principles and pragmatics of a Semantic Culture Web : tearing down walls and building bridges (2008) 0.00
    0.002212209 = product of:
      0.0066366266 = sum of:
        0.0066366266 = weight(_text_:a in 3764) [ClassicSimilarity], result of:
          0.0066366266 = score(doc=3764,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.12739488 = fieldWeight in 3764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=3764)
      0.33333334 = coord(1/3)