Search (9 results, page 1 of 1)

  • author_ss:"Schreiber, G."
  • language_ss:"e"
  • year_i:[2000 TO 2010}
  1. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: A method for converting thesauri to RDF/OWL (2004) 0.01
    0.0091349725 = product of:
      0.027404916 = sum of:
        0.010820055 = weight(_text_:in in 4644) [ClassicSimilarity], result of:
          0.010820055 = score(doc=4644,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1822149 = fieldWeight in 4644, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
        0.01658486 = weight(_text_:und in 4644) [ClassicSimilarity], result of:
          0.01658486 = score(doc=4644,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.17141339 = fieldWeight in 4644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
      0.33333334 = coord(2/6)
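The indented tree above is Lucene's ClassicSimilarity "explain" output for the 0.01 relevance score. Each matching term clause contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √freq × idf × fieldNorm; the clause sum is then multiplied by the coordination factor coord(2/6), since two of the six query terms matched this record. A minimal Python sketch, using only the constants printed in the tree above, reproduces the listed score:

```python
# Recompute the ClassicSimilarity explain tree for result 1 (doc 4644).
# Every constant below is copied from the explain output; nothing is invented.
import math

QUERY_NORM = 0.043654136

def clause_score(freq, idf, field_norm):
    """One term clause: queryWeight * fieldWeight."""
    query_weight = idf * QUERY_NORM                    # e.g. 1.3602545 * 0.043654136 = 0.059380736
    field_weight = math.sqrt(freq) * idf * field_norm  # tf(freq) = sqrt(freq)
    return query_weight * field_weight

s_in  = clause_score(6.0, 1.3602545, 0.0546875)  # weight(_text_:in)  -> 0.010820055
s_und = clause_score(2.0, 2.216367,  0.0546875)  # weight(_text_:und) -> 0.016584860
score = (s_in + s_und) * (2 / 6)                 # coord(2/6): 2 of 6 terms matched
print(score)                                     # ~0.0091349725, as listed above
```

The later records follow the same scheme; records 4-9 match only one of the six query terms, hence their coord(1/6) factor.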
    
    Abstract
     This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported by a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
    Series
     Lecture notes in computer science; no. 3298
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
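For a concrete sense of what such a conversion yields: the paper's four steps and guidelines are not reproduced here, but the rdflib sketch below illustrates the general target, turning one MeSH-style record (term, broader term, scope note) into RDF(S) triples. The ex: namespace and the broaderTerm/scopeNote property names are invented placeholders, not the paper's actual output vocabulary.

```python
# Hedged illustration only: one possible RDF(S) rendering of a thesaurus record,
# in the spirit of the conversion method described above. The ex: namespace and
# the broaderTerm/scopeNote names are placeholders.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/thesaurus/")

g = Graph()
g.bind("ex", EX)

# Each thesaurus term becomes a typed resource, with its hierarchical
# and documentation relations made explicit as properties.
g.add((EX.Aspirin, RDF.type, EX.Term))
g.add((EX.Aspirin, RDFS.label, Literal("Aspirin", lang="en")))
g.add((EX.Aspirin, EX.broaderTerm, EX.AntiInflammatoryAgents))
g.add((EX.Aspirin, EX.scopeNote, Literal("Use for acetylsalicylic acid.")))

print(g.serialize(format="turtle"))
```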
  2. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.01
    0.0091104945 = product of:
      0.027331483 = sum of:
        0.013115887 = weight(_text_:in in 4641) [ClassicSimilarity], result of:
          0.013115887 = score(doc=4641,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 4641, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
        0.014215595 = weight(_text_:und in 4641) [ClassicSimilarity], result of:
          0.014215595 = score(doc=4641,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 4641, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
      0.33333334 = coord(2/6)
    
    Abstract
     This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful for providing application developers with a high-quality resource and for promoting interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics, and the chosen format of the URIs. Additional topics include a strategy for incorporating OWL and RDFS semantics in one schema such that both RDF(S) and OWL infrastructures can interpret the information correctly, problems encountered in understanding the Prolog source files, and a description of the two versions (Basic and Full) that are provided to accommodate different usages of WordNet.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
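The sketch below renders one WordNet synset in RDF with rdflib, following the general class/property shape the abstract describes (synset classes, hyponymy links). The namespace URIs and local names here are assumptions for illustration, not the published W3C schema.

```python
# Hedged sketch of a converted WordNet synset. The wn: and inst: namespace
# URIs and the NounSynset/hyponymOf names are assumptions, not verified
# against the W3C draft's published schema.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

WN = Namespace("http://example.org/wn20/schema/")      # placeholder schema namespace
INST = Namespace("http://example.org/wn20/instances/") # placeholder instance namespace

g = Graph()
g.bind("wn", WN)
g.bind("inst", INST)

g.add((INST["synset-dog-noun-1"], RDF.type, WN.NounSynset))
g.add((INST["synset-dog-noun-1"], RDFS.label, Literal("dog", lang="en")))
# Hyponymy links synsets into the noun hierarchy:
g.add((INST["synset-dog-noun-1"], WN.hyponymOf, INST["synset-canine-noun-1"]))

print(g.serialize(format="turtle"))
```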
  3. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: A method to convert thesauri to SKOS (2006) 0.01
    0.007262686 = product of:
      0.021788057 = sum of:
        0.0075724614 = weight(_text_:in in 4642) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=4642,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 4642, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4642)
        0.014215595 = weight(_text_:und in 4642) [ClassicSimilarity], result of:
          0.014215595 = score(doc=4642,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 4642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4642)
      0.33333334 = coord(2/6)
    
    Abstract
     Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications, and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for converting thesauri to the SKOS RDF/OWL schema, which is a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
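Since SKOS went on to become a W3C standard, the target of the conversion this abstract describes can be sketched directly with rdflib's built-in SKOS namespace. A minimal example, with invented ex: concept URIs standing in for a real IPSV-style record:

```python
# Minimal SKOS rendering of a single thesaurus concept. SKOS itself is
# standard; the ex: URIs and the sample labels are placeholders.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/ipsv/")

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

g.add((EX.concept123, RDF.type, SKOS.Concept))
g.add((EX.concept123, SKOS.prefLabel, Literal("Recycling", lang="en")))
g.add((EX.concept123, SKOS.altLabel, Literal("Waste recycling", lang="en")))
g.add((EX.concept123, SKOS.broader, EX.concept100))
g.add((EX.concept123, SKOS.scopeNote, Literal("Use for household recycling services.")))

print(g.serialize(format="turtle"))
```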
  4. Schreiber, G.: Proposals for principles of knowledge engineering in the 21st century (2009) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 1312) [ClassicSimilarity], result of:
          0.012493922 = score(doc=1312,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 1312, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=1312)
      0.16666667 = coord(1/6)
    
  5. Speel, P.-H.; Schreiber, G.; Van Joolingen, W.; Van Heijst, G.; Beijer, G.: Conceptual modeling for knowledge-based systems (2002) 0.00
    0.0020609628 = product of:
      0.012365777 = sum of:
        0.012365777 = weight(_text_:in in 4254) [ClassicSimilarity], result of:
          0.012365777 = score(doc=4254,freq=24.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2082456 = fieldWeight in 4254, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4254)
      0.16666667 = coord(1/6)
    
    Abstract
     In this article, we presented knowledge-based system (KBS) development as a specific way of software engineering. What makes conceptual modeling for KBSs unique?
     STRENGTHS: Knowledge-based system development is a very complex process. The approach in this article is to clearly separate conceptual modeling from software implementation, which makes the process of KBS development feasible and manageable in a business environment. In addition, the knowledge model resulting from conceptual modeling is a deliverable in itself. For example, in knowledge management, knowledge mapping is a popular area in which graphical, high-level, business-focused knowledge models are delivered. Another strength is the ease of maintenance: separating models and program code makes the process of updating more flexible. Last but not least, methodological approaches to KBS development have matured, which provides a professional basis. Knowledge engineers start working in a similar way, and as a result exchange of work is possible and project planning is more manageable.
     WEAKNESSES: The separation of conceptual modeling and KBS software implementation will take more time initially. However, reuse of models and code may speed up the separate processes considerably. In addition, the separate phases need different expertise: knowledge engineers with specific analytical skills should be assigned to conceptual modeling, whereas software engineers with specific programming skills should be assigned to software implementation. Finally, the various methodological approaches lack mature support tools.
     OPPORTUNITIES: Reuse of various knowledge models as well as program code may bring several advantages, both in improved quality of the KBSs and in speed to market. Separate development of conceptual vocabularies (also called dictionaries or ontologies), corporate memories, and generic domain models in an explicit form may form the basis of effective management of business-critical knowledge domains, which leads to sustainable competitive advantage.
  6. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008) 0.00
    0.0019955188 = product of:
      0.011973113 = sum of:
        0.011973113 = weight(_text_:in in 4645) [ClassicSimilarity], result of:
          0.011973113 = score(doc=4645,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.20163295 = fieldWeight in 4645, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4645)
      0.16666667 = coord(1/6)
    
    Abstract
     Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application which uses the alignment will perform well. In this paper, we contribute to current ontology alignment evaluation practices by proposing two alternative evaluation methods that take into account some characteristics of a usage scenario without doing a full-fledged end-to-end evaluation. We compare different evaluation approaches in three case studies, focussing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to draw conclusions about the use of different evaluation approaches in different settings.
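The second evaluation mode mentioned in the abstract, comparison against a reference alignment, is conventionally scored with precision and recall over the sets of correspondences. A minimal sketch, modelling an alignment simply as a set of (source, target) pairs; the paper's own alternative evaluation variants are not reproduced here:

```python
# Precision/recall of a produced alignment against a reference alignment,
# with alignments modelled as sets of (source, target) correspondence pairs.
def precision_recall(found: set, reference: set) -> tuple[float, float]:
    correct = found & reference
    precision = len(correct) / len(found) if found else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    return precision, recall

found = {("A", "x"), ("B", "y"), ("C", "z")}
reference = {("A", "x"), ("B", "y"), ("D", "w")}
print(precision_recall(found, reference))  # (0.666..., 0.666...)
```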
  7. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Hollink, L.; Huang, Z.; Kersen, J. van; Niet, M. de; Omelayenko, B.; Ossenbruggen, J. van; Siebes, R.; Taekema, J.; Wielemaker, J.; Wielinga, B.: MultimediaN E-Culture demonstrator (2006) 0.00
    0.0019955188 = product of:
      0.011973113 = sum of:
        0.011973113 = weight(_text_:in in 4648) [ClassicSimilarity], result of:
          0.011973113 = score(doc=4648,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.20163295 = fieldWeight in 4648, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4648)
      0.16666667 = coord(1/6)
    
    Abstract
     The main objective of the MultimediaN E-Culture project is to demonstrate how novel semantic-web and presentation technologies can be deployed to provide better indexing and search support within large virtual collections of cultural-heritage resources. The architecture is fully based on open Web standards, in particular XML, SVG, RDF/OWL and SPARQL. One basic hypothesis underlying this work is that the use of explicit background knowledge in the form of ontologies/vocabularies/thesauri is particularly useful for information retrieval in knowledge-rich domains. This paper gives some details about the internals of the demonstrator.
  8. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Omelayenko, B.; Ossenbruggen, J. van; Wielemaker, J.; Wielinga, B.; Tordai, A.; Aroyoa, L.: Semantic annotation and search of cultural-heritage collections : the MultimediaN E-Culture demonstrator (2008) 0.00
    0.0017848461 = product of:
      0.010709076 = sum of:
        0.010709076 = weight(_text_:in in 4646) [ClassicSimilarity], result of:
          0.010709076 = score(doc=4646,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 4646, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4646)
      0.16666667 = coord(1/6)
    
    Abstract
     In this article we describe a Semantic Web application for semantic annotation and search in large virtual collections of cultural-heritage objects, indexed with multiple vocabularies. During the annotation phase we harvest, enrich and align collection metadata and vocabularies. The semantic-search facilities support keyword-based queries of the graph (currently 20M triples), resulting in semantically grouped result clusters, all representing potential semantic matches of the original query. We show two sample search scenarios. The annotation and search software is open source and is already being used by third parties. All software is based on established Web standards, in particular HTML/XML, CSS, RDF/OWL, SPARQL and JavaScript.
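To make the keyword-based graph search concrete, the sketch below runs a SPARQL query over a toy RDF graph with rdflib, matching resources whose label contains a keyword. The sample data and the reliance on rdfs:label are assumptions for illustration; the demonstrator's actual 20M-triple store, schema and clustering logic are not reproduced.

```python
# Toy version of keyword search over an RDF graph: find resources whose
# rdfs:label contains a given keyword. Data and schema are invented.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex:   <http://example.org/art/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:painting1 rdfs:label "The Milkmaid" .
ex:painting2 rdfs:label "Girl with a Pearl Earring" .
""", format="turtle")

q = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s ?label WHERE {
  ?s rdfs:label ?label .
  FILTER(CONTAINS(LCASE(STR(?label)), "milk"))
}
"""
for row in g.query(q):
    print(row.s, row.label)   # ex:painting1 "The Milkmaid"
```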
  9. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.00
    0.0015457221 = product of:
      0.009274333 = sum of:
        0.009274333 = weight(_text_:in in 4640) [ClassicSimilarity], result of:
          0.009274333 = score(doc=4640,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 4640, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4640)
      0.16666667 = coord(1/6)
    
    Abstract
     Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting WordNet, AAT and MeSH to RDF(S) and OWL.