Search (9 results, page 1 of 1)

  • × theme_ss:"Konzeption und Anwendung des Prinzips Thesaurus"
  • × theme_ss:"Wissensrepräsentation"
  • × type_ss:"el"
  1. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: ¬A method to convert thesauri to SKOS (2006) 0.01
    0.0077447225 = product of:
      0.054213054 = sum of:
        0.036238287 = weight(_text_:web in 4642) [ClassicSimilarity], result of:
          0.036238287 = score(doc=4642,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37471575 = fieldWeight in 4642, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4642)
        0.01797477 = weight(_text_:retrieval in 4642) [ClassicSimilarity], result of:
          0.01797477 = score(doc=4642,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 4642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4642)
      0.14285715 = coord(2/14)
    
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but they are often not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications, and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for converting thesauri to the SKOS RDF/OWL schema, a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
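    A minimal sketch of what one conversion step could look like in practice is shown below (Python with rdflib; the term record, URIs and labels are invented for illustration and are not taken from IPSV, GTAA or MeSH, and the mapping shown is only the generic SKOS pattern, not the paper's full method):
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")   # hypothetical target namespace

      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ex", EX)

      # One source thesaurus record, shown here as a plain dict for illustration.
      term = {"id": "1234", "label": "Water pollution",
              "broader": "1200", "scope": "Contamination of surface water."}

      concept = EX[term["id"]]
      g.add((concept, RDF.type, SKOS.Concept))                             # term -> skos:Concept
      g.add((concept, SKOS.prefLabel, Literal(term["label"], lang="en")))  # preferred label
      g.add((concept, SKOS.broader, EX[term["broader"]]))                  # BT -> skos:broader
      g.add((concept, SKOS.scopeNote, Literal(term["scope"], lang="en")))  # SN -> skos:scopeNote

      print(g.serialize(format="turtle"))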
  2. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.01
    0.005457378 = product of:
      0.038201645 = sum of:
        0.03416578 = weight(_text_:web in 4639) [ClassicSimilarity], result of:
          0.03416578 = score(doc=4639,freq=12.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.35328537 = fieldWeight in 4639, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.0040358636 = weight(_text_:information in 4639) [ClassicSimilarity], result of:
          0.0040358636 = score(doc=4639,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.0775819 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
      0.14285715 = coord(2/14)
    
    Abstract
    This thesis focuses on the conversion of vocabularies for the representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability depends on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of converting a vocabulary from its original format to a Semantic Web representation. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on the integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Doctoral dissertation submitted for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
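    As a small illustration of the interoperability aimed at, the sketch below (Python with rdflib; all names are hypothetical and not taken from the E-Culture application) declares a collection metadata property whose values are expected to be concepts from a converted vocabulary, so that search and annotation applications can follow the link:
      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF, RDFS, SKOS

      EX = Namespace("http://example.org/schema/")    # hypothetical metadata schema
      VOC = Namespace("http://example.org/vocab/")    # hypothetical converted vocabulary

      g = Graph()
      g.bind("skos", SKOS)

      # Metadata element "subject", modelled as an RDF property whose range is skos:Concept.
      g.add((EX.subject, RDF.type, RDF.Property))
      g.add((EX.subject, RDFS.label, Literal("subject", lang="en")))
      g.add((EX.subject, RDFS.range, SKOS.Concept))

      # A collection object annotated with a concept from the vocabulary.
      painting = URIRef("http://example.org/collection/obj-42")
      g.add((painting, EX.subject, VOC.stillLife))

      print(g.serialize(format="turtle"))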
  3. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.00
    0.0044959965 = product of:
      0.031471975 = sum of:
        0.024409214 = weight(_text_:web in 604) [ClassicSimilarity], result of:
          0.024409214 = score(doc=604,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
        0.0070627616 = weight(_text_:information in 604) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=604,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
      0.14285715 = coord(2/14)
    
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may easily be adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
    Source
    Proceedings of the Sixth Workshop on Scripting and Development for the Semantic Web, Crete, Greece, May 31, 2010, CEUR Workshop Proceedings, SFSW - http://ceur-ws.org/Vol-699/Paper2.pdf
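    A rough sketch of how a vocabulary published with a tool such as iQvoc might be consumed is given below (Python with rdflib; the file name is a placeholder, and the SKOS-XL namespace is declared by hand because rdflib only ships the core SKOS namespace):
      from rdflib import Graph, Namespace

      SKOSXL = Namespace("http://www.w3.org/2008/05/skos-xl#")

      g = Graph()
      g.parse("vocabulary.rdf")   # placeholder for an exported or dereferenced SKOS(XL) file

      # SKOS-XL reifies labels: concept -> skosxl:prefLabel -> label resource -> skosxl:literalForm.
      for concept, _, label in g.triples((None, SKOSXL.prefLabel, None)):
          for _, _, literal in g.triples((label, SKOSXL.literalForm, None)):
              print(concept, literal)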
  4. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.00
    0.0044226884 = product of:
      0.030958816 = sum of:
        0.009988253 = weight(_text_:information in 572) [ClassicSimilarity], result of:
          0.009988253 = score(doc=572,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1920054 = fieldWeight in 572, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
        0.020970564 = weight(_text_:retrieval in 572) [ClassicSimilarity], result of:
          0.020970564 = score(doc=572,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23394634 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
      0.14285715 = coord(2/14)
    
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze how it can be reduced to match the specifics of a particular user task. Such reduction aims to simplify knowledge processing without losing significant information. We propose methods for generating task thesauri from a domain ontology that contain only the subset of ontological concepts and relations usable in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates are used to determine the significance of a concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
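    A greatly simplified sketch of the selection step is given below (Python; the similarity scores are assumed to be precomputed from the domain ontology, and a greedy heuristic with a size budget stands in for the paper's combinatorial optimization):
      # Estimated semantic similarity of each ontology concept to the user task (assumed given).
      similarity = {
          "pollution": 0.92,
          "water_quality": 0.85,
          "sensor": 0.40,
          "invoice": 0.05,
      }

      def build_task_thesaurus(similarity, threshold=0.5, max_size=3):
          """Keep the most task-relevant concepts, subject to a size budget."""
          relevant = [c for c, s in similarity.items() if s >= threshold]
          relevant.sort(key=lambda c: similarity[c], reverse=True)
          return relevant[:max_size]

      print(build_task_thesaurus(similarity))   # ['pollution', 'water_quality']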
  5. Quick Guide to Publishing a Thesaurus on the Semantic Web (2008) 0.00
    0.0030198572 = product of:
      0.042278 = sum of:
        0.042278 = weight(_text_:web in 4656) [ClassicSimilarity], result of:
          0.042278 = score(doc=4656,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.43716836 = fieldWeight in 4656, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4656)
      0.071428575 = coord(1/14)
    
    Abstract
    This document briefly describes how to express the content and structure of a thesaurus, and metadata about a thesaurus, in RDF. Using RDF allows data to be linked to and/or merged with other RDF data by Semantic Web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.
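    A minimal sketch of the two parts the guide distinguishes, the content and structure of the thesaurus and metadata about the thesaurus itself, might look as follows (Python with rdflib; URIs and titles are invented placeholders):
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import DCTERMS, RDF, SKOS

      EX = Namespace("http://example.org/myThesaurus/")   # hypothetical publication namespace

      g = Graph()
      g.bind("skos", SKOS)
      g.bind("dcterms", DCTERMS)

      scheme = EX.scheme
      # Metadata about the thesaurus itself.
      g.add((scheme, RDF.type, SKOS.ConceptScheme))
      g.add((scheme, DCTERMS.title, Literal("Example Thesaurus", lang="en")))
      g.add((scheme, DCTERMS.creator, Literal("Example Organisation")))

      # Content and structure: a top concept attached to the scheme.
      g.add((EX.animals, RDF.type, SKOS.Concept))
      g.add((EX.animals, SKOS.prefLabel, Literal("animals", lang="en")))
      g.add((EX.animals, SKOS.inScheme, scheme))
      g.add((scheme, SKOS.hasTopConcept, EX.animals))

      print(g.serialize(format="turtle"))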
  6. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: ¬A method for converting thesauri to RDF/OWL (2004) 0.00
    0.0017435154 = product of:
      0.024409214 = sum of:
        0.024409214 = weight(_text_:web in 4644) [ClassicSimilarity], result of:
          0.024409214 = score(doc=4644,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 4644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
      0.071428575 = coord(1/14)
    
    Source
    Proceedings of the 3rd International Semantic Web Conference (ISWC'04). Eds. D. Plexousakis and F. van Harmelen
  7. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.00
    9.5593097E-4 = product of:
      0.013383033 = sum of:
        0.013383033 = product of:
          0.040149096 = sum of:
            0.040149096 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
              0.040149096 = score(doc=539,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.38690117 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=539)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    26.12.2011 13:22:07
  8. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.00
    7.489488E-4 = product of:
      0.010485282 = sum of:
        0.010485282 = weight(_text_:retrieval in 2362) [ClassicSimilarity], result of:
          0.010485282 = score(doc=2362,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.11697317 = fieldWeight in 2362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2362)
      0.071428575 = coord(1/14)
    
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor do I see that its current state and conversion already result in a "rich" OWL ontology. What does "rich" mean here? In my view there is a great quantity of concepts and links, but a very poor description logic structure of the kind that enables inferences. And what does the following statement, made a few lines earlier, really mean: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
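    The contrast drawn here, a large number of concepts and links but little inference-enabling description logic structure, can be probed mechanically; a rough sketch of such a check is shown below (Python with rdflib; the file name is a placeholder for the cited OWL file, and counting owl:Restriction nodes is only one crude indicator of axiomatic richness):
      from rdflib import Graph
      from rdflib.namespace import OWL, RDF

      g = Graph()
      g.parse("thesaurus.owl")   # placeholder for the converted OWL file

      classes = set(g.subjects(RDF.type, OWL.Class))
      restrictions = set(g.subjects(RDF.type, OWL.Restriction))

      # An ontology with substantial DL structure typically carries many restrictions
      # (and other axioms) relative to its classes; a bare concept-and-link structure does not.
      print(f"{len(classes)} classes, {len(restrictions)} restrictions")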
  9. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.00
    4.32414E-4 = product of:
      0.0060537956 = sum of:
        0.0060537956 = weight(_text_:information in 4641) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=4641,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 4641, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
      0.071428575 = coord(1/14)
    
    Abstract
    This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers with a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files, and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.
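    A toy sketch of the style of representation such a conversion produces is given below (Python with rdflib; the class and property names are simplified placeholders and not the identifiers of the actual W3C schema):
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS

      WN = Namespace("http://example.org/wordnet/schema/")       # hypothetical schema namespace
      INST = Namespace("http://example.org/wordnet/instances/")  # hypothetical instance namespace

      g = Graph()

      # Schema level: a class for noun synsets and a property linking synsets to word senses.
      g.add((WN.NounSynset, RDF.type, RDFS.Class))
      g.add((WN.containsWordSense, RDF.type, RDF.Property))

      # Instance level: one synset with a label and one word sense.
      synset = INST["synset-dog-noun-1"]
      g.add((synset, RDF.type, WN.NounSynset))
      g.add((synset, RDFS.label, Literal("dog", lang="en")))
      g.add((synset, WN.containsWordSense, INST["wordsense-dog-noun-1"]))

      print(g.serialize(format="turtle"))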