Search (111 results, page 1 of 6)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.10
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.09
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.08
    
    Abstract
    This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers with a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.
    Date
    29. 7.2011 14:44:56
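    The core of such a conversion is mapping each synset record to RDF triples. The following is a minimal sketch of that idea, not the actual W3C conversion: the namespace, class names and the `hyponymOf` property are invented for illustration.

    ```python
    # Illustrative sketch of a synset-to-RDF conversion (N-Triples output).
    # The namespace and vocabulary terms below are hypothetical, not the
    # ones defined by the W3C WordNet conversion.

    WN = "http://example.org/wordnet/"          # hypothetical namespace
    RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
    RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"

    def synset_to_ntriples(synset_id, pos, words, hypernym_ids):
        """Convert one synset record to a list of N-Triples lines."""
        s = f"<{WN}synset-{synset_id}>"
        # one class per part of speech, e.g. NounSynset
        triples = [f'{s} <{RDF_TYPE}> <{WN}{pos.capitalize()}Synset> .']
        for w in words:                          # one label per word sense
            triples.append(f'{s} <{RDFS_LABEL}> "{w}"@en .')
        for h in hypernym_ids:                   # hierarchy links
            triples.append(f'{s} <{WN}hyponymOf> <{WN}synset-{h}> .')
        return triples

    lines = synset_to_ntriples("02084071", "noun",
                               ["dog", "domestic dog"], ["02083346"])
    print("\n".join(lines))
    ```

    The design decisions the abstract mentions (class hierarchy, URI format) correspond to choices made in exactly this kind of mapping function.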
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.07
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  5. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: A method to convert thesauri to SKOS (2006) 0.06
    
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for conversion of thesauri to the SKOS RDF/OWL schema, which is a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
    Date
    29. 7.2011 14:44:56
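    The kind of mapping such a method formalizes can be sketched as follows: a traditional thesaurus record with BT/NT/RT relations becomes a set of SKOS triples. The record format, identifiers and base URI below are invented for illustration; only the SKOS property names are real.

    ```python
    # Sketch: mapping thesaurus relations (BT/NT/RT) to SKOS properties.
    # Base URI and term identifiers are hypothetical.

    SKOS = "http://www.w3.org/2004/02/skos/core#"
    BASE = "http://example.org/thesaurus/"

    REL_MAP = {            # thesaurus relation -> SKOS property
        "BT": "broader",
        "NT": "narrower",
        "RT": "related",
    }

    def record_to_skos(term_id, pref_label, relations):
        """relations: list of (REL, target_id) pairs; returns (s, p, o) triples."""
        s = f"{BASE}{term_id}"
        triples = [(s, f"{SKOS}prefLabel", pref_label)]
        for rel, target in relations:
            triples.append((s, f"{SKOS}{REL_MAP[rel]}", f"{BASE}{target}"))
        return triples

    t = record_to_skos("C0018787", "Heart",
                       [("BT", "C0018799"), ("RT", "C0007226")])
    ```

    A real conversion, as the paper stresses, also has to handle quality issues in the source data and schema-level decisions that this one-liner mapping glosses over.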
  6. Maculan, B.C.M. dos; Lima, G.A. de; Oliveira, E.D.: Conversion methods from thesaurus to ontologies : a review (2016) 0.06
    
    Source
    Knowledge organization for a sustainable world: challenges and perspectives for cultural, scientific, and technological sharing in a connected society : proceedings of the Fourteenth International ISKO Conference 27-29 September 2016, Rio de Janeiro, Brazil / organized by International Society for Knowledge Organization (ISKO), ISKO-Brazil, São Paulo State University ; edited by José Augusto Chaves Guimarães, Suellen Oliveira Milani, Vera Dodebei
  7. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: ¬A method for converting thesauri to RDF/OWL (2004) 0.05
    
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
    Date
    29. 7.2011 14:44:56
  8. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.05
    
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation submitted for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Date
    29. 7.2011 14:44:56
  9. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.04
    
    Abstract
    Companies, governmental agencies and scientists produce a large amount of quantitative (research) data, consisting of measurements ranging from e.g. the surface temperatures of an ocean to the viscosity of a sample of mayonnaise. Such measurements are stored in tables in e.g. spreadsheet files and research reports. To integrate and reuse such data, it is necessary to have a semantic description of the data. However, the notation used is often ambiguous, making automatic interpretation and conversion to RDF or another suitable format difficult. For example, the table header cell "f(Hz)" refers to frequency measured in Hertz, but the symbol "f" can also refer to the unit farad or the quantities force or luminous flux. Current annotation tools for this task either work on less ambiguous data or perform a more limited task. We introduce new disambiguation strategies based on an ontology, which improve performance on "sloppy" datasets not yet targeted by existing systems.
    Date
    29. 7.2011 14:44:56
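    The "f(Hz)" example from the abstract can be made concrete with a toy sketch: the symbol alone is ambiguous, but the bracketed unit narrows the candidate quantities. The mini-ontology below is invented for illustration and is far simpler than the one the paper uses.

    ```python
    # Sketch of unit-based disambiguation of an ambiguous table header.
    # The symbol/unit tables are a hypothetical mini-ontology.
    import re

    QUANTITIES = {   # symbol -> candidate quantities it may denote
        "f": ["frequency", "force", "luminous flux"],
    }
    UNIT_OF = {      # unit -> the quantity it measures
        "Hz": "frequency",
        "N": "force",
        "lm": "luminous flux",
    }

    def interpret_header(header):
        """Return (quantity, unit) for a header like 'f(Hz)', or None."""
        m = re.fullmatch(r"(\w+)\((\w+)\)", header.strip())
        if not m:
            return None
        symbol, unit = m.groups()
        candidates = QUANTITIES.get(symbol, [])
        quantity = UNIT_OF.get(unit)
        if quantity in candidates:   # the unit disambiguates the symbol
            return quantity, unit
        return None

    print(interpret_header("f(Hz)"))  # -> ('frequency', 'Hz')
    ```

    Real spreadsheet headers are far messier, which is exactly why the paper introduces ontology-based strategies instead of fixed lookup tables.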
  10. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.02
    
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor that its current state and conversion already result in a "rich" OWL ontology. What does "rich" mean here? According to my view there is a great quantity of concepts and links but a very poor description logic structure which enables inferences. And what does the following really mean, which is said a few lines previously: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
    According to my findings several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of concepts which represent thesaurus main entries. In other words: The authors claim to already have a "description logic based nomenclature", where there is not yet one which deserves that title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: What decisions were made and are they exemplary, can they be recommended as "the best way"? I raise strong doubts with respect to that, and I miss more profound discussions of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style more or less addressing students, but myself being a learner especially in the field of medical knowledge representation I do not speak "ex cathedra".
  11. Eito-Brun, R.: Ontologies and the exchange of technical information : building a knowledge repository based on ECSS standards (2014) 0.02
    
    Abstract
    The development of complex projects in the aerospace industry is based on the collaboration of geographically distributed teams and companies. In this context, the need to share different types of data and information is a key factor in assuring the successful execution of the projects. In the case of European projects, the ECSS standards provide a normative framework that specifies, among other requirements, the different document types, information items and artifacts that need to be generated. The specification of the characteristics of these information items is usually incorporated as an annex to the different ECSS standards, and they provide the intended purpose, scope, and structure of the documents and information items. In these standards, documents or deliverables should not be considered as independent items, but as the results of packaging different information artifacts for their delivery between the involved parties. Successful information integration and knowledge exchange cannot be based exclusively on the conceptual definition of information types. It also requires the definition of methods and techniques for serializing and exchanging these documents and artifacts. This area is not covered by the ECSS standards, and the definition of these data schemas would improve collaboration processes among companies. This paper describes the development of an OWL-based ontology to manage the different artifacts and information items requested in the European Space Agency (ESA) ECSS standards for SW development. The ECSS set of standards is the main reference in aerospace projects in Europe, and in addition to engineering and managerial requirements they provide a set of DRDs (Document Requirements Documents) with the structure of the different documents and records necessary to manage projects and describe intermediate information products and final deliverables.
Information integration is a must-have in aerospace projects, where different players need to collaborate and share data during the life cycle of the products about requirements, design elements, problems, etc. The proposed ontology provides the basis for building advanced information systems where the information coming from different companies and institutions can be integrated into a coherent set of related data. It also provides a conceptual framework to enable the development of interfaces and gateways between the different tools and information systems used by the different players in aerospace projects.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  12. Boer, V. de; Wielemaker, J.; Gent, J. van; Hildebrand, M.; Isaac, A.; Ossenbruggen, J. van; Schreiber, G.: Supporting linked data production for cultural heritage institutes : the Amsterdam Museum case study (2012) 0.02
    
    Abstract
    Within the cultural heritage field, proprietary metadata and vocabularies are being transformed into public Linked Data. These efforts have mostly been at the level of large-scale aggregators such as Europeana where the original data is abstracted to a common format and schema. Although this approach ensures a level of consistency and interoperability, the richness of the original data is lost in the process. In this paper, we present a transparent and interactive methodology for ingesting, converting and linking cultural heritage metadata into Linked Data. The methodology is designed to maintain the richness and detail of the original metadata. We introduce the XMLRDF conversion tool and describe how it is integrated in the ClioPatria semantic web toolkit. The methodology and the tools have been validated by converting the Amsterdam Museum metadata to a Linked Data version. In this way, the Amsterdam Museum became the first 'small' cultural heritage institution with a node in the Linked Data cloud.
  13. Manaf, N.A. Abdul; Bechhofer, S.; Stevens, R.: The current state of SKOS vocabularies on the Web (2012) 0.02
    
    Abstract
    We present a survey of the current state of Simple Knowledge Organization System (SKOS) vocabularies on the Web. Candidate vocabularies were gathered through collections and web crawling, with 478 identified as complying with a given definition of a SKOS vocabulary. Analyses were then conducted that included investigation of the use of SKOS constructs; the use of SKOS semantic relations and lexical labels; and the structure of vocabularies in terms of the hierarchical and associative relations, branching factors and the depth of the vocabularies. Even though SKOS concepts are considered to be the core of SKOS vocabularies, our findings were that not all SKOS vocabularies published explicitly declared SKOS concepts in the vocabularies. Almost one-third of the SKOS vocabularies collected fall into the category of term lists, with no use of any SKOS semantic relations. As concept labelling is core to SKOS vocabularies, a surprising finding is that not all SKOS vocabularies use SKOS lexical labels, whether skos:prefLabel or skos:altLabel, for their concepts. The branching factors and maximum depth of the vocabularies have no direct relationship to the size of the vocabularies. We also observed some common modelling slips found in SKOS vocabularies. The survey is useful when considering, for example, converting artefacts such as OWL ontologies into SKOS, where a definition of typicality of SKOS vocabularies could be used to guide the conversion. Moreover, the survey results can serve to provide a better understanding of the modelling styles of the SKOS vocabularies published on the Web, especially when considering the creation of applications that utilize these vocabularies.
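    The structural measures the survey reports (maximum depth, branching factor) can be computed from the narrower-relation alone. A minimal sketch, using a plain dict as the hierarchy with invented concept names:

    ```python
    # Sketch: depth and branching-factor measures over a concept hierarchy
    # given as {concept: [narrower concepts]}. Concept names are invented.

    def max_depth(narrower, root):
        """Length of the longest root-to-leaf path, counting edges."""
        children = narrower.get(root, [])
        if not children:
            return 0
        return 1 + max(max_depth(narrower, c) for c in children)

    def avg_branching(narrower):
        """Mean number of narrower concepts per non-leaf concept."""
        sizes = [len(cs) for cs in narrower.values() if cs]
        return sum(sizes) / len(sizes) if sizes else 0.0

    hierarchy = {
        "animals": ["mammals", "birds"],
        "mammals": ["dogs", "cats"],
        "birds": [],
    }
    print(max_depth(hierarchy, "animals"))   # -> 2
    print(avg_branching(hierarchy))          # -> 2.0
    ```

    On real SKOS data one would first extract the skos:narrower edges (or invert skos:broader) before applying measures like these.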
  14. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.01
    
    Date
    2. 3.2013 12:29:05
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P. Ohly
  15. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    Date
    29. 7.2011 14:44:56
    26.12.2011 13:40:22
  16. Stuckenschmidt, H.; Harmelen, F. van; Waard, A. de; Scerri, T.; Bhogal, R.; Buel, J. van; Crowlesmith, I.; Fluit, C.; Kampman, A.; Broekstra, J.; Mulligen, E. van: Exploring large document repositories with RDF technology : the DOPE project (2004) 0.01
    Abstract
    This thesaurus-based search system uses automatic indexing, RDF-based querying, and concept-based visualization of results to support exploration of large online document repositories. Innovative research institutes rely on the availability of complete and accurate information about new research and development. Information providers such as Elsevier make it their business to provide the required information in a cost-effective way. The Semantic Web will likely contribute significantly to this effort because it facilitates access to an unprecedented quantity of data. The DOPE project (Drug Ontology Project for Elsevier) explores ways to provide access to multiple life-science information sources through a single interface. With the unremitting growth of scientific information, integrating access to all this information remains an important problem, primarily because the information sources involved are so heterogeneous. Sources might use different syntactic standards (syntactic heterogeneity), organize information in different ways (structural heterogeneity), and even use different terminologies to refer to the same information (semantic heterogeneity). Integrated access hinges on the ability to address these different kinds of heterogeneity. Also, mental models and keywords for accessing data generally diverge between subject areas and communities; hence, many different ontologies have emerged. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. To serve this need, we've developed a thesaurus-based search system that uses automatic indexing, RDF-based querying, and concept-based visualization. We describe here the conversion of an existing proprietary thesaurus to an open standard format, a generic architecture for thesaurus-based information access, an innovative user interface, and results of initial user studies with the resulting DOPE system.
  17. Madalli, D.P.; Chatterjee, U.; Dutta, B.: ¬An analytical approach to building a core ontology for food (2017) 0.01
    Abstract
    Purpose The purpose of this paper is to demonstrate the construction of a core ontology for food. To construct the core ontology, the authors propose an approach called yet another methodology for ontology plus (YAMO+). The goal is to exhibit the construction of a core ontology for a domain, which can be further extended and converted into application ontologies. Design/methodology/approach To motivate the construction of the core ontology for food, the authors have first articulated a set of application scenarios. The idea is that the constructed core ontology can be used to build application-specific ontologies for those scenarios. As part of the developmental approach to the core ontology, the authors have proposed a methodology called YAMO+. It is designed following the theory of analytico-synthetic classification. YAMO+ is generic in nature and can be applied to build core ontologies for any domain. Findings Construction of a core ontology needs a thorough understanding of the domain and domain requirements. There are various challenges involved in constructing a core ontology, as discussed in this paper. The proposed approach has proven sturdy enough to face the challenges that the construction of a core ontology poses. It is observed that the core ontology is amenable to conversion to an application ontology. Practical implications The constructed core ontology for the food domain can be readily used for developing application ontologies related to food. The proposed methodology YAMO+ can be applied to build core ontologies for any domain. Originality/value To the best of the authors' knowledge, the proposed approach is the first attempt, based on a study of the state-of-the-art literature, at a formal approach to the design of a core ontology. Also, the constructed core ontology for food is the first of its kind, as no such ontology is available on the web for the food domain.
  18. Garshol, L.M.: Living with topic maps and RDF : Topic maps, RDF, DAML, OIL, OWL, TMCL (2003) 0.01
    Abstract
    This paper is about the relationship between the topic map and RDF standards families. It compares the two technologies and looks at ways to make it easier for users to live in a world where both technologies are used. This is done by looking at how to convert information back and forth between the two technologies, how to convert schema information, and how to do queries across both information representations. Ways to achieve all of these goals are presented. This paper extends and improves on earlier work on the same subject, described in [Garshol01b]. This paper was first published in the proceedings of XML Europe 2003, 5-8 May 2003, organized by IDEAlliance, London, UK.
  19. Haslhofer, B.; Knežević, P.: ¬The BRICKS digital library infrastructure (2009) 0.01
    Abstract
    Service-oriented architectures, and the wider acceptance of decentralized peer-to-peer architectures, enable the transition from integrated, centrally controlled systems to federated and dynamically configurable systems. The benefits for the individual service providers and users are robustness of the system, independence of central authorities and flexibility in the usage of services. This chapter provides details of the European project BRICKS, which aims at enabling integrated access to distributed resources in the Cultural Heritage domain. The target audience is broad and heterogeneous and involves cultural heritage and educational institutions, the research community, industry, and the general public. The project idea is motivated by the fact that the amount of digital information and digitized content is continuously increasing, but still much effort has to be expended to discover and access it. The reasons for such a situation are heterogeneous data formats, restricted access, proprietary access interfaces, etc. Typical usage scenarios are integrated queries among several knowledge resources, e.g. to discover all Italian artifacts from the Renaissance in European museums. Another example is to follow the life cycle of historic documents, whose physical copies are distributed all over Europe. A standard method for integrated access is to place all available content and metadata in a central place. Unfortunately, such a solution requires a quite powerful and costly infrastructure if the volume of data is large. Considerations of cost optimization are highly important for Cultural Heritage institutions, especially if they are funded from public money. Therefore, better usage of the existing resources, i.e. a decentralized/P2P approach, promises to deliver a significantly less costly system, and does not mean sacrificing too much on the performance side.
  20. Hinkelmann, K.: Ontopia Omnigator : ein Werkzeug zur Einführung in Topic Maps (20xx) 0.01
    Date
    4. 9.2011 12:29:09

Languages

  • e 91
  • d 18
  • f 1
  • sp 1

Types

  • a 76
  • el 34
  • m 7
  • x 7
  • s 3
  • n 1
  • r 1