Search (25 results, page 1 of 2)

  • theme_ss:"Wissensrepräsentation"
  • type_ss:"el"
  • year_i:[2000 TO 2010}
  1. SKOS Simple Knowledge Organization System Primer (2009) 0.04
    
    Abstract
    SKOS (Simple Knowledge Organization System) provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, and other types of controlled vocabulary. As an application of the Resource Description Framework (RDF), SKOS allows concepts to be documented, linked, and merged with other data, while still being composed, integrated, and published on the World Wide Web. This document is an implementors' guide for those who would like to represent their concept scheme using SKOS. In basic SKOS, conceptual resources (concepts) can be identified using URIs, labelled with strings in one or more natural languages, documented with various types of notes, semantically related to each other in informal hierarchies and association networks, and aggregated into distinct concept schemes. In advanced SKOS, conceptual resources can be mapped to conceptual resources in other schemes and grouped into labelled or ordered collections. Concept labels can also be related to each other. Finally, the SKOS vocabulary itself can be extended to suit the needs of particular communities of practice.
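    The basic SKOS constructs listed above are compact enough to sketch in code. The following is a minimal illustration in Python using the rdflib library; the SKOS property URIs are standard, but the example namespace and concepts are invented:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/vocab/")  # hypothetical namespace

      g = Graph()
      g.bind("skos", SKOS)

      # A concept scheme that aggregates the concepts below
      g.add((EX.animals, RDF.type, SKOS.ConceptScheme))

      # A concept: identified by a URI, labelled in a natural language,
      # documented with a note, and assigned to the scheme
      g.add((EX.mammals, RDF.type, SKOS.Concept))
      g.add((EX.mammals, SKOS.prefLabel, Literal("mammals", lang="en")))
      g.add((EX.mammals, SKOS.scopeNote, Literal("Warm-blooded vertebrates", lang="en")))
      g.add((EX.mammals, SKOS.inScheme, EX.animals))

      # A narrower concept, related to the first in an informal hierarchy
      g.add((EX.cats, RDF.type, SKOS.Concept))
      g.add((EX.cats, SKOS.prefLabel, Literal("cats", lang="en")))
      g.add((EX.cats, SKOS.broader, EX.mammals))
      g.add((EX.cats, SKOS.inScheme, EX.animals))

      print(g.serialize(format="turtle"))

    Serializing the graph as Turtle makes the concept scheme directly publishable on the web, which is the workflow the Primer describes.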
  2. SKOS Core Guide (2005) 0.03
    
    Abstract
    SKOS Core provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, 'folksonomies', other types of controlled vocabulary, and also concept schemes embedded in glossaries and terminologies. The SKOS Core Vocabulary is an application of the Resource Description Framework (RDF) that can be used to express a concept scheme as an RDF graph. Using RDF allows data to be linked to and/or merged with other data, enabling data sources to be distributed across the web while still being meaningfully composed and integrated. This document is a guide to using the SKOS Core Vocabulary, for readers who already have a basic understanding of RDF concepts. This edition of the SKOS Core Guide [SKOS Core Guide] is a W3C Public Working Draft. It is the authoritative guide to recommended usage of the SKOS Core Vocabulary at the time of publication.
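    Because a SKOS Core concept scheme is just an RDF graph, the linking and merging described above reduces to graph union. A short sketch with Python's rdflib, assuming two hypothetical Turtle files:

      from rdflib import Graph

      # Hypothetical input files, each containing one concept scheme
      g1 = Graph().parse("scheme_a.ttl", format="turtle")
      g2 = Graph().parse("scheme_b.ttl", format="turtle")

      # Graph union: statements made about the same URI in both
      # sources combine into a single description
      merged = g1 + g2
      print(len(merged), "triples after merging")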
  3. Giunchiglia, F.; Zaihrayeu, I.; Farazi, F.: Converting classifications into OWL ontologies (2009) 0.03
    
    Abstract
    Classification schemes, such as the DMoZ web directory, provide a convenient and intuitive way for humans to access classified contents. While easy for humans to deal with, classification schemes remain hard for automated software agents to reason about. Among other things, this hardness stems from the ambiguous nature of the natural language used to describe classification categories. In this paper we describe how classification schemes can be converted into OWL ontologies, thus enabling Semantic Web applications to reason about them. The proposed solution is based on a two-phase approach in which category names are first encoded in a concept language and then, together with the structure of the classification scheme, converted into an OWL ontology. We demonstrate the practical applicability of our approach by showing how the results of reasoning on these OWL ontologies can help improve the organization and use of web directories.
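    The paper's two-phase conversion is not reproduced here, but the shape of the output can be sketched: each classification category becomes an OWL class, and each edge of the scheme becomes a subclass axiom. A toy version in Python with rdflib (the namespace and categories are invented):

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS, OWL

      EX = Namespace("http://example.org/directory/")  # hypothetical
      g = Graph()

      # Toy classification scheme as (category, parent) edges
      scheme = [("Arts", None), ("Music", "Arts"), ("Jazz", "Music")]

      for name, parent in scheme:
          cls = EX[name]
          g.add((cls, RDF.type, OWL.Class))
          g.add((cls, RDFS.label, Literal(name)))
          if parent is not None:
              # an edge of the scheme becomes a formal subclass axiom
              g.add((cls, RDFS.subClassOf, EX[parent]))

      print(g.serialize(format="turtle"))

    In the paper itself, category names are first disambiguated into a concept language before this structural step, which is what makes the resulting ontology suitable for reasoning.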
  4. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.03
    
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by semantic web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will help unify the great many existing classification efforts within the framework of the Semantic Web.
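    In practice this means describing the scheme itself and its classes with a handful of SKOS properties. A minimal sketch with Python's rdflib, assuming an invented scheme; skos:notation carries the class number, and Dublin Core terms carry the scheme-level metadata:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, DCTERMS, SKOS

      EX = Namespace("http://example.org/scheme/")  # hypothetical
      g = Graph()

      # Metadata about the classification scheme itself
      g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
      g.add((EX.scheme, DCTERMS.title, Literal("Example Classification", lang="en")))

      # One class of the scheme, with its notation and caption
      g.add((EX.c025, RDF.type, SKOS.Concept))
      g.add((EX.c025, SKOS.notation, Literal("025.4")))
      g.add((EX.c025, SKOS.prefLabel, Literal("Subject indexing", lang="en")))
      g.add((EX.c025, SKOS.topConceptOf, EX.scheme))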
  5. SKOS Simple Knowledge Organization System Reference : W3C Recommendation 18 August 2009 (2009) 0.03
    
    Abstract
    This document defines the Simple Knowledge Organization System (SKOS), a common data model for sharing and linking knowledge organization systems via the Web. Many knowledge organization systems, such as thesauri, taxonomies, classification schemes and subject heading systems, share a similar structure, and are used in similar applications. SKOS captures much of this similarity and makes it explicit, to enable data and technology sharing across diverse applications. The SKOS data model provides a standard, low-cost migration path for porting existing knowledge organization systems to the Semantic Web. SKOS also provides a lightweight, intuitive language for developing and sharing new knowledge organization systems. It may be used on its own, or in combination with formal knowledge representation languages such as the Web Ontology Language (OWL). This document is the normative specification of the Simple Knowledge Organization System. It is intended for readers who are involved in the design and implementation of information systems, and who already have a good understanding of Semantic Web technology, especially RDF and OWL. For an informative guide to using SKOS, see the [SKOS-PRIMER].
  6. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.02
    
    Abstract
    18 August 2009 -- Today W3C announces a new standard that builds a bridge between the world of knowledge organization systems - including thesauri, classifications, subject headings, taxonomies, and folksonomies - and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use Simple Knowledge Organization System (SKOS) to leverage the power of linked data. As different communities with expertise and established vocabularies use SKOS to integrate them into the Semantic Web, they increase the value of the information for everyone.
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems

    A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found.

    As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions.

    SKOS can be used for subject headings, but also for many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. "One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud."

    SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies. But the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
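    The cross-vocabulary links mentioned above (e.g. between LOC subject headings and RAMEAU) are expressed as plain SKOS mapping triples. A sketch in Python with rdflib; the two concept URIs below are invented placeholders, not the real LCSH or RAMEAU identifiers:

      from rdflib import Graph, URIRef
      from rdflib.namespace import SKOS

      g = Graph()

      lcsh = URIRef("http://example.org/lcsh/chinese-drama")        # placeholder
      rameau = URIRef("http://example.org/rameau/theatre-chinois")  # placeholder

      # Assert that the two concepts are close enough to be used
      # interchangeably in many retrieval applications
      g.add((lcsh, SKOS.closeMatch, rameau))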
  7. Panzer, M.: Towards the "webification" of controlled subject vocabulary : a case study involving the Dewey Decimal Classification (2007) 0.02
    
    Abstract
    The presentation will briefly introduce a series of major principles for bringing subject terminology to the network level. A closer look at one KOS in particular, the Dewey Decimal Classification, should provide more insight into the perceived difficulties and potential benefits of building taxonomy services out of, and on top of, classic large-scale vocabularies or taxonomies.
  8. Wilson, T.: The strict faceted classification model (2006) 0.02
    
    Abstract
    Faceted classification, at its core, implies orthogonality - that every facet axis exists at right angles to (i.e., independently of) every other facet axis. That's why a faceted classification is sometimes represented with a chart. A set of desserts, for example, might be classified by confection type and, orthogonally, by flavor.
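    Orthogonality means every item carries one value per independent facet, so any combination of facet values is a meaningful filter. A small Python sketch of the dessert example (items and values invented):

      from dataclasses import dataclass

      @dataclass
      class Dessert:
          name: str
          confection: str  # facet axis 1
          flavor: str      # facet axis 2, independent of axis 1

      desserts = [
          Dessert("chocolate cake", "cake", "chocolate"),
          Dessert("lemon cake", "cake", "lemon"),
          Dessert("chocolate ice cream", "ice cream", "chocolate"),
      ]

      # Because the axes are orthogonal, filters compose freely
      hits = [d.name for d in desserts
              if d.confection == "cake" and d.flavor == "chocolate"]
      print(hits)  # ['chocolate cake']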
  9. Tzitzikas, Y.; Spyratos, N.; Constantopoulos, P.; Analyti, A.: Extended faceted ontologies (2002) 0.01
    
    Abstract
    A faceted ontology consists of a set of facets, where each facet consists of a predefined set of terms structured by a subsumption relation. We propose two extensions of faceted ontologies which allow inferring the conjunctions of terms that are valid in the underlying domain. We give a model-theoretic interpretation to these extended faceted ontologies and provide mechanisms for inferring the valid conjunctions of terms. This inference service can be exploited to prevent errors during the indexing process and to derive navigation trees that are suitable for browsing. The proposed scheme has several advantages over the hierarchical classification schemes currently in use, namely conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (update operations can be formulated more easily and performed more efficiently).
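    The paper's model-theoretic machinery is not reproduced here, but the service it enables can be sketched: before a conjunction of facet terms is used for indexing, it is checked against domain knowledge about which combinations are valid. A toy Python version (the facets and the invalidity rule are invented):

      from itertools import combinations

      # Toy facets, each a predefined set of terms
      facets = {
          "Sports": {"SeaSports", "WinterSports"},
          "Location": {"Crete", "Alps"},
      }

      # Domain knowledge: pairs of terms that cannot co-occur
      invalid_pairs = {("SeaSports", "Alps")}

      def valid_conjunction(*terms):
          """A conjunction is valid if no pair of its terms is known to be invalid."""
          return not any((a, b) in invalid_pairs or (b, a) in invalid_pairs
                         for a, b in combinations(terms, 2))

      print(valid_conjunction("SeaSports", "Crete"))  # True
      print(valid_conjunction("SeaSports", "Alps"))   # False: flagged at indexing time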
  10. Prieto-Díaz, R.: A faceted approach to building ontologies (2002) 0.01
    
    Abstract
    An ontology is "an explicit conceptualization of a domain of discourse, and thus provides a shared and common understanding of the domain." We have been producing ontologies for millennia to understand and explain our rationale and environment. From Plato's philosophical framework to modern day classification systems, ontologies are, in most cases, the product of extensive analysis and categorization. Only recently has the process of building ontologies become a research topic of interest. Today, ontologies are built very much ad hoc. A terminology is first developed, providing a controlled vocabulary for the subject area or domain of interest; it is then organized into a taxonomy where key concepts are identified; and finally these concepts are defined and related to create an ontology. The intent of this paper is to show that domain analysis methods can be used for building ontologies. Domain analysis aims at generic models that represent groups of similar systems within an application domain. In this sense, it deals with the categorization of common objects and operations, with clear, unambiguous definitions of them, and with defining their relationships.
  11. Zeng, M.L.; Zumer, M.: Introducing FRSAD and mapping it with SKOS and other models (2009) 0.01
    
    Abstract
    The Functional Requirements for Subject Authority Records (FRSAR) Working Group was formed in 2005 as the third IFLA working group of the FRBR family to address subject authority data issues and to investigate the direct and indirect uses of subject authority data by a wide range of users. This paper introduces the Functional Requirements for Subject Authority Data (FRSAD), the model developed by the FRSAR Working Group, and discusses it in the context of other related conceptual models defined in the specifications during recent years, including the British Standard BS8723-5: Structured vocabularies for information retrieval - Guide Part 5: Exchange formats and protocols for interoperability, W3C's SKOS Simple Knowledge Organization System Reference, and OWL Web Ontology Language Reference. These models enable the consideration of the functions of subject authority data and concept schemes at a higher level that is independent of any implementation, system, or specific context, while allowing us to focus on the semantics, structures, and interoperability of subject authority data.
  12. Wang, Y.-H.; Jhuo, P.-S.: A semantic faceted search with rule-based inference (2009) 0.01
    
    Abstract
    Semantic search has become an active research area of the Semantic Web in recent years. Classification methodology plays a critical role at the beginning of the search process, filtering out irrelevant information. However, applications based on folksonomy suffer from many obstacles. This study attempts to eliminate the problems arising from folksonomy using existing semantic technology. We also focus on how to effectively integrate heterogeneous ontologies over the Internet to preserve the integrity of domain knowledge. A faceted logic layer is abstracted in order to strengthen the category framework and organize existing available ontologies according to a series of steps based on the methodology of faceted classification and ontology construction. The results showed that our approach can facilitate the integration of inconsistent or even heterogeneous ontologies. This paper also generalizes the principles of picking appropriate facets, with which our facet browser fully complies, so that better semantic search results can be obtained.
  13. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005) 0.00
    
    Content
    This type of effort is common in the digital library community, where a group of experts will interact with a user community to create a thesaurus for a specific domain (e.g. the Art & Architecture Thesaurus (AAT)) or an overarching classification scheme (e.g. the Dewey Decimal Classification). A similar type of activity is being undertaken more recently in a less centralised manner by web communities, producing for example the DMOZ web directory, or the Topic Exchange for weblog topics. The web, including the semantic web, provides a medium within which communities can interact and collaboratively build and use vocabularies of concepts. A simple language is required that allows these communities to express the structure and content of their vocabularies in a machine-understandable way, enabling exchange and reuse.

    The Resource Description Framework (RDF) is an ideal language for making statements about web resources and publishing metadata. However, RDF provides only the low-level semantics required to form metadata statements. RDF vocabularies must be built on top of RDF to support the expression of more specific types of information within metadata. Ontology languages such as OWL add a layer of expressive power to RDF, and provide powerful tools for defining complex conceptual structures, which can be used to generate rich metadata. However, the class-oriented, logically precise modelling required to construct useful web ontologies is demanding in terms of expertise, effort, and therefore cost. In many cases this type of modelling may be superfluous or unsuited to requirements. Therefore there is a need for a language for expressing vocabularies of concepts for use in semantically rich metadata that is powerful enough to support semantically enhanced search, but simple enough to be undemanding in terms of the cost and expertise required to use it.
  14. Broughton, V.: Facet analysis as a fundamental theory for structuring subject organization tools (2007) 0.00
    
    Abstract
    The presentation will examine the potential of facet analysis as a basis for determining the status and relationships of concepts in subject-based tools using a controlled vocabulary, and the extent to which it can be used as a general theory of knowledge organization as opposed to a methodology for structuring classifications only.
  15. Garshol, L.M.: Living with topic maps and RDF : Topic maps, RDF, DAML, OIL, OWL, TMCL (2003) 0.00
    
    Abstract
    This paper is about the relationship between the topic map and RDF standards families. It compares the two technologies and looks at ways to make it easier for users to live in a world where both technologies are used. This is done by looking at how to convert information back and forth between the two technologies, how to convert schema information, and how to do queries across both information representations. Ways to achieve all of these goals are presented. This paper extends and improves on earlier work on the same subject, described in [Garshol01b]. This paper was first published in the proceedings of XML Europe 2003, 5-8 May 2003, organized by IDEAlliance, London, UK.
  16. Pepper, S.; Groenmo, G.O.: Towards a general theory of scope (2002) 0.00
    
    Abstract
    This paper is concerned with the issue of scope in topic maps. Topic maps are a form of knowledge representation suitable for solving a number of complex problems in the area of information management, ranging from findability (navigation and querying) to knowledge management and enterprise application integration (EAI). The topic map paradigm has its roots in efforts to understand the essential semantics of back-of-book indexes in order that they might be captured in a form suitable for computer processing. Once understood, the model of a back-of-book index was generalised in order to cover the needs of digital information, and extended to encompass glossaries and thesauri, as well as indexes. The resulting core model, of typed topics, associations, and occurrences, has many similarities with the semantic networks developed by the artificial intelligence community for representing knowledge structures. One key requirement of topic maps from the earliest days was to be able to merge indexes from disparate origins. This requirement accounts for two further concepts that greatly enhance the power of topic maps: subject identity and scope. This paper concentrates on scope, but also includes a brief discussion of the feature known as the topic naming constraint, with which it is closely related. It is based on the authors' experience in creating topic maps (in particular, the Italian Opera Topic Map) and in implementing processing systems for topic maps (in particular, the Ontopia Topic Map Engine and Navigator).
  17. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.00
    
    Date
    26.12.2011 13:22:07
  18. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.00
    
    Abstract
    In this thesis, we present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies poses a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as how to fuse together the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario.

    To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing to reveal knowledge down to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, and discuss what kinds of problems are caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries.

    Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. We discuss some of the major problems with shared nodes, which relate to the way relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems.

    Finally, the utilization of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced, and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, the fuzzy set model appears to provide the flexibility needed when generalizing to an ontology-based retrieval model, and, with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
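    The thesis's exact measures are not reproduced here, but the intuition behind a shared-nodes style similarity can be sketched: the more ancestor concepts two terms share in the ontology, the more similar they are. A toy Python version over an invented hierarchy:

      # Toy ontology: child -> parent (single inheritance for simplicity)
      parent = {
          "cat": "mammal", "dog": "mammal", "mammal": "animal",
          "trout": "fish", "fish": "animal",
      }

      def ancestors(concept):
          """The set of nodes on the path from a concept up to the root, inclusive."""
          nodes = {concept}
          while concept in parent:
              concept = parent[concept]
              nodes.add(concept)
          return nodes

      def shared_nodes_similarity(a, b):
          """Overlap of ancestor sets, normalized to [0, 1]."""
          sa, sb = ancestors(a), ancestors(b)
          return len(sa & sb) / len(sa | sb)

      print(shared_nodes_similarity("cat", "dog"))    # 0.5: share 'mammal' and 'animal'
      print(shared_nodes_similarity("cat", "trout"))  # 0.2: share only 'animal'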
  19. Urs, S.R.; Angrosh, M.A.: Ontology-based knowledge organization systems in digital libraries : a comparison of experiments in OWL and KAON ontologies (2006 (?)) 0.00
    
    Abstract
    Grounded in a strong belief that ontologies enhance the performance of information retrieval systems, there has been an upsurge of interest in ontologies. Their importance is recognized in diverse research fields such as knowledge engineering, knowledge representation, qualitative modeling, language engineering, database design, information integration, object-oriented analysis, information retrieval and extraction, knowledge management and agent-based systems design (Guarino, 1998). While the role played by ontologies automatically lends these tools a place of legitimacy, research in this area gains greater significance in the wake of the various challenges faced in the contemporary digital environment. With the objective of overcoming various pitfalls associated with current search mechanisms, ontologies are increasingly used for developing efficient information retrieval systems. An indicator of research interest in the area is Swoogle, a search engine for Semantic Web documents, terms and data found on the Web (Ding, Li et al., 2004). Given the complex nature of the digital content archived in digital libraries, ontologies can be employed for designing efficient forms of information retrieval in digital libraries.

    Knowledge representation assumes greater significance due to its crucial role in ontology development. Knowledge representation formalisms aid in developing intelligent information systems, wherein the notion of intelligence implies the ability of the system to find implicit consequences of its explicitly represented knowledge (Baader and Nutt, 2003). Formalisms such as Description Logics are used to obtain an explicit knowledge representation of the subject domain. These representations are developed into ontologies, which are used for developing intelligent information systems.

    Against this backdrop, the paper examines the use of Description Logics for conceptually modeling a chosen domain, which is then utilized for developing domain ontologies. The knowledge representation languages identified for this purpose are the Web Ontology Language (OWL) and the KArlsruhe ONtology (KAON) language. Drawing upon the various technical constructs involved in developing ontology-based information systems, the paper explains the working of the prototypes and also presents a comparative study of the two prototypes.
  20. Waard, A. de; Fluit, C.; Harmelen, F. van: Drug Ontology Project for Elsevier (DOPE) (2007) 0.00
    
    Abstract
    Innovative research institutes rely on the availability of complete and accurate information about new research and development, and it is the business of information providers such as Elsevier to provide the required information in a cost-effective way. It is very likely that the semantic web will make an important contribution to this effort, since it facilitates access to an unprecedented quantity of data. However, with the unremitting growth of scientific information, integrating access to all this information remains a significant problem, not least because of the heterogeneity of the information sources involved - sources which may use different syntactic standards (syntactic heterogeneity), organize information in very different ways (structural heterogeneity) and even use different terminologies to refer to the same information (semantic heterogeneity). The ability to address these different kinds of heterogeneity is the key to integrated access.

    Thesauri have already proven to be a core technology for effective information access, as they provide controlled vocabularies for indexing information and thereby help to overcome some of the problems of free-text search by relating and grouping relevant terms in a specific domain. However, there is currently no open architecture which supports the use of these thesauri for querying other data sources. For example, when we move from the centralized and controlled use of EMTREE within EMBASE.com to a distributed setting, it becomes crucial to improve access to the thesaurus by means of a standardized representation using open data standards that allow for semantic qualifications. In general, mental models and keywords for accessing data diverge between subject areas and communities, and so many different ontologies have been developed. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. The aim of the DOPE project (Drug Ontology Project for Elsevier) is to investigate the possibility of providing access to multiple information sources in the area of life science through a single interface.