Search (164 results, page 2 of 9)

  • language_ss:"e"
  • theme_ss:"Wissensrepräsentation"
  • year_i:[2000 TO 2010}
  1. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.01
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC), 4th edition, and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). The major challenges of encoding this large vocabulary come from its integrated structure. CCT is the result of combining two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative but provides some flexibility through several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by the differing granularities of the two original schemes and their representation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
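    A minimal sketch of the kind of SKOS encoding the poster describes: one classification-derived concept carrying its notation, a preferred label, a hierarchical link, and a mapped thesaurus term. The namespace, notation and labels below are invented for illustration; none of them are taken from the actual CCT data.

    # Invented CCT-style class "TP391" encoded as a skos:Concept, with the
    # manually mapped thesaurus term attached as a related concept.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    CCT = Namespace("http://example.org/cct/")   # hypothetical namespace

    g = Graph()
    g.bind("skos", SKOS)

    cls = CCT["TP391"]                           # classification-derived concept
    g.add((cls, RDF.type, SKOS.Concept))
    g.add((cls, SKOS.notation, Literal("TP391")))
    g.add((cls, SKOS.prefLabel, Literal("Computer applications", lang="en")))
    g.add((cls, SKOS.broader, CCT["TP"]))        # hierarchy from the classification side

    term = CCT["computerApplication"]            # the mapped thesaurus term
    g.add((term, RDF.type, SKOS.Concept))
    g.add((term, SKOS.prefLabel, Literal("computer application", lang="en")))
    g.add((cls, SKOS.related, term))

    print(g.serialize(format="turtle"))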
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.01
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, because proper metadata annotations cannot be associated fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
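    A minimal sketch of the mediator idea described above, assuming invented item and metadata structures (the paper's conceptual-graph knowledge base is not modelled here): imported items receive a dummy annotation rather than a real concept assignment.

    # Imported content gets placeholder ("dummy") metadata so the host system
    # can handle it without updating its domain model. All field names invented.
    from dataclasses import dataclass, field

    @dataclass
    class ContentItem:
        item_id: str
        payload: str
        metadata: dict = field(default_factory=dict)

    class Mediator:
        """Assigns dummy annotations to imported items; real concepts stay unknown."""
        def __init__(self, domain_concepts: set):
            self.domain_concepts = domain_concepts

        def import_item(self, item: ContentItem) -> ContentItem:
            # no automatic concept assignment is attempted; mark as unmapped
            item.metadata.setdefault("concept", "UNMAPPED")   # dummy annotation
            item.metadata.setdefault("origin", "imported")
            return item

    mediator = Mediator(domain_concepts={"Inheritance", "Polymorphism"})
    item = mediator.import_item(ContentItem("ex1", "Exercise text ..."))
    print(item.metadata)   # {'concept': 'UNMAPPED', 'origin': 'imported'}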
    Series
    Lecture notes in computer science: Lecture notes in artificial intelligence ; 4604
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss u.a
  3. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: ¬A method to convert thesauri to SKOS (2006) 0.01
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications, and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for converting thesauri to the SKOS RDF/OWL schema, a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
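    The core step in any such conversion is a mapping from the traditional thesaurus relationship tags (BT/NT/RT/UF) to SKOS properties. The sketch below illustrates that general pattern only; it is not the authors' exact method, and the input record format is invented.

    # Tag-to-property table drives the conversion of each thesaurus record.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/thesaurus/")
    TAG_TO_PROP = {"BT": SKOS.broader, "NT": SKOS.narrower, "RT": SKOS.related}

    def convert(records: dict) -> Graph:
        g = Graph()
        g.bind("skos", SKOS)
        for term_id, rec in records.items():
            c = EX[term_id]
            g.add((c, RDF.type, SKOS.Concept))
            g.add((c, SKOS.prefLabel, Literal(rec["label"], lang="en")))
            for uf in rec.get("UF", []):           # non-preferred entry terms
                g.add((c, SKOS.altLabel, Literal(uf, lang="en")))
            for tag, prop in TAG_TO_PROP.items():
                for target in rec.get(tag, []):
                    g.add((c, prop, EX[target]))
        return g

    g = convert({"t1": {"label": "thesauri", "BT": ["t2"], "UF": ["thesaurus"]},
                 "t2": {"label": "controlled vocabularies", "NT": ["t1"]}})
    print(g.serialize(format="turtle"))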
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  4. Garshol, L.M.: Metadata? Thesauri? Taxonomies? Topic Maps! : making sense of it all (2005) 0.01
    Abstract
    The task of an information architect is to create web sites where users can actually find the information they are looking for. As the ocean of information rises and leaves what we seek ever more deeply buried in what we don't seek, this discipline becomes ever more relevant. Information architecture involves many different aspects of web site creation and organization, but its principal tools are information organization techniques developed in other disciplines. Most of these techniques come from library science, such as thesauri, taxonomies, and faceted classification. Topic maps are a relative newcomer to this area and bring with them the promise of better-organized web sites, compared to what is possible with existing techniques. However, it is not generally understood how topic maps relate to the traditional techniques, and what advantages and disadvantages they have, compared to these techniques. The aim of this paper is to help build a better understanding of these issues.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  5. ISO 25964 Thesauri and interoperability with other vocabularies (2008) 0.01
    Abstract
    Part 1: Today's thesauri are mostly electronic tools, having moved on from the paper-based era when thesaurus standards were first developed. They are built and maintained with the support of software and need to integrate with other software, such as search engines and content management systems. Whereas in the past thesauri were designed for information professionals trained in indexing and searching, today there is a demand for vocabularies that untrained users will find intuitive. ISO 25964 makes the transition needed for the world of electronic information management. However, Part 1 retains the assumption that human intellect is usually involved in the selection of indexing terms and in the selection of search terms. If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved. This is the main principle underlying thesaurus design, even though a thesaurus built for human users may also be applied in situations where computers make the choices. Efficient exchange of data is a vital component of thesaurus management and exploitation. Hence the inclusion in this standard of recommendations for exchange formats and protocols. Adoption of these will facilitate interoperability between thesaurus management systems and other computer applications, such as indexing and retrieval systems, that will utilize the data. Thesauri are typically used in post-coordinate retrieval systems, but may also be applied to hierarchical directories, pre-coordinate indexes and classification systems. Increasingly, thesaurus applications need to mesh with others, such as automatic categorization schemes, free-text search systems, etc. Part 2 of ISO 25964 describes additional types of structured vocabulary and gives recommendations to enable interoperation of the vocabularies at all stages of the information storage and retrieval process.
    Part 2: The ability to identify and locate relevant information among vast collections and other resources is a major and pressing challenge today. Several different types of vocabulary are in use for this purpose. Some of the most widely used vocabularies were designed a hundred years ago and have been evolving steadily. A different generation of vocabularies is now emerging, designed to exploit the electronic media more effectively. A good understanding of the previous generation is still essential for effective access to collections indexed with them. An important objective of ISO 25964 as a whole is to support data exchange and other forms of interoperability in circumstances in which more than one structured vocabulary is applied within one retrieval system or network. Sometimes one vocabulary has to be mapped to another, and it is important to understand both the potential and the limitations of such mappings. In other systems, a thesaurus is mapped to a classification scheme, or an ontology to a thesaurus. Comprehensive interoperability needs to cover the whole range of vocabulary types, whether young or old. Concepts in different vocabularies are related only in that they have the same or similar meaning. However, the meaning can be found in a number of different aspects within each particular type of structured vocabulary:
    - within terms or captions selected in different languages;
    - in the notation assigned, indicating a place within a larger hierarchy;
    - in the definition, scope notes, history notes and other notes that explain the significance of that concept; and
    - in explicit relationships to other concepts or entities within the same vocabulary.
    In order to create mappings from one structured vocabulary to another, it is first necessary to understand, within the context of each different type of structured vocabulary, the significance and relative importance of each of the different elements in defining the meaning of that particular concept. ISO 25964-1 describes the key characteristics of thesauri along with additional advice on best practice. ISO 25964-2 focuses on other types of vocabulary and does not attempt to cover all aspects of good practice. It concentrates on those aspects which need to be understood if one of the vocabularies is to work effectively alongside one or more of the others. Recognizing that a new standard cannot be applied to some existing vocabularies, this part of ISO 25964 provides informative description alongside the recommendations, the aim of which is to enable users and system developers to interpret and implement the existing vocabularies effectively. The remainder of ISO 25964-2 deals with the principles and practicalities of establishing mappings between vocabularies.
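    One concrete way to record the inter-vocabulary mappings that ISO 25964-2 discusses is the SKOS mapping vocabulary; the sketch below is an illustration of that idea, not text from the standard, and all concept URIs are invented.

    # Equivalence and hierarchical mappings between a thesaurus and a
    # classification scheme, recorded with SKOS mapping properties.
    from rdflib import Graph, Namespace
    from rdflib.namespace import SKOS

    THES = Namespace("http://example.org/thesaurus/")
    CLS = Namespace("http://example.org/classification/")

    g = Graph()
    g.bind("skos", SKOS)
    # same concept in both vocabularies
    g.add((THES["motorVehicles"], SKOS.exactMatch, CLS["629.2"]))
    # a hierarchical mapping where the granularities of the schemes differ
    g.add((THES["electricCars"], SKOS.broadMatch, CLS["629.2"]))
    print(g.serialize(format="turtle"))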
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  6. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.01
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor do I see its current state and conversion as already resulting in a "rich" OWL ontology. What does "rich" mean here? In my view there is a great quantity of concepts and links, but a very poor description logic structure of the kind that enables inferences. And what does the following, said a few lines earlier, really mean: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
    According to my findings, several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of the concepts which represent thesaurus main entries. In other words: the authors claim to already have a "description logic based nomenclature" where there is not yet one that deserves the title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: what decisions were made, and are they exemplary; can they be recommended as "the best way"? I raise strong doubts in that respect, and I miss a more profound discussion of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style, more or less addressing students; but being myself a learner, especially in the field of medical knowledge representation, I do not speak "ex cathedra".
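    The distinction these notes turn on can be made concrete: a converted broader-term link yields only plain subsumption, whereas a description-logic definition constrains a concept through property restrictions that support inference. A toy contrast with invented class and role names, not the actual NCI data:

    from rdflib import BNode, Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/nci/")
    g = Graph()
    g.bind("owl", OWL)

    # (1) mere subsumption -- all that a converted BT link gives you
    g.add((EX.Gene_Product, RDFS.subClassOf, EX.Biological_Entity))

    # (2) a definitional constraint that supports inference:
    #     every Gene_Product is encoded_by some Gene
    r = BNode()
    g.add((r, RDF.type, OWL.Restriction))
    g.add((r, OWL.onProperty, EX.encoded_by))
    g.add((r, OWL.someValuesFrom, EX.Gene))
    g.add((EX.Gene_Product, RDFS.subClassOf, r))

    print(g.serialize(format="turtle"))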
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  7. Stuckenschmidt, H.; Harmelen, F. van: Information sharing on the semantic web (2005) 0.01
    Abstract
    Paradoxically, the growing volume of information on the WWW makes it ever harder to use: finding and linking information in an unstructured environment becomes a Sisyphean task. Semantic Web approaches promise a remedy here. The authors describe technologies by which the semantic integration of distributed data can be achieved through distributed ontologies. These techniques are of interest both to researchers and to professionals who, for example, need to implement the integration of product data from distributed databases on the WWW, or of loosely coupled applications in distributed organizations.
  8. OWL Web Ontology Language Test Cases (2004) 0.00
    Date
    14. 8.2011 13:33:22
  9. Fonseca, F.: ¬The double role of ontologies in information science research (2007) 0.00
    Abstract
    In philosophy, Ontology is the basic description of things in the world. In information science, an ontology refers to an engineering artifact, constituted by a specific vocabulary used to describe a certain reality. Ontologies have been proposed for validating both conceptual models and conceptual schemas; however, these roles are quite dissimilar. In this article, we show that ontologies can be better understood if we classify the different uses of the term as it appears in the literature. First, we explain Ontology (upper case O) as used in Philosophy. Then, we propose a differentiation between ontologies of information systems and ontologies for information systems. All three concepts have an important role in information science. We clarify the different meanings and uses of Ontology and ontologies through a comparison of research by Wand and Weber and by Guarino in ontology-driven information systems. The contributions of this article are twofold: (a) It provides a better understanding of what ontologies are, and (b) it explains the double role of ontologies in information science research.
  10. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.00
    Content
    Contains an overview of models, systems, and projects.
  11. Hoang, H.H.; Tjoa, A.M.: ¬The state of the art of ontology-based query systems : a comparison of existing approaches (2006) 0.00
    Abstract
    Based on an in-depth analysis of existing approaches to building ontology-based query systems, we discuss and compare the methods and approaches used in current query systems that employ ontologies or Semantic Web techniques. This paper identifies various relevant research directions in ontology-based querying research. Based on the results of our investigation, we summarise the state of the art in ontology-based query/search and name areas for further research activities.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  12. Rindflesch, T.C.; Fiszman, M.: The interaction of domain knowledge and linguistic structure in natural language processing : interpreting hypernymic propositions in biomedical text (2003) 0.00
    Abstract
    Interpretation of semantic propositions in free-text documents such as MEDLINE citations would provide valuable support for biomedical applications, and several approaches to semantic interpretation are being pursued in the biomedical informatics community. In this paper, we describe a methodology for interpreting linguistic structures that encode hypernymic propositions, in which a more specific concept is in a taxonomic relationship with a more general concept. In order to effectively process these constructions, we exploit underspecified syntactic analysis and structured domain knowledge from the Unified Medical Language System (UMLS). After introducing the syntactic processing on which our system depends, we focus on the UMLS knowledge that supports interpretation of hypernymic propositions. We first use semantic groups from the Semantic Network to ensure that the two concepts involved are compatible; hierarchical information in the Metathesaurus then determines which concept is more general and which more specific. A preliminary evaluation of a sample based on the semantic group Chemicals and Drugs provides 83% precision. An error analysis was conducted and potential solutions to the problems encountered are presented. The research discussed here serves as a paradigm for investigating the interaction between domain knowledge and linguistic structure in natural language processing, and could also make a contribution to research on automatic processing of discourse structure. Additional implications of the system we present include its integration in advanced semantic interpretation processors for biomedical text and its use for information extraction in specific domains. The approach has the potential to support a range of applications, including information retrieval and ontology engineering.
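    A toy rendering of the two UMLS-based checks the abstract describes: semantic-group compatibility first, then hierarchical information to decide which concept is the more general one. The mini-hierarchy and groups below are invented; the real system consults the UMLS Semantic Network and Metathesaurus.

    # Semantic groups gate the interpretation; the hierarchy fixes direction.
    GROUPS = {"aspirin": "Chemicals & Drugs", "analgesic": "Chemicals & Drugs",
              "fever": "Disorders"}
    PARENTS = {"aspirin": "analgesic", "analgesic": "drug", "drug": None}

    def ancestors(c):
        out = []
        while PARENTS.get(c):
            c = PARENTS[c]
            out.append(c)
        return out

    def interpret_hypernymic(a: str, b: str):
        if GROUPS.get(a) != GROUPS.get(b):
            return None                      # incompatible semantic groups
        if b in ancestors(a):
            return (a, "ISA", b)             # b is the more general concept
        if a in ancestors(b):
            return (b, "ISA", a)
        return None

    print(interpret_hypernymic("aspirin", "analgesic"))  # ('aspirin', 'ISA', 'analgesic')
    print(interpret_hypernymic("aspirin", "fever"))      # None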
  13. Hoekstra, R.: BestMap: context-aware SKOS vocabulary mappings in OWL 2 (2009) 0.00
    Abstract
    This paper describes an approach to SKOS vocabulary mapping that takes into account the context in which vocabulary terms are used in annotations. The standard vocabulary mapping properties in SKOS only allow for binary mappings between concepts. In the BestMap ontology, annotated resources are the contexts in which annotations coincide, allowing more fine-grained control over when mappings hold. A mapping between two vocabularies is defined as a class that groups descriptions of a resource. We use the OWL 2 features for property chains, disjoint properties, union, intersection and negation, together with careful use of equivalence and subsumption, to specify these mappings.
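    The OWL 2 feature doing much of the work in such an approach is the property chain: if a resource is annotated with a source-vocabulary concept and that concept maps to a target concept, the resource counts as annotated with the target concept as well. A sketch with invented property names, not the BestMap ontology itself:

    # ex:annotatedWith o ex:mapsTo -> ex:annotatedWith, as an OWL 2 chain axiom.
    from rdflib import BNode, Graph, Namespace
    from rdflib.collection import Collection
    from rdflib.namespace import OWL

    EX = Namespace("http://example.org/bestmap/")
    g = Graph()
    g.bind("owl", OWL)

    chain = BNode()
    Collection(g, chain, [EX.annotatedWith, EX.mapsTo])  # the RDF list for the chain
    g.add((EX.annotatedWith, OWL.propertyChainAxiom, chain))
    print(g.serialize(format="turtle"))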
  14. RDF/XML Syntax Specification (Revised) : W3C Recommendation 10 February 2004 (2004) 0.00
    Abstract
    The Resource Description Framework (RDF) is a general-purpose language for representing information in the Web. This document defines an XML syntax for RDF called RDF/XML in terms of Namespaces in XML, the XML Information Set and XML Base. The formal grammar for the syntax is annotated with actions generating triples of the RDF graph as defined in RDF Concepts and Abstract Syntax. The triples are written using the N-Triples RDF graph serialization format, which enables more precise recording of the mapping in a machine-processable form. The mappings are recorded as test cases, gathered and published in RDF Test Cases.
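    A small round trip through the syntax the Recommendation defines: parse an RDF/XML document into a graph and write the resulting triples back out as N-Triples. The sketch uses rdflib; the library choice is incidental.

    from rdflib import Graph

    rdf_xml = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="http://www.w3.org/TR/rdf-syntax-grammar">
        <dc:title>RDF/XML Syntax Specification (Revised)</dc:title>
      </rdf:Description>
    </rdf:RDF>"""

    g = Graph()
    g.parse(data=rdf_xml, format="xml")
    print(g.serialize(format="nt"))
    # <http://www.w3.org/TR/rdf-syntax-grammar> <http://purl.org/dc/elements/1.1/title> "RDF/XML Syntax Specification (Revised)" .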
  15. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.00
    Abstract
    Topic Maps are a standardized modelling approach for the semantic annotation and description of WWW resources. They enable improved search of, and navigational access to, information objects stored in semi-structured information spaces like the WWW. However, the corresponding standards ISO 13250 and XTM (XML Topic Maps) lack formal semantics; several questions concerning e.g. subclassing, inheritance or merging of topics are left open. The proposed TMUML meta model, directly derived from the well-known UML meta model, is a meta model for Topic Maps which enables semantic constraints to be formulated in OCL (Object Constraint Language) in order to answer such open questions and overcome possible inconsistencies in Topic Map repositories. We will examine the XTM merging conditions and show, in several examples, how the TMUML meta model enables semantic constraints for Topic Map merging to be formulated in OCL. Finally, we will show how the TM validation process, i.e., checking if a Topic Map is well formed, includes our merging conditions.
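    The XTM merging conditions can be stated operationally: two topics merge when they share subject identity (or, under the topic naming constraint, a base name in the same scope). A deliberately simplified, single-pass sketch of merging by shared subject identifiers:

    # Each topic: {'ids': set of subject identifiers, 'names': set of names}.
    # Real merging is transitive (a fixed point); one pass suffices here.
    def merge_topics(topics):
        merged = []
        for t in topics:
            target = next((m for m in merged if m["ids"] & t["ids"]), None)
            if target:                       # shared subject identity -> merge
                target["ids"] |= t["ids"]
                target["names"] |= t["names"]
            else:
                merged.append({"ids": set(t["ids"]), "names": set(t["names"])})
        return merged

    tm = [{"ids": {"http://psi.example.org/puccini"}, "names": {"Puccini"}},
          {"ids": {"http://psi.example.org/puccini"}, "names": {"Giacomo Puccini"}}]
    print(merge_topics(tm))   # one topic carrying both names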
    Series
    Lecture notes in computer science; vol. 2870
  16. Broughton, V.: Language related problems in the construction of faceted terminologies and their automatic management (2008) 0.00
    Content
    The paper describes current work on the generation of a thesaurus format from the schedules of the Bliss Bibliographic Classification, 2nd edition (BC2). The practical problems that occur in moving from a concept-based approach to a terminological approach cluster around issues of vocabulary control that are not fully addressed in a systematic structure. These difficulties can be exacerbated in domains in the humanities, because large numbers of culture-specific terms may need to be accommodated in any thesaurus. The ways in which these problems can be resolved within the context of a semi-automated approach to thesaurus generation have consequences for the management of classification data in the source vocabulary. The way in which the vocabulary is marked up for the purpose of machine manipulation is described, and some of the implications for editorial policy are discussed, with examples given. The value of the classification notation as a language-independent representation and mapping tool should not be sacrificed in such an exercise.
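    One simplification of such semi-automated generation: where the notation is expressive of hierarchy, the longest proper prefix of a caption's notation that also appears in the schedule yields its broader term. The rows below are invented, and in real BC2 data the hierarchy is given by the schedule layout rather than guaranteed by notation prefixes.

    # Toy schedule: notation -> caption. Broader term found by prefix search.
    schedule = {"K": "society", "KV": "social welfare", "KVK": "child welfare"}

    def broader(notation):
        for i in range(len(notation) - 1, 0, -1):
            if notation[:i] in schedule:
                return notation[:i]
        return None

    for n, caption in schedule.items():
        bt = broader(n)
        print(f"{caption}  BT: {schedule[bt] if bt else '-'}")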
    Series
    Advances in knowledge organization; vol.11
    Source
    Culture and identity in knowledge organization: Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada. Ed. by Clément Arsenault and Joseph T. Tennis
  17. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.00
    Abstract
    In this thesis, we present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies poses a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as on how to fuse the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing, revealing knowledge down to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, and discuss which kinds of problems are caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. Some major problems of shared nodes are discussed; they relate to the way relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the utilization of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced, and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, the fuzzy set model appears to provide the flexibility needed when generalizing to an ontology-based retrieval model; with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
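    The shared nodes idea admits a compact reading: similarity as the overlap of the upwards closures (ancestor sets) of two concepts. A sketch of one such symmetric measure; the thesis's exact weighting of relation types is not reproduced here, and the toy hierarchy is invented.

    # Ancestor sets per concept; similarity = average shared fraction of each
    # concept's upwards closure.
    HIERARCHY = {"cat": {"feline", "mammal", "animal"},
                 "dog": {"canine", "mammal", "animal"}}

    def shared_nodes(a: str, b: str) -> float:
        up_a = HIERARCHY[a] | {a}
        up_b = HIERARCHY[b] | {b}
        shared = up_a & up_b
        return 0.5 * (len(shared) / len(up_a) + len(shared) / len(up_b))

    print(shared_nodes("cat", "dog"))   # 0.5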
    Content
    A dissertation presented to the Faculties of Roskilde University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Cf.: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.117.987 or http://coitweb.uncc.edu/~ras/RS/Onto-Retrieval.pdf.
  18. Park, O.n.: Opening ontology design : a study of the implications of knowledge organization for ontology design (2008) 0.00
    Abstract
    It is proposed that research into ontology design has so far been insufficient, and that this deficiency has limited the ability of ontologies to support communication frameworks, knowledge sharing and re-use applications. In order to diagnose the problems of ontology research, I first survey the notion of ontology in the context of ontology design, based on a means-ends tool provided by Cognitive Work Analysis. The potential contributions of knowledge organization in library and information science that can be used to overcome the limitations of ontology research are demonstrated. I propose a context-centered view as an approach to ontology design, and present faceted classification as an appropriate method for structuring ontologies. In addition, I provide a case study of a wine ontology to demonstrate how knowledge organization approaches in library and information science can improve ontology design.
  19. Pepper, S.; Groenmo, G.O.: Towards a general theory of scope (2002) 0.00
    Abstract
    This paper is concerned with the issue of scope in topic maps. Topic maps are a form of knowledge representation suitable for solving a number of complex problems in the area of information management, ranging from findability (navigation and querying) to knowledge management and enterprise application integration (EAI). The topic map paradigm has its roots in efforts to understand the essential semantics of back-of-book indexes in order that they might be captured in a form suitable for computer processing. Once understood, the model of a back-of-book index was generalised in order to cover the needs of digital information, and extended to encompass glossaries and thesauri, as well as indexes. The resulting core model, of typed topics, associations, and occurrences, has many similarities with the semantic networks developed by the artificial intelligence community for representing knowledge structures. One key requirement of topic maps from the earliest days was to be able to merge indexes from disparate origins. This requirement accounts for two further concepts that greatly enhance the power of topic maps: subject identity and scope. This paper concentrates on scope, but also includes a brief discussion of the closely related feature known as the topic naming constraint. It is based on the authors' experience in creating topic maps (in particular, the Italian Opera Topic Map) and in implementing processing systems for topic maps (in particular, the Ontopia Topic Map Engine and Navigator).
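    Scope, on the reading discussed here, acts as a filter on topic characteristics: a name or occurrence is visible in a user's context when its scope is compatible with that context. A minimal sketch under one common interpretation (scope as a set of themes, visible when it is a subset of the active context):

    # Characteristics carry a scope; the empty scope is unconstrained.
    def visible(characteristics, context):
        return [c for c in characteristics if c["scope"] <= context]

    names = [{"value": "Norway", "scope": {"english"}},
             {"value": "Norge", "scope": {"norwegian"}},
             {"value": "Norway", "scope": set()}]      # unconstrained
    print(visible(names, {"english"}))
    # -> the English name and the unconstrained name are visible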
  20. Fensel, D.; Staab, S.; Studer, R.; Harmelen, F. van; Davies, J.: ¬A future perspective : exploiting peer-to-peer and the Semantic Web for knowledge management (2004) 0.00
    Abstract
    Over the past few years, we have seen a growing interest in the potential of both peer-to-peer (P2P) computing and the use of more formal approaches to knowledge management, involving the development of ontologies. This penultimate chapter discusses possibilities that both approaches may offer for more effective and efficient knowledge management. In particular, we investigate how the two paradigms may be combined. In this chapter, we describe our vision in terms of a set of future steps that need to be taken to bring the results described in earlier chapters to their full potential.

Types

  • a 104
  • el 64
  • m 8
  • n 8
  • s 5
  • x 4
  • p 1

Subjects