Search (110 results, page 6 of 6)

  • theme_ss:"Wissensrepräsentation"
  • type_ss:"el"
  1. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.00
    3.5800604E-4 = product of:
      0.0050120843 = sum of:
        0.0050120843 = weight(_text_:information in 761) [ClassicSimilarity], result of:
          0.0050120843 = score(doc=761,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.116372846 = fieldWeight in 761, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=761)
      0.071428575 = coord(1/14)
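    The breakdown above is Lucene's "explain" output for its classic TF-IDF similarity; the same tree recurs, with different field statistics, for every hit on this page. A minimal sketch of the arithmetic in Python, reproducing the numbers shown; the variable names are ours, and the formulas are Lucene's ClassicSimilarity defaults (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))):

      import math

      # Field statistics reported in the explain tree above.
      freq = 2.0             # occurrences of "information" in the field
      doc_freq = 20772       # documents containing the term
      max_docs = 44218       # documents in the index
      query_norm = 0.02453417
      field_norm = 0.046875  # encoded field-length normalization
      coord = 1 / 14         # 1 of 14 query clauses matched

      tf = math.sqrt(freq)                           # 1.4142135
      idf = 1 + math.log(max_docs / (doc_freq + 1))  # 1.7554779

      query_weight = idf * query_norm                # 0.04306919  = queryWeight
      field_weight = tf * idf * field_norm           # 0.116372846 = fieldWeight

      print(query_weight * field_weight * coord)     # ~3.5800604e-4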
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic (CL). While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should, as far as possible, not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins, as it were, with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
  2. Krötzsch, M.; Hitzler, P.; Ehrig, M.; Sure, Y.: Category theory in ontology research : concrete gain from an abstract approach (2004 (?)) 0.00
    3.5800604E-4 = product of:
      0.0050120843 = sum of:
        0.0050120843 = weight(_text_:information in 4538) [ClassicSimilarity], result of:
          0.0050120843 = score(doc=4538,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.116372846 = fieldWeight in 4538, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4538)
      0.071428575 = coord(1/14)
    
    Abstract
    The focus of research on representing and reasoning with knowledge has traditionally been on single specifications and appropriate inference paradigms to draw conclusions from such data. Accordingly, this is also an essential aspect of ontology research, which has received much attention in recent years. But ontologies introduce a further challenge based on the distributed nature of most of their applications, which requires relating heterogeneous ontological specifications and integrating information from multiple sources. These problems have of course been recognized, but many current approaches still lack the deep formal background on which today's reasoning paradigms are already founded. Here we propose category theory as a well-explored and very extensive mathematical foundation for modelling distributed knowledge. A particular prospect is to derive conclusions from the structure of those distributed knowledge bases, as is needed, for example, when merging ontologies.
  3. Gayathri, R.; Uma, V.: Ontology based knowledge representation technique, domain modeling languages and planners for robotic path planning : a survey (2018) 0.00
    3.5800604E-4 = product of:
      0.0050120843 = sum of:
        0.0050120843 = weight(_text_:information in 5605) [ClassicSimilarity], result of:
          0.0050120843 = score(doc=5605,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.116372846 = fieldWeight in 5605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5605)
      0.071428575 = coord(1/14)
    
    Abstract
    Knowledge Representation and Reasoning (KR & R) has become one of the most promising fields of Artificial Intelligence. KR is dedicated to representing information about a domain that can be utilized in path planning. Ontology-based knowledge representation and reasoning techniques provide sophisticated knowledge about the environment for processing tasks or methods. Ontology helps in representing knowledge about the environment, events, and actions, which aids path planning and makes robots more autonomous. Knowledge reasoning techniques can infer new conclusions and thus aid planning dynamically in a non-deterministic environment. In the initial sections, the representation of knowledge using ontology and the reasoning techniques that could contribute to path planning are discussed in detail. In the following section, we also provide a comparison of various planning domain modeling languages, ontology editors, planners, and robot simulation tools.
  4. Weller, K.: Ontologien: Stand und Entwicklung der Semantik für WorldWideWeb (2009) 0.00
    2.9833836E-4 = product of:
      0.004176737 = sum of:
        0.004176737 = weight(_text_:information in 4425) [ClassicSimilarity], result of:
          0.004176737 = score(doc=4425,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 4425, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4425)
      0.071428575 = coord(1/14)
    
    Abstract
    The idea of a Semantic Web was decisively shaped (though not initiated) by a publication by Tim Berners-Lee, James Hendler, and Ora Lassila in 2001. In it, the authors sketch their vision of an extended and improved World Wide Web: data is to be prepared in such a way that not only humans can read it, but computers are also enabled to process it and combine it meaningfully. They describe a scenario in which "Web agents" assist the user in carrying out complex search queries, for example "find a doctor who offers a particular treatment, whose practice is close to my home, and whose office hours fit my calendar". The great challenge here is that information distributed across several websites must be collected and combined into a meaningful answer; this is known as the problem of information integration. This vision of worldwide data integration in a Semantic Web has since been widely discussed, extended, and modified, and a large number of research institutions are working on its technical realization. There is agreement that such an idea can only be realized with the help of new, meaning-bearing metadata. What is needed, therefore, are new approaches to indexing Web content that enable searching over word meanings rather than mere character strings. For example, it should be recognized that "Heinrich Heine" is the name of a person and "Düsseldorf" the name of a city. Beyond that, connections between individual units of information should also be recorded, for instance that Heinrich Heine lived in Düsseldorf. If such semantic relations are employed consistently, they can in many cases be exploited to draw new inferences.
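    The Heine example above is exactly what RDF triples and a semantic query express. A minimal sketch (our own illustration, not from the article) using rdflib, with a hypothetical http://example.org/ namespace:

      from rdflib import Graph, Namespace, RDF

      EX = Namespace("http://example.org/")  # hypothetical namespace
      g = Graph()

      # Typed entities: "Heinrich Heine" names a person, "Düsseldorf" a city.
      g.add((EX.HeinrichHeine, RDF.type, EX.Person))
      g.add((EX.Duesseldorf, RDF.type, EX.City))

      # A semantic relation between the two units of information.
      g.add((EX.HeinrichHeine, EX.livedIn, EX.Duesseldorf))

      # A query over meanings rather than character strings:
      # which persons lived in which cities?
      q = """
      PREFIX ex: <http://example.org/>
      SELECT ?person ?city WHERE {
          ?person a ex:Person ; ex:livedIn ?city .
          ?city a ex:City .
      }"""
      for row in g.query(q):
          print(row.person, row.city)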
  5. OWL Web Ontology Language Guide (2004) 0.00
    2.9833836E-4 = product of:
      0.004176737 = sum of:
        0.004176737 = weight(_text_:information in 4687) [ClassicSimilarity], result of:
          0.004176737 = score(doc=4687,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 4687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4687)
      0.071428575 = coord(1/14)
    
    Abstract
    The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the documents and capabilities available is based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web-accessible resources. These descriptions must exist in addition to the human-readable versions of that information. The OWL Web Ontology Language is intended to provide a language that can be used to describe the classes inherent in Web documents and applications, and the relations between them. This document demonstrates the use of the OWL language to (1) formalize a domain by defining classes and properties of those classes, (2) define individuals and assert properties about them, and (3) reason about these classes and individuals to the degree permitted by the formal semantics of the OWL language. The sections are organized to present an incremental definition of a set of classes, properties, and individuals, beginning with the fundamentals and proceeding to more complex language components.
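    The three numbered steps map directly onto OWL statements. A minimal sketch with rdflib, echoing the Guide's wine-ontology theme (the namespace and names here are illustrative, not the Guide's exact vocabulary, and rdflib only stores and serializes the triples; step 3 would be delegated to a DL reasoner):

      from rdflib import Graph, Namespace, RDF, RDFS, OWL

      WINE = Namespace("http://example.org/wine#")  # illustrative namespace
      g = Graph()
      g.bind("wine", WINE)

      # (1) Formalize the domain: classes and properties of those classes.
      g.add((WINE.Wine, RDF.type, OWL.Class))
      g.add((WINE.Winery, RDF.type, OWL.Class))
      g.add((WINE.hasMaker, RDF.type, OWL.ObjectProperty))
      g.add((WINE.hasMaker, RDFS.domain, WINE.Wine))
      g.add((WINE.hasMaker, RDFS.range, WINE.Winery))

      # (2) Define individuals and assert properties about them.
      g.add((WINE.ElyseZinfandel, RDF.type, WINE.Wine))
      g.add((WINE.ElyseZinfandel, WINE.hasMaker, WINE.Elyse))

      # (3) A DL reasoner would now infer, for example, that WINE.Elyse
      # is a Winery from the rdfs:range of hasMaker.
      print(g.serialize(format="turtle"))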
  6. Cregan, A.: An OWL DL construction for the ISO Topic Map Data Model (2005) 0.00
    2.9833836E-4 = product of:
      0.004176737 = sum of:
        0.004176737 = weight(_text_:information in 4718) [ClassicSimilarity], result of:
          0.004176737 = score(doc=4718,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 4718, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4718)
      0.071428575 = coord(1/14)
    
    Abstract
    Both Topic Maps and the W3C Semantic Web technologies are meta-level semantic maps describing relationships between information resources. Previous attempts at interoperability between XTM Topic Maps and RDF have proved problematic. The ISO's drafting of an explicit Topic Map Data Model [TMDM 05], combined with the advent of the W3C's XML- and RDF-based, Description Logic-equivalent Web Ontology Language [OWLDL 04], now provides the means for the construction of an unambiguous semantic model to represent Topic Maps, in a form that is equivalent to a Description Logic representation. This paper describes the construction of the proposed TMDM ISO Topic Map Standard in OWL DL (Description Logic equivalent) form. The construction is claimed to exactly match the features of the proposed TMDM. The intention is that the topic map constructs described herein, once officially published on the World Wide Web, may be used by Topic Map authors to construct their Topic Maps in OWL DL. The advantage of OWL DL Topic Map construction over XTM, the existing XML-based DTD standard, is that OWL DL allows many constraints to be explicitly stated. OWL DL's suite of tools, although currently still somewhat immature, will provide the means for both querying and enforcing constraints. This goes a long way towards fulfilling the requirements for a Topic Map Query Language (TMQL) and Constraint Language (TMCL), which the Topic Map community may choose to expend effort on extending. Additionally, OWL DL has a clearly defined formal semantics (Description Logic ref).
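    One advantage the abstract cites - that OWL DL, unlike the XTM DTD, can state constraints explicitly - can be illustrated concretely. A hypothetical sketch (our own, not Cregan's actual construction): every Topic must carry at least one name.

      from rdflib import Graph, Namespace, BNode, Literal, RDF, RDFS, OWL
      from rdflib.namespace import XSD

      TM = Namespace("http://example.org/tmdm#")  # hypothetical namespace
      g = Graph()

      g.add((TM.Topic, RDF.type, OWL.Class))
      g.add((TM.hasTopicName, RDF.type, OWL.ObjectProperty))

      # OWL DL restriction: a Topic has at least one hasTopicName value.
      restriction = BNode()
      g.add((restriction, RDF.type, OWL.Restriction))
      g.add((restriction, OWL.onProperty, TM.hasTopicName))
      g.add((restriction, OWL.minCardinality,
             Literal(1, datatype=XSD.nonNegativeInteger)))
      g.add((TM.Topic, RDFS.subClassOf, restriction))

      # A DL reasoner or query engine can now check this constraint -
      # something a DTD-based XTM document cannot express.
      print(g.serialize(format="turtle"))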
  7. Frické, M.: Logical division (2016) 0.00
    2.9833836E-4 = product of:
      0.004176737 = sum of:
        0.004176737 = weight(_text_:information in 3183) [ClassicSimilarity], result of:
          0.004176737 = score(doc=3183,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 3183, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3183)
      0.071428575 = coord(1/14)
    
    Abstract
    Division is obviously important to Knowledge Organization. Typically, an organizational infrastructure might acknowledge three types of connecting relationships: class hierarchies, where some classes are subclasses of others; partitive hierarchies, where some items are parts of others; and instantiation, where some items are members of some classes (see Z39.19 ANSI/NISO 2005 as an example). The first two of these involve division (the third, instantiation, does not). Logical division would usually be a part of hierarchical classification systems, which, in turn, are central to shelving in libraries, to subject classification schemes, to controlled vocabularies, and to thesauri. Partitive hierarchies, and partitive division, are often essential to controlled vocabularies, thesauri, and subject tagging systems. Partitive hierarchies also relate to the bearers of information; for example, a journal would typically have its component articles as parts and, in turn, those articles might have sections as their parts, and, of course, components might be arrived at by partitive division (see Tillett 2009 as an illustration). Finally, verbal division, disambiguating homographs, is basic to controlled vocabularies. Division is thus a broad and relevant topic. This article, though, focuses on Logical Division.
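    The three connecting relationships the article distinguishes are easy to conflate; a minimal sketch (our own illustration, with made-up class and item names) keeps them apart:

      # Class hierarchy: some classes are subclasses of others.
      subclass_of = {"Poetry": "Literature", "Literature": "Arts"}

      # Partitive hierarchy: some items are parts of others.
      part_of = {"section": "article", "article": "journal"}

      # Instantiation: some items are members of some classes.
      instance_of = {"Buch der Lieder": "Poetry"}

      def broader_classes(cls):
          """Walk the class hierarchy upward; logical division runs downward."""
          while cls in subclass_of:
              cls = subclass_of[cls]
              yield cls

      print(list(broader_classes("Poetry")))  # ['Literature', 'Arts']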
  8. Veltman, K.H.: Towards a Semantic Web for culture (2004) 0.00
    2.3867069E-4 = product of:
      0.0033413896 = sum of:
        0.0033413896 = weight(_text_:information in 4040) [ClassicSimilarity], result of:
          0.0033413896 = score(doc=4040,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.0775819 = fieldWeight in 4040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4040)
      0.071428575 = coord(1/14)
    
    Source
    Journal of digital information. 4(2004), no.4
  9. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.00
    2.3867069E-4 = product of:
      0.0033413896 = sum of:
        0.0033413896 = weight(_text_:information in 4639) [ClassicSimilarity], result of:
          0.0033413896 = score(doc=4639,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.0775819 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
      0.071428575 = coord(1/14)
    
    Abstract
    We refine the problem statement into three research questions. The first two focus on the problem of converting a vocabulary from its original format to a Semantic Web representation. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
  10. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.00
    2.3867069E-4 = product of:
      0.0033413896 = sum of:
        0.0033413896 = weight(_text_:information in 4796) [ClassicSimilarity], result of:
          0.0033413896 = score(doc=4796,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.0775819 = fieldWeight in 4796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4796)
      0.071428575 = coord(1/14)
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as the Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data; a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].

Languages

  • e 95
  • d 14

Types

  • a 47
  • n 6
  • x 4
  • r 3
  • p 2
  • s 1