Search (76 results, page 1 of 4)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.12
    0.12421139 = product of:
      0.24842279 = sum of:
        0.062105697 = product of:
          0.18631709 = sum of:
            0.18631709 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.18631709 = score(doc=400,freq=2.0), product of:
                0.33151442 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039102852 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.18631709 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.18631709 = score(doc=400,freq=2.0), product of:
            0.33151442 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039102852 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.5 = coord(2/4)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
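    The indented score breakdowns shown with each hit are Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking. A minimal sketch of how the figures reported for hit 1 combine - the variable names are illustrative, only the numbers come from the breakdown above:

      import math

      # Figures from the explain tree of hit 1 (doc 400), term "3a"
      freq       = 2.0          # term frequency in the matched field
      idf        = 8.478011     # inverse document frequency (docFreq=24, maxDocs=44218)
      query_norm = 0.039102852  # query normalisation factor
      field_norm = 0.046875     # field length normalisation

      tf           = math.sqrt(freq)              # ~1.4142135
      query_weight = idf * query_norm             # ~0.33151442 (queryWeight)
      field_weight = tf * idf * field_norm        # ~0.56201804 (fieldWeight)
      term_score   = query_weight * field_weight  # ~0.18631709

      # The "3a" clause is scaled by coord(1/3); the "2f" clause contributes the
      # same value here; both are then scaled by coord(2/4).
      total = (term_score * 1/3 + term_score) * 2/4
      print(round(total, 8))                      # ~0.12421139, as displayed for hit 1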
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.11
    0.108532615 = product of:
      0.21706523 = sum of:
        0.0414038 = product of:
          0.12421139 = sum of:
            0.12421139 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.12421139 = score(doc=5820,freq=2.0), product of:
                0.33151442 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039102852 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.17566143 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.17566143 = score(doc=5820,freq=4.0), product of:
            0.33151442 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039102852 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.5 = coord(2/4)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.08
    0.0828076 = product of:
      0.1656152 = sum of:
        0.0414038 = product of:
          0.12421139 = sum of:
            0.12421139 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.12421139 = score(doc=701,freq=2.0), product of:
                0.33151442 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039102852 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.12421139 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.12421139 = score(doc=701,freq=2.0), product of:
            0.33151442 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039102852 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.5 = coord(2/4)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.07
    0.07195565 = product of:
      0.1439113 = sum of:
        0.1333155 = weight(_text_:logic in 2623) [ClassicSimilarity], result of:
          0.1333155 = score(doc=2623,freq=4.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.56535566 = fieldWeight in 2623, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.010595793 = product of:
          0.031787377 = sum of:
            0.031787377 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
              0.031787377 = score(doc=2623,freq=2.0), product of:
                0.13693152 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039102852 = queryNorm
                0.23214069 = fieldWeight in 2623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2623)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories - attribute/value-propagation, value-propagation, and value-constraint - and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
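    As a hedged illustration (predicate names invented, not the authors' formalisation): a value-propagation reading of the collection-level statement "collection c consists of English-language documents" can be written as

      \forall x\, (\mathrm{isGatheredInto}(x, c) \rightarrow \mathrm{language}(x, \mathrm{English}))

    while a value-constraint category restricts, rather than asserts, item-level values; the abstract's point is that keeping such readings precisely apart calls for modal operators (e.g. a necessity operator prefixed to the formula above) on top of the first-order part.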
  5. Priss, U.: Description logic and faceted knowledge representation (1999) 0.07
    0.07195565 = product of:
      0.1439113 = sum of:
        0.1333155 = weight(_text_:logic in 2655) [ClassicSimilarity], result of:
          0.1333155 = score(doc=2655,freq=4.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.56535566 = fieldWeight in 2655, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=2655)
        0.010595793 = product of:
          0.031787377 = sum of:
            0.031787377 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
              0.031787377 = score(doc=2655,freq=2.0), product of:
                0.13693152 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039102852 = queryNorm
                0.23214069 = fieldWeight in 2655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2655)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
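    A minimal sketch of the modular view of facets described above, assuming nothing beyond the Python standard library; the facet names and values are invented for illustration, and faceted synthesis is reduced to picking one value per facet:

      from itertools import product

      # Each facet is an independent classification module.
      facets = {
          "discipline": ["medicine", "agriculture"],
          "operation":  ["diagnosis", "harvesting"],
          "agent":      ["human", "machine"],
      }

      # Synthesis: a compound subject combines one value from each facet.
      compound_subjects = [dict(zip(facets, values)) for values in product(*facets.values())]

      print(len(compound_subjects))  # 8 compound subjects from 6 facet values
      print(compound_subjects[0])    # {'discipline': 'medicine', 'operation': 'diagnosis', 'agent': 'human'}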
  6. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.05
    0.052697577 = product of:
      0.2107903 = sum of:
        0.2107903 = weight(_text_:logic in 761) [ClassicSimilarity], result of:
          0.2107903 = score(doc=761,freq=10.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.89390576 = fieldWeight in 761, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=761)
      0.25 = coord(1/4)
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins as it were with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
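    A hedged illustration of the "metaphysically homogeneous domain" point (vocabulary invented, not taken from the paper): Common Logic lets the same name occur in predicate position and in argument position, so one can state both

      \mathrm{ancestorOf}(\mathrm{Abraham}, \mathrm{Isaac}) \;\wedge\; \mathrm{Transitive}(\mathrm{ancestorOf})

    in a single theory, whereas conventional first-order syntax would force ancestorOf to be either a relation symbol or a term, never both.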
  7. Miller, R.: Three problems in logic-based knowledge representation (2006) 0.05
    0.047134142 = product of:
      0.18853657 = sum of:
        0.18853657 = weight(_text_:logic in 660) [ClassicSimilarity], result of:
          0.18853657 = score(doc=660,freq=8.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.7995336 = fieldWeight in 660, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=660)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this article is to give a non-technical overview of some of the technical progress made recently on tackling three fundamental problems in the area of formal knowledge representation/artificial intelligence. These are the Frame Problem, the Ramification Problem, and the Qualification Problem. The article aims to describe the development of two logic-based languages, the Event Calculus and Modular-E, to address various aspects of these issues. The article also aims to set this work in the wider context of contemporary developments in applied logic, non-monotonic reasoning and formal theories of common sense. Design/methodology/approach - The study applies symbolic logic to model aspects of human knowledge and reasoning. Findings - The article finds that there are fundamental interdependencies between the three problems mentioned above. The conceptual framework shared by the Event Calculus and Modular-E is appropriate for providing principled solutions to them. Originality/value - This article provides an overview of an important approach to dealing with three fundamental issues in artificial intelligence.
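    For orientation, a standard simplified Event Calculus axiom of the kind such languages build on - textbook Event Calculus, not a formula quoted from the article: a fluent f holds at time t if some earlier event initiated it and nothing has clipped it in between,

      \mathrm{HoldsAt}(f, t) \leftarrow \mathrm{Happens}(e, t_1) \wedge \mathrm{Initiates}(e, f, t_1) \wedge t_1 < t \wedge \neg \mathrm{Clipped}(t_1, f, t)

    with the negated Clipped condition handled non-monotonically, which is exactly where the Frame, Ramification and Qualification Problems make themselves felt.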
  8. Cregan, A.: ¬An OWL DL construction for the ISO Topic Map Data Model (2005) 0.04
    0.03927845 = product of:
      0.1571138 = sum of:
        0.1571138 = weight(_text_:logic in 4718) [ClassicSimilarity], result of:
          0.1571138 = score(doc=4718,freq=8.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.666278 = fieldWeight in 4718, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4718)
      0.25 = coord(1/4)
    
    Abstract
    Both Topic Maps and the W3C Semantic Web technologies are meta-level semantic maps describing relationships between information resources. Previous attempts at interoperability between XTM Topic Maps and RDF have proved problematic. The ISO's drafting of an explicit Topic Map Data Model [TMDM 05], combined with the advent of the W3C's XML- and RDF-based Description Logic-equivalent Web Ontology Language [OWLDL 04], now provides the means for the construction of an unambiguous semantic model to represent Topic Maps, in a form that is equivalent to a Description Logic representation. This paper describes the construction of the proposed TMDM ISO Topic Map Standard in OWL DL (Description Logic equivalent) form. The construction is claimed to exactly match the features of the proposed TMDM. The intention is that the topic map constructs described herein, once officially published on the world-wide web, may be used by Topic Map authors to construct their Topic Maps in OWL DL. The advantage of OWL DL Topic Map construction over XTM, the existing XML-based DTD standard, is that OWL DL allows many constraints to be explicitly stated. OWL DL's suite of tools, although currently still somewhat immature, will provide the means for both querying and enforcing constraints. This goes a long way towards fulfilling the requirements for a Topic Map Query Language (TMQL) and Constraint Language (TMCL), which the Topic Map Community may choose to expend effort on extending. Additionally, OWL DL has a clearly defined formal semantics (Description Logic ref)
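    A minimal sketch, assuming the Python rdflib library, of what stating one TMDM-style construct in OWL might look like; the namespace, class and property names are illustrative and are not the identifiers of Cregan's published construction:

      from rdflib import Graph, Namespace, RDF, RDFS, OWL

      TM = Namespace("http://example.org/tmdm#")  # hypothetical namespace
      g = Graph()
      g.bind("tm", TM)

      # Topic and TopicName as OWL classes
      g.add((TM.Topic, RDF.type, OWL.Class))
      g.add((TM.TopicName, RDF.type, OWL.Class))

      # hasTopicName as an object property with an explicit domain and range -
      # the sort of constraint an XML DTD cannot state directly.
      g.add((TM.hasTopicName, RDF.type, OWL.ObjectProperty))
      g.add((TM.hasTopicName, RDFS.domain, TM.Topic))
      g.add((TM.hasTopicName, RDFS.range, TM.TopicName))

      print(g.serialize(format="turtle"))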
  9. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.04
    0.03927845 = product of:
      0.1571138 = sum of:
        0.1571138 = weight(_text_:logic in 3934) [ClassicSimilarity], result of:
          0.1571138 = score(doc=3934,freq=8.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.666278 = fieldWeight in 3934, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3934)
      0.25 = coord(1/4)
    
    LCSH
    Mathematical logic
    Mathematical Logic and Formal Languages
    Subject
    Mathematical logic
    Mathematical Logic and Formal Languages
  10. Balakrishnan, U.; Soergel, D.; Helfer, O.: Representing concepts through description logic expressions for knowledge organization system (KOS) mapping (2020) 0.04
    0.03927845 = product of:
      0.1571138 = sum of:
        0.1571138 = weight(_text_:logic in 144) [ClassicSimilarity], result of:
          0.1571138 = score(doc=144,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.666278 = fieldWeight in 144, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.078125 = fieldNorm(doc=144)
      0.25 = coord(1/4)
    
  11. Andreas, H.: On frames and theory-elements of structuralism (2014) 0.03
    0.03401614 = product of:
      0.13606456 = sum of:
        0.13606456 = weight(_text_:logic in 3402) [ClassicSimilarity], result of:
          0.13606456 = score(doc=3402,freq=6.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.57701373 = fieldWeight in 3402, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3402)
      0.25 = coord(1/4)
    
    Abstract
    There are quite a few success stories illustrating philosophy's relevance to information science. One can cite, for example, Leibniz's work on a characteristica universalis and a corresponding calculus ratiocinator through which he aspired to reduce reasoning to calculating. It goes without saying that formal logic initiated research on decidability and computational complexity. But even beyond the realm of formal logic, philosophy has served as a source of inspiration for developments in information and computer science. At the end of the twentieth century, formal ontology emerged from a quest for a semantic foundation of information systems having a higher reusability than systems being available at the time. A success story that is less well documented is the advent of frame systems in computer science. Minsky is credited with having laid out the foundational ideas of such systems. There, the logic programming approach to knowledge representation is criticized by arguing that one should be more careful about the way human beings recognize objects and situations. Notably, the paper draws heavily on the writings of Kuhn and the Gestalt-theorists. It is not our intent, however, to document the traces of the frame idea in the works of philosophers. What follows is, rather, an exposition of a methodology for representing scientific knowledge that is essentially frame-like. This methodology is labelled as structuralist theory of science or, in short, as structuralism. The frame-like character of its basic meta-theoretical concepts makes structuralism likely to be useful in knowledge representation.
  12. Thomer, A.; Cheng, Y.-Y.; Schneider, J.; Twidale, M.; Ludäscher, B.: Logic-based schema alignment for natural history museum databases (2017) 0.03
    0.03401614 = product of:
      0.13606456 = sum of:
        0.13606456 = weight(_text_:logic in 4131) [ClassicSimilarity], result of:
          0.13606456 = score(doc=4131,freq=6.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.57701373 = fieldWeight in 4131, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4131)
      0.25 = coord(1/4)
    
    Abstract
    In natural history museums, knowledge organization systems have gradually been migrated from paper-based catalog ledgers to electronic databases; these databases in turn must be migrated from one platform or software version to another. These migrations are by no means straightforward, particularly when one data schema must be mapped to another - or when a database has been used in an other-than-intended manner. There are few tools or methods available to support the necessary work of comparing divergent data schemas. Here we present a proof-of-concept in which we compare two versions of a subset of the Specify 6 data model using Euler/X, a logic-based reasoning tool. Specify 6 is a popular natural history museum database system whose data model has undergone several changes over its lifespan. We use Euler/X to produce visualizations (called "possible worlds") of the different ways that two versions of this data model might be mapped to one another. This proof-of-concept lays groundwork for further approaches that could aid data curators in database migration and maintenance work. It also contributes to research on the unique challenges to knowledge organization within natural history museums, and on the applicability of logic-based approaches to database schema migration or crosswalking.
  13. Frické, M.: Logic and the organization of information (2012) 0.03
    0.033674262 = product of:
      0.13469705 = sum of:
        0.13469705 = weight(_text_:logic in 1782) [ClassicSimilarity], result of:
          0.13469705 = score(doc=1782,freq=12.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.57121444 = fieldWeight in 1782, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
      0.25 = coord(1/4)
    
    Abstract
    Logic and the Organization of Information closely examines the historical and contemporary methodologies used to catalogue information objects - books, ebooks, journals, articles, web pages, images, emails, podcasts and more - in the digital era. This book provides an in-depth technical background for digital librarianship, and covers a broad range of theoretical and practical topics including: classification theory, topic annotation, automatic clustering, generalized synonymy and concept indexing, distributed libraries, semantic web ontologies and Simple Knowledge Organization System (SKOS). It also analyzes the challenges facing today's information architects, and outlines a series of techniques for overcoming them. Logic and the Organization of Information is intended for practitioners and professionals working at a design level as a reference book for digital librarianship. Advanced-level students, researchers and academics studying information science, library science, digital libraries and computer science will also find this book invaluable.
    Footnote
    Review in: J. Doc. 70(2014) no.4: "Books on the organization of information and knowledge, aimed at a library/information audience, tend to fall into two clear categories. Most are practical and pragmatic, explaining the "how" as much or more than the "why". Some are theoretical, in part or in whole, showing how the practice of classification, indexing, resource description and the like relates to philosophy, logic, and other foundational bases; the books by Langridge (1992) and by Svenonius (2000) are well-known examples of this latter kind. To this category certainly belongs a recent book by Martin Frické (2012). The author takes the reader for an extended tour through a variety of aspects of information organization, including classification and taxonomy, alphabetical vocabularies and indexing, cataloguing and FRBR, and aspects of the semantic web. The emphasis throughout is on showing how practice is, or should be, underpinned by formal structures; there is a particular emphasis on first order predicate calculus. The advantages of a greater, and more explicit, use of symbolic logic are a recurring theme of the book. There is a particularly commendable historical dimension, often omitted in texts on this subject. It cannot be said that this book is entirely an easy read, although it is well written with a helpful index, and its arguments are generally well supported by clear and relevant examples. It is thorough and detailed, but thereby seems better geared to the needs of advanced students and researchers than to the practitioners who are suggested as a main market. For graduate students in library/information science and related disciplines, in particular, this will be a valuable resource. I would place it alongside Svenonius' book as the best insight into the theoretical "why" of information organization. It has evoked a good deal of interest, including a set of essay commentaries in Journal of Information Science (Gilchrist et al., 2013). Introducing these, Alan Gilchrist rightly says that Frické deserves a salute for making explicit the fundamental relationship between the ancient discipline of logic and modern information organization. If information science is to continue to develop, and make a contribution to the organization of the information environments of the future, then this book sets the groundwork for the kind of studies which will be needed." (D. Bawden)
  14. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.03
    0.030740254 = product of:
      0.122961015 = sum of:
        0.122961015 = weight(_text_:logic in 2362) [ClassicSimilarity], result of:
          0.122961015 = score(doc=2362,freq=10.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.52144504 = fieldWeight in 2362, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2362)
      0.25 = coord(1/4)
    
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor do I see that its current state and conversion already result in a "rich" OWL ontology. What does "rich" mean here? According to my view there is a great quantity of concepts and links but a very poor description logic structure which enables inferences. And what does the following really mean, which is said a few lines previously: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
    According to my findings, several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of concepts which represent thesaurus main entries. In other words: the authors claim to already have a "description logic based nomenclature", where there is not yet one which deserves that title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: what decisions were made, are they exemplary, and can they be recommended as "the best way"? I raise strong doubts with respect to that, and I miss a more profound discussion of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style more or less addressing students, but, being myself a learner especially in the field of medical knowledge representation, I do not speak "ex cathedra".
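    The distinction at issue can be sketched in description-logic notation (concept and role names are invented, not drawn from the NCI Thesaurus). A thesaurus-style entry asserts only primitive subsumption,

      \mathrm{Appendicitis} \sqsubseteq \mathrm{Inflammatory\_Disease}

    while a nomenclature that genuinely exploits description logic gives complete definitions in which the roles carry definitional weight,

      \mathrm{Appendicitis} \equiv \mathrm{Inflammation} \sqcap \exists\, \mathrm{hasLocation}.\mathrm{Appendix}

    and only definitions of the second kind allow a reasoner to infer the classification rather than merely inherit it.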
  15. Mainzer, K.: ¬The emergence of self-conscious systems : from symbolic AI to embodied robotics (2014) 0.03
    0.027774062 = product of:
      0.11109625 = sum of:
        0.11109625 = weight(_text_:logic in 3398) [ClassicSimilarity], result of:
          0.11109625 = score(doc=3398,freq=4.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.47112972 = fieldWeight in 3398, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3398)
      0.25 = coord(1/4)
    
    Abstract
    Knowledge representation, which is today used in database applications, artificial intelligence (AI), software engineering and many other disciplines of computer science, has deep roots in logic and philosophy. In the beginning, there was Aristotle (384 BC-322 BC), who developed logic as a precise method for reasoning about knowledge. Syllogisms were introduced as formal patterns for representing special figures of logical deductions. According to Aristotle, the subject of ontology is the study of categories of things that exist or may exist in some domain. In modern times, Descartes considered the human brain as a store of knowledge representation. Recognition was made possible by an isomorphic correspondence between internal geometrical representations (ideae) and external situations and events. Leibniz was deeply influenced by these traditions. In his mathesis universalis, he required a universal formal language (lingua universalis) to represent human thinking by calculation procedures and to implement them by means of mechanical calculating machines. An ars iudicandi should allow every problem to be decided by an algorithm after representation in numeric symbols. An ars inveniendi should enable users to seek and enumerate desired data and solutions of problems. In the age of mechanics, knowledge representation was reduced to mechanical calculation procedures. In the twentieth century, computational cognitivism arose in the wake of Turing's theory of computability. In its functionalism, the hardware of a computer is related to the wetware of the human brain. The mind is understood as the software of a computer.
  16. Kent, R.E.: ¬The IFF foundation for ontological knowledge organization (2003) 0.02
    0.023567071 = product of:
      0.094268285 = sum of:
        0.094268285 = weight(_text_:logic in 5527) [ClassicSimilarity], result of:
          0.094268285 = score(doc=5527,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.3997668 = fieldWeight in 5527, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=5527)
      0.25 = coord(1/4)
    
    Abstract
    This paper discusses an axiomatic approach for the semantic integration of ontologies, an approach that extends to first-order logic a previous approach based on information flow. This axiomatic approach is represented in the Information Flow Framework (IFF), a metalevel framework for organizing the information that appears in digital libraries, distributed databases and ontologies. The paper argues that the semantic integration of ontologies is the two-step process of alignment and unification. Ontological alignment consists of the sharing of common terminology and semantics through a mediating ontology. Ontological unification, concentrated in a virtual ontology of community connections, is the fusion of the alignment diagram of participant community ontologies - the quotient of the sum of the participant portals modulo the ontological alignment structure.
  17. Schulz, S.; Schober, D.; Tudose, I.; Stenzhorn, H.: ¬The pitfalls of thesaurus ontologization : the case of the NCI thesaurus (2010) 0.02
    0.023567071 = product of:
      0.094268285 = sum of:
        0.094268285 = weight(_text_:logic in 4885) [ClassicSimilarity], result of:
          0.094268285 = score(doc=4885,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.3997668 = fieldWeight in 4885, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=4885)
      0.25 = coord(1/4)
    
    Abstract
    Thesauri that are "ontologized" into OWL-DL semantics are highly amenable to modeling errors resulting from falsely interpreting existential restrictions. We investigated the OWL-DL representation of the NCI Thesaurus (NCIT) in order to assess the correctness of existential restrictions. A random sample of 354 axioms using the someValuesFrom operator was taken. According to a rating performed by two domain experts, roughly half of these examples, and in consequence more than 76,000 axioms in the OWL-DL version, make incorrect assertions if interpreted according to description logics semantics. These axioms therefore constitute a huge source for unintended models, rendering most logic-based reasoning unreliable. After identifying typical error patterns we discuss some possible improvements. Our recommendation is to either amend the problematic axioms in the OWL-DL formalization or to consider some less strict representational format.
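    The pitfall can be made concrete with an invented example (not taken from the NCIT). An ontologized axiom using someValuesFrom such as

      \mathrm{Aspirin} \sqsubseteq \exists\, \mathrm{treats}.\mathrm{Headache}

    says, under description-logic semantics, that every single instance of Aspirin treats some instance of Headache, whereas the thesaurus relation it was converted from typically meant only that aspirin can be used against headache; axioms of this shape assert far more than intended, which is what renders logic-based reasoning over them unreliable.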
  18. Wang, Y.-H.; Jhuo, P.-S.: ¬A semantic faceted search with rule-based inference (2009) 0.02
    0.023567071 = product of:
      0.094268285 = sum of:
        0.094268285 = weight(_text_:logic in 540) [ClassicSimilarity], result of:
          0.094268285 = score(doc=540,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.3997668 = fieldWeight in 540, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=540)
      0.25 = coord(1/4)
    
    Abstract
    Semantic search has become an active research area of the Semantic Web in recent years. The classification methodology plays a critical role at the beginning of the search process in disambiguating irrelevant information. However, the applications related to Folksonomy suffer from many obstacles. This study attempts to eliminate the problems resulting from Folksonomy using existing semantic technology. We also focus on how to effectively integrate heterogeneous ontologies over the Internet to achieve the integrity of domain knowledge. A faceted logic layer is abstracted in order to strengthen the category framework and organize existing available ontologies according to a series of steps based on the methodology of faceted classification and ontology construction. The results showed that our approach can facilitate the integration of inconsistent or even heterogeneous ontologies. This paper also generalizes the principles of picking appropriate facets, with which our facet browser completely complies, so that better semantic search results can be obtained.
  19. Arenas, M.; Cuenca Grau, B.; Kharlamov, E.; Marciuska, S.; Zheleznyakov, D.: Faceted search over ontology-enhanced RDF data (2014) 0.02
    0.023567071 = product of:
      0.094268285 = sum of:
        0.094268285 = weight(_text_:logic in 2207) [ClassicSimilarity], result of:
          0.094268285 = score(doc=2207,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.3997668 = fieldWeight in 2207, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=2207)
      0.25 = coord(1/4)
    
    Abstract
    An increasing number of applications rely on RDF, OWL2, and SPARQL for storing and querying data. SPARQL, however, is not targeted towards end-users, and suitable query interfaces are needed. Faceted search is a prominent approach for end-user data access, and several RDF-based faceted search systems have been developed. There is, however, a lack of rigorous theoretical underpinning for faceted search in the context of RDF and OWL2. In this paper, we provide such solid foundations. We formalise faceted interfaces for this context, identify a fragment of first-order logic capturing the underlying queries, and study the complexity of answering such queries for RDF and OWL2 profiles. We then study interface generation and update, and devise efficiently implementable algorithms. Finally, we have implemented and tested our faceted search algorithms for scalability, with encouraging results.
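    As a hedged illustration of "a fragment of first-order logic capturing the underlying queries" (facet and value names invented): a faceted selection such as type = Article, year = 2014, author in {a1, a2} corresponds to a positive existential formula like

      Q(x) = \mathrm{type}(x, \mathrm{Article}) \wedge \mathrm{year}(x, 2014) \wedge \exists y\, (\mathrm{author}(x, y) \wedge (y = a_1 \vee y = a_2))

    i.e. conjunctions of atoms with disjunction confined to facet values, and it is this restricted shape that makes the complexity of query answering over RDF and the OWL 2 profiles tractable to study.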
  20. Herre, H.: General Formal Ontology (GFO) : a foundational ontology for conceptual modelling (2010) 0.02
    0.02221925 = product of:
      0.088877 = sum of:
        0.088877 = weight(_text_:logic in 771) [ClassicSimilarity], result of:
          0.088877 = score(doc=771,freq=4.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.37690377 = fieldWeight in 771, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.03125 = fieldNorm(doc=771)
      0.25 = coord(1/4)
    
    Abstract
    Research in ontology has in recent years become widespread in the field of information systems, in distinct areas of science, in business, in economy, and in industry. The importance of ontologies is increasingly recognized in fields as diverse as e-commerce, semantic web, enterprise, information integration, qualitative modelling of physical systems, natural language processing, knowledge engineering, and databases. Ontologies provide formal specifications and harmonized definitions of concepts used to represent knowledge of specific domains. An ontology supplies a unifying framework for communication and establishes the basis of the knowledge about a specific domain. The term ontology has two meanings: it denotes, on the one hand, a research area and, on the other hand, a system of organized knowledge. A system of knowledge may exhibit various degrees of formality; in the strongest sense it is an axiomatized and formally represented theory, which is denoted throughout this paper by the term axiomatized ontology. We use the term formal ontology to name an area of research which is becoming a science similar to formal or mathematical logic. Formal ontology is an evolving science which is concerned with the systematic development of axiomatic theories describing forms, modes, and views of being of the world at different levels of abstraction and granularity. Formal ontology combines the methods of mathematical logic with principles of philosophy, but also with the methods of artificial intelligence and linguistics. At the most general level of abstraction, formal ontology is concerned with those categories that apply to every area of the world. The application of formal ontology to domains at different levels of generality yields knowledge systems which are called, according to the level of abstraction, Top Level Ontologies or Foundational Ontologies, Core Domain or Domain Ontologies. Top level or foundational ontologies apply to every area of the world, in contrast to the various Generic, Domain Core or Domain Ontologies, which are associated with more restricted fields of interest. A foundational ontology can serve as a unifying framework for representation and integration of knowledge and may support the communication and harmonisation of conceptual systems. The current paper presents an overview of the current stage of the foundational ontology GFO.

Languages

  • e 65
  • d 11

Types

  • a 55
  • el 21
  • m 6
  • x 5
  • s 3
  • n 1
  • r 1