Search (176 results, page 1 of 9)

  • Active filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.21
    0.207019 = product of:
      0.414038 = sum of:
        0.1035095 = product of:
          0.3105285 = sum of:
            0.3105285 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.3105285 = score(doc=1826,freq=2.0), product of:
                0.33151442 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039102852 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.3105285 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.3105285 = score(doc=1826,freq=2.0), product of:
            0.33151442 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039102852 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(2/4)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
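    The explain tree above is Lucene ClassicSimilarity output and can be re-derived by hand: each term scores queryWeight x fieldWeight, with queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, tf = sqrt(freq), and idf = 1 + ln(maxDocs / (docFreq + 1)); coord(m/n) then scales a clause by the fraction of its subclauses that matched. A minimal Python sketch, using only the constants printed in the tree, that reproduces the 0.207019 score of result 1:

      import math

      MAX_DOCS = 44218          # maxDocs from the explain tree
      DOC_FREQ = 24             # docFreq for both rare tokens "3a" and "2f"
      QUERY_NORM = 0.039102852  # query normalization, shared by all terms
      FIELD_NORM = 0.078125     # per-field length norm stored for doc 1826
      FREQ = 2.0                # each token occurs twice in the field

      def idf(doc_freq: int, max_docs: int) -> float:
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_weight(freq: float, doc_freq: int, field_norm: float) -> float:
          # weight = queryWeight * fieldWeight
          term_idf = idf(doc_freq, MAX_DOCS)                      # 8.478011
          query_weight = term_idf * QUERY_NORM                    # 0.33151442
          field_weight = math.sqrt(freq) * term_idf * field_norm  # 0.93669677
          return query_weight * field_weight                      # 0.3105285

      w = term_weight(FREQ, DOC_FREQ, FIELD_NORM)
      clause_3a = w * (1.0 / 3.0)             # "3a" clause: coord(1/3) applied
      score = (clause_3a + w) * (2.0 / 4.0)   # top level: coord(2/4)
      print(f"{score:.6f}")                   # 0.207019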
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.17
    0.1656152 = product of:
      0.3312304 = sum of:
        0.0828076 = product of:
          0.24842279 = sum of:
            0.24842279 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.24842279 = score(doc=230,freq=2.0), product of:
                0.33151442 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039102852 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.24842279 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.24842279 = score(doc=230,freq=2.0), product of:
            0.33151442 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039102852 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.5 = coord(2/4)
    
    Source
    https%3A%2F%2Ftannerlectures.utah.edu%2F_documents%2Fa-to-z%2Fp%2Fpopper80.pdf&usg=AOvVaw3f4QRTEH-OEBmoYr2J_c7H
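    The Source above is the raw stored field value: a percent-encoded URL followed by a Google tracking suffix. Since %3A encodes ':' and %2F encodes '/', fields like this (and the redirect URLs in results 1 and 3) are exactly what the otherwise puzzling query tokens 3a and 2f match. A short sketch, using only the Python standard library, of recovering the readable targets (the helper name is mine):

      from urllib.parse import parse_qs, unquote, urlparse

      # Result 2's Source: percent-encoded PDF link plus a "&usg=..." suffix.
      raw = ("https%3A%2F%2Ftannerlectures.utah.edu%2F_documents%2Fa-to-z"
             "%2Fp%2Fpopper80.pdf&usg=AOvVaw3f4QRTEH-OEBmoYr2J_c7H")
      pdf_url = unquote(raw.split("&usg=")[0])  # drop suffix, decode %XX
      print(pdf_url)  # https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf

      def unwrap_google_redirect(redirect: str) -> str:
          # The real target of a google.de/url?... link sits percent-encoded
          # in its "url" query parameter; parse_qs decodes it for us.
          return parse_qs(urlparse(redirect).query)["url"][0]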
  3. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.10
    0.1035095 = product of:
      0.207019 = sum of:
        0.05175475 = product of:
          0.15526424 = sum of:
            0.15526424 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.15526424 = score(doc=4388,freq=2.0), product of:
                0.33151442 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039102852 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.15526424 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.15526424 = score(doc=4388,freq=2.0), product of:
            0.33151442 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039102852 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.5 = coord(2/4)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  4. Priss, U.: Description logic and faceted knowledge representation (1999) 0.07
    0.07195565 = product of:
      0.1439113 = sum of:
        0.1333155 = weight(_text_:logic in 2655) [ClassicSimilarity], result of:
          0.1333155 = score(doc=2655,freq=4.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.56535566 = fieldWeight in 2655, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=2655)
        0.010595793 = product of:
          0.031787377 = sum of:
            0.031787377 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
              0.031787377 = score(doc=2655,freq=2.0), product of:
                0.13693152 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039102852 = queryNorm
                0.23214069 = fieldWeight in 2655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2655)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
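    Result 4 above and results 5-7 below all match the term logic (idf 6.0304604, i.e. 1 + ln(44218/289)) but with different term frequencies and field lengths, and the explain trees show how weakly ClassicSimilarity rewards repetition: tf grows only as sqrt(freq), so ten occurrences score barely twice what two do. A small sketch reproducing the four printed fieldWeight values:

      import math

      IDF_LOGIC = 6.0304604  # 1 + ln(44218 / (288 + 1)), as printed

      def field_weight(freq: float, field_norm: float) -> float:
          # ClassicSimilarity fieldWeight = tf * idf * fieldNorm, tf = sqrt(freq)
          return math.sqrt(freq) * IDF_LOGIC * field_norm

      # (freq, fieldNorm) as printed for "logic" in results 4, 5, 6 and 7:
      for freq, norm in [(4, 0.046875), (10, 0.046875),
                         (2, 0.09375), (8, 0.0390625)]:
          print(f"freq={freq:>2}  fieldWeight={field_weight(freq, norm):.6f}")
      # freq= 4  fieldWeight=0.565356
      # freq=10  fieldWeight=0.893906
      # freq= 2  fieldWeight=0.799534
      # freq= 8  fieldWeight=0.666278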
  5. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.05
    0.052697577 = product of:
      0.2107903 = sum of:
        0.2107903 = weight(_text_:logic in 761) [ClassicSimilarity], result of:
          0.2107903 = score(doc=761,freq=10.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.89390576 = fieldWeight in 761, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=761)
      0.25 = coord(1/4)
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins as it were with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
  6. Tyner, R.: Sink or swim : Internet search tools & techniques (1996) 0.05
    0.047134142 = product of:
      0.18853657 = sum of:
        0.18853657 = weight(_text_:logic in 5676) [ClassicSimilarity], result of:
          0.18853657 = score(doc=5676,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.7995336 = fieldWeight in 5676, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.09375 = fieldNorm(doc=5676)
      0.25 = coord(1/4)
    
    Abstract
    A very good site that covers search basics and Boolean logic. It reviews all the popular search engines, noting size, currency, search options, and results for each.
  7. Cregan, A.: ¬An OWL DL construction for the ISO Topic Map Data Model (2005) 0.04
    0.03927845 = product of:
      0.1571138 = sum of:
        0.1571138 = weight(_text_:logic in 4718) [ClassicSimilarity], result of:
          0.1571138 = score(doc=4718,freq=8.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.666278 = fieldWeight in 4718, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4718)
      0.25 = coord(1/4)
    
    Abstract
    Both Topic Maps and the W3C Semantic Web technologies are meta-level semantic maps describing relationships between information resources. Previous attempts at interoperability between XTM Topic Maps and RDF have proved problematic. The ISO's drafting of an explicit Topic Map Data Model [TMDM 05], combined with the advent of the W3C's XML- and RDF-based, Description Logic-equivalent Web Ontology Language [OWLDL 04], now provides the means for the construction of an unambiguous semantic model to represent Topic Maps, in a form that is equivalent to a Description Logic representation. This paper describes the construction of the proposed TMDM ISO Topic Map Standard in OWL DL (Description Logic equivalent) form. The construction is claimed to exactly match the features of the proposed TMDM. The intention is that the topic map constructs described herein, once officially published on the World Wide Web, may be used by Topic Map authors to construct their Topic Maps in OWL DL. The advantage of OWL DL Topic Map construction over XTM, the existing XML-based DTD standard, is that OWL DL allows many constraints to be explicitly stated. OWL DL's suite of tools, although currently still somewhat immature, will provide the means for both querying and enforcing constraints. This goes a long way towards fulfilling the requirements for a Topic Map Query Language (TMQL) and Constraint Language (TMCL), which the Topic Map Community may choose to expend effort on extending. Additionally, OWL DL has a clearly defined formal semantics (Description Logic ref)
  8. Dervin, B.: Chaos, order, and sense-making : a proposed theory for information design (1995) 0.03
    0.031422764 = product of:
      0.12569106 = sum of:
        0.12569106 = weight(_text_:logic in 3291) [ClassicSimilarity], result of:
          0.12569106 = score(doc=3291,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.5330224 = fieldWeight in 3291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0625 = fieldNorm(doc=3291)
      0.25 = coord(1/4)
    
    Abstract
    The term information design is being offered in this volume as a designator of a new area of activity. Part of the logic inherent in the presentation is the assumption that as a species we face altered circumstances which demand this new practice.
  9. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.03
    0.030740254 = product of:
      0.122961015 = sum of:
        0.122961015 = weight(_text_:logic in 2362) [ClassicSimilarity], result of:
          0.122961015 = score(doc=2362,freq=10.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.52144504 = fieldWeight in 2362, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2362)
      0.25 = coord(1/4)
    
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor that its current state and conversion already result in a "rich" OWL ontology. What does "rich" mean here? According to my view there is a great quantity of concepts and links but a very poor description logic structure which enables inferences. And what does the following really mean, which is said a few lines previously: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
    According to my findings several relations, available in the thesaurus query interface as "roles", are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of concepts which represent thesaurus main entries. In other words: The authors claim to already have a "description logic based nomenclature", where there is not yet one which deserves that title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: What decisions were made, and are they exemplary, can they be recommended as "the best way"? I raise strong doubts with respect to that, and I miss more profound discussions of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style more or less addressing students, but myself being a learner especially in the field of medical knowledge representation I do not speak "ex cathedra".
  10. Schulz, S.; Schober, D.; Tudose, I.; Stenzhorn, H.: ¬The pitfalls of thesaurus ontologization : the case of the NCI thesaurus (2010) 0.02
    0.023567071 = product of:
      0.094268285 = sum of:
        0.094268285 = weight(_text_:logic in 4885) [ClassicSimilarity], result of:
          0.094268285 = score(doc=4885,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.3997668 = fieldWeight in 4885, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=4885)
      0.25 = coord(1/4)
    
    Abstract
    Thesauri that are "ontologized" into OWL-DL semantics are highly prone to modeling errors resulting from falsely interpreting existential restrictions. We investigated the OWL-DL representation of the NCI Thesaurus (NCIT) in order to assess the correctness of existential restrictions. A random sample of 354 axioms using the someValuesFrom operator was taken. According to a rating performed by two domain experts, roughly half of these examples, and in consequence more than 76,000 axioms in the OWL-DL version, make incorrect assertions if interpreted according to description logics semantics. These axioms therefore constitute a huge source of unintended models, rendering most logic-based reasoning unreliable. After identifying typical error patterns we discuss some possible improvements. Our recommendation is to either amend the problematic axioms in the OWL-DL formalization or to consider some less strict representational format.
  11. Wang, Y.-H.; Jhuo, P.-S.: ¬A semantic faceted search with rule-based inference (2009) 0.02
    0.023567071 = product of:
      0.094268285 = sum of:
        0.094268285 = weight(_text_:logic in 540) [ClassicSimilarity], result of:
          0.094268285 = score(doc=540,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.3997668 = fieldWeight in 540, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=540)
      0.25 = coord(1/4)
    
    Abstract
    Semantic search has become an active research area of the Semantic Web in recent years. Classification methodology plays a critical role at the beginning of the search process in disambiguating irrelevant information. However, applications related to folksonomy suffer from many obstacles. This study attempts to eliminate the problems resulting from folksonomy using existing semantic technology. We also focus on how to effectively integrate heterogeneous ontologies over the Internet to preserve the integrity of domain knowledge. A faceted logic layer is abstracted in order to strengthen the category framework and organize existing available ontologies according to a series of steps based on the methodology of faceted classification and ontology construction. The results showed that our approach can facilitate the integration of inconsistent or even heterogeneous ontologies. This paper also generalizes the principles of picking appropriate facets, with which our facet browser completely complies, so that better semantic search results can be obtained.
  12. Arenas, M.; Cuenca Grau, B.; Kharlamov, E.; Marciuska, S.; Zheleznyakov, D.: Faceted search over ontology-enhanced RDF data (2014) 0.02
    0.023567071 = product of:
      0.094268285 = sum of:
        0.094268285 = weight(_text_:logic in 2207) [ClassicSimilarity], result of:
          0.094268285 = score(doc=2207,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.3997668 = fieldWeight in 2207, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=2207)
      0.25 = coord(1/4)
    
    Abstract
    An increasing number of applications rely on RDF, OWL2, and SPARQL for storing and querying data. SPARQL, however, is not targeted towards end-users, and suitable query interfaces are needed. Faceted search is a prominent approach for end-user data access, and several RDF-based faceted search systems have been developed. There is, however, a lack of rigorous theoretical underpinning for faceted search in the context of RDF and OWL2. In this paper, we provide such solid foundations. We formalise faceted interfaces for this context, identify a fragment of first-order logic capturing the underlying queries, and study the complexity of answering such queries for RDF and OWL2 profiles. We then study interface generation and update, and devise efficiently implementable algorithms. Finally, we have implemented and tested our faceted search algorithms for scalability, with encouraging results.
  13. Kahlawi, A.: ¬An ontology driven ESCO LOD quality enhancement (2020) 0.02
    0.023567071 = product of:
      0.094268285 = sum of:
        0.094268285 = weight(_text_:logic in 5959) [ClassicSimilarity], result of:
          0.094268285 = score(doc=5959,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.3997668 = fieldWeight in 5959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.046875 = fieldNorm(doc=5959)
      0.25 = coord(1/4)
    
    Abstract
    The labor market is a system that is complex and difficult to manage. To overcome this challenge, the European Union has launched the ESCO project, a language that aims to describe this labor market. In order to support the spread of this project, its dataset was presented as linked open data (LOD). For LOD to be usable and reusable, a set of conditions has to be met. First, the LOD must be feasible and of high quality. In addition, it must provide the user with the right answers, and it has to be built according to a clear and correct structure. This study investigates the LOD of ESCO, focusing on data quality and data structure. The former is evaluated by applying a set of SPARQL queries. This provides solutions to improve its quality via a set of rules built in first-order logic. This process was conducted on the basis of a newly proposed ESCO ontology.
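    As a concrete illustration of the kind of SPARQL-based quality check the abstract describes, here is a minimal rdflib sketch; the file name and the specific rule (concepts lacking a skos:prefLabel) are illustrative assumptions, not details taken from the paper:

      from rdflib import Graph

      g = Graph()
      g.parse("esco_sample.ttl", format="turtle")  # hypothetical local ESCO LOD extract

      # One plausible quality rule: every skos:Concept needs a prefLabel.
      query = """
      PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
      SELECT ?c WHERE {
          ?c a skos:Concept .
          FILTER NOT EXISTS { ?c skos:prefLabel ?label }
      }
      """
      for row in g.query(query):
          print(f"missing prefLabel: {row.c}")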
  14. Cecchini, C.; Zanchetta, C.; Paolo Borin, P.; Xausa, G.: Computational design e sistemi di classificazione per la verifica predittiva delle prestazioni di sistema degli organismi edilizi : Computational design and classification systems to support predictive checking of performance of building systems (2017) 0.02
    0.019639226 = product of:
      0.0785569 = sum of:
        0.0785569 = weight(_text_:logic in 5856) [ClassicSimilarity], result of:
          0.0785569 = score(doc=5856,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.333139 = fieldWeight in 5856, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5856)
      0.25 = coord(1/4)
    
    Abstract
    The aim of controlling the economic, social, and environmental aspects connected to the construction of a building demands a systematic approach, for which it is necessary to build test models aimed at a coordinated analysis of different, independent performance issues. BIM technology, referring to interoperable informative models, offers a significant operative basis to meet this necessity. In most cases, informative models concentrate on a collection of product-based digital models built in a virtual space, more than on the simulation of their relational behaviors. This relation, instead, is the most important aspect of modelling, because it marks and characterizes the interactions that can define the building as a system. This study presents the use of standard classification systems as tools for both the activation and validation of an integrated performance-based building process. By referring the categories and types of the informative model to the codes of a technological and performance-based classification system, it is possible to link and coordinate functional units and their elements with the indications required by the AEC standards. In this way, progressing with an incremental logic, it is possible to manage the requirements of the whole building and to monitor the fulfilment of design objectives and specific normative guidelines.
  15. Pankowski, T.: Ontological databases with faceted queries (2022) 0.02
    0.019639226 = product of:
      0.0785569 = sum of:
        0.0785569 = weight(_text_:logic in 666) [ClassicSimilarity], result of:
          0.0785569 = score(doc=666,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.333139 = fieldWeight in 666, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0390625 = fieldNorm(doc=666)
      0.25 = coord(1/4)
    
    Abstract
    The success of the use of ontology-based systems depends on efficient and user-friendly methods of formulating queries against the ontology. We propose a method to query a class of ontologies, called facet ontologies (fac-ontologies), using a faceted human-oriented approach. A fac-ontology has two important features: (a) a hierarchical view of it can be defined as a nested facet over this ontology and the view can be used as a faceted interface to create queries and to explore the ontology; (b) the ontology can be converted into an ontological database, the ABox of which is stored in a database, and the faceted queries are evaluated against this database. We show that the proposed faceted interface makes it possible to formulate queries that are semantically equivalent to $\mathcal{SROIQ}^{Fac}$, a limited version of the $\mathcal{SROIQ}$ description logic. The TBox of a fac-ontology is divided into a set of rules defining intensional predicates and a set of constraint rules to be satisfied by the database. We identify a class of so-called reflexive weak cycles in a set of constraint rules and propose a method to deal with them in the chase procedure. The considerations are illustrated with solutions implemented in the DAFO system (data access based on faceted queries over ontologies).
  16. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.01
    0.012938688 = product of:
      0.05175475 = sum of:
        0.05175475 = product of:
          0.15526424 = sum of:
            0.15526424 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.15526424 = score(doc=5669,freq=2.0), product of:
                0.33151442 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039102852 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  17. Lavoie, B.; Henry, G.; Dempsey, L.: ¬A service framework for libraries (2006) 0.01
    0.011783536 = product of:
      0.047134142 = sum of:
        0.047134142 = weight(_text_:logic in 1175) [ClassicSimilarity], result of:
          0.047134142 = score(doc=1175,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.1998834 = fieldWeight in 1175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1175)
      0.25 = coord(1/4)
    
    Abstract
    Libraries have not been idle in the face of the changes re-shaping their environments: in fact, much work is underway and major advances have already been achieved. But these efforts lack a unifying framework, a means for libraries, as a community, to gather the strands of individual projects and weave them into a cohesive whole. A framework of this kind would help in articulating collective expectations, assessing progress, and identifying critical gaps. As the information landscape continually shifts and changes, a framework would promote the design and implementation of flexible, interoperable library systems that can respond more quickly to the needs of libraries in serving their constituents. It will provide a port of entry for organizations outside the library domain, and help them understand the critical points of contact between their services and those of libraries. Perhaps most importantly, a framework would assist libraries in strategic planning. It would provide a tool to help them establish priorities, guide investment, and anticipate future needs in uncertain environments. It was in this context, and in recognition of efforts already underway to align library services with emerging information environments, that the Digital Library Federation (DLF) in 2005 sponsored the formation of the Service Framework Group (SFG) [1] to consider a more systematic, community-based approach to aligning the functions of libraries with increasing automation in fulfilling the needs of information environments. The SFG seeks to understand and model the research library in today's environment, by developing a framework within which the services offered by libraries, represented both as business logic and computer processes, can be understood in relation to other parts of the institutional and external information landscape. This framework will help research institutions plan wisely for providing the services needed to meet the current and emerging information needs of their constituents. A service framework is a tool for documenting a shared view of library services in changing environments; communicating it among libraries and others, and applying it to best advantage in meeting library goals. It is a means of focusing attention and organizing discussion. It is not, however, a substitute for innovation and creativity. It does not supply the answers, but facilitates the process by which answers are sought, found, and applied. This paper discusses the SFG's vision of a service framework for libraries, its approach to developing the framework, and the group's work agenda going forward.
  18. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.01
    0.011783536 = product of:
      0.047134142 = sum of:
        0.047134142 = weight(_text_:logic in 3391) [ClassicSimilarity], result of:
          0.047134142 = score(doc=3391,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.1998834 = fieldWeight in 3391, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3391)
      0.25 = coord(1/4)
    
    Content
    Long Papers * Suggestions for OWL 3, Pascal Hitzler. * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra. * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider. * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider. * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin. * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler. * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens. * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao. * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall. * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
  19. Allo, P.; Baumgaertner, B.; D'Alfonso, S.; Fresco, N.; Gobbo, F.; Grubaugh, C.; Iliadis, A.; Illari, P.; Kerr, E.; Primiero, G.; Russo, F.; Schulz, C.; Taddeo, M.; Turilli, M.; Vakarelov, O.; Zenil, H.: ¬The philosophy of information : an introduction (2013) 0.01
    0.011783536 = product of:
      0.047134142 = sum of:
        0.047134142 = weight(_text_:logic in 3380) [ClassicSimilarity], result of:
          0.047134142 = score(doc=3380,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.1998834 = fieldWeight in 3380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3380)
      0.25 = coord(1/4)
    
    Content
    See also: http://www.socphilinfo.org/teaching/book-pi-intro: "This book serves as the main reference for an undergraduate course on Philosophy of Information. The book is written to be accessible to the typical undergraduate student of Philosophy and does not require propaedeutic courses in Logic, Epistemology or Ethics. Each chapter includes a rich collection of references for the student interested in furthering her understanding of the topics reviewed in the book. The book covers all the main topics of the Philosophy of Information and it should be considered an overview and not a comprehensive, in-depth analysis of a philosophical area. As a consequence, 'The Philosophy of Information: a Simple Introduction' does not contain research material, as it is not aimed at graduate students or researchers. The book is available for free in multiple formats and it is updated every twelve months by the team of the π Research Network: Patrick Allo, Bert Baumgaertner, Anthony Beavers, Simon D'Alfonso, Penny Driscoll, Luciano Floridi, Nir Fresco, Carson Grubaugh, Phyllis Illari, Eric Kerr, Giuseppe Primiero, Federica Russo, Christoph Schulz, Mariarosaria Taddeo, Matteo Turilli, Orlin Vakarelov. (*) The version for 2013 is now available as a pdf. The content of this version will soon be integrated in the redesign of the teaching-section. The beta-version from last year will provisionally remain accessible through the Table of Content on this page."
  20. Slavic, A.: Mapping intricacies : UDC to DDC (2010) 0.01
    0.009819613 = product of:
      0.03927845 = sum of:
        0.03927845 = weight(_text_:logic in 3370) [ClassicSimilarity], result of:
          0.03927845 = score(doc=3370,freq=2.0), product of:
            0.2358082 = queryWeight, product of:
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.039102852 = queryNorm
            0.1665695 = fieldWeight in 3370, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0304604 = idf(docFreq=288, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3370)
      0.25 = coord(1/4)
    
    Content
    "Last week, I received an email from Yulia Skora in Ukraine who is working on the mapping between UDC Summary and BBK (Bibliographic Library Classification) Summary. It reminded me of yet another challenging area of work. When responding to Yulia I realised that the issues with mapping, for instance, UDC Summary to Dewey Summaries [pdf] are often made more difficult because we have to deal with classification summaries in both systems and we cannot use a known exactMatch in many situations. In 2008, following advice received from colleagues in the HILT project, two of our colleagues quickly mapped 1000 classes of Dewey Summaries to UDC Master Reference File as a whole. This appeared to be relatively simple. The mapping in this case is simply an answer to a question "and how would you say e.g. Art metal work in UDC?" But when in 2009 we realised that we were going to release 2000 classes of UDC Summary as linked data, we decided to wait until we had our UDC Summary set defined and completed to be able to publish it mapped to the Dewey Summaries. As we arrived at this stage, little did we realise how much more complex the reversed mapping of UDC Summary to Dewey Summaries would turn out to be. Mapping the Dewey Summaries to UDC highlighted situations in which the logic and structure of two systems do not agree. Especially because Dewey tends to enumerate combinations of subject and attributes that do not always logically belong together. For instance, 850 Literatures of Italian, Sardinian, Dalmatian, Romanian, Rhaeto-Romanic languages Italian literature. This class mixes languages from three different subgroups of Romance languages. Italian and Sardinian belong to Italo Romance sub-family; Romanian and Dalmatian are Balkan Romance languages and Rhaeto Romance is the third subgroup that includes Friulian Ladin and Romanch. As UDC literature is based on a strict classification of language families, Dewey class 850 has to be mapped to 3 narrower UDC classes 821.131 Literature of Italo-Romance Languages , 821.132 Literature of Rhaeto-Romance languages and 821.135 Literature of Balkan-Romance Languages, or to a broader class 821.13 Literature of Romance languages. Hence we have to be sure that we have all these classes listed in the UDC Summary to be able to express UDC-DDC many-to-one, specific-to-broader relationships.

Languages

  • d 86
  • e 83
  • el 2
  • a 1
  • i 1
  • nl 1

Types

  • a 79
  • i 10
  • m 6
  • s 3
  • b 2
  • r 2
  • n 1
  • x 1