Search (120 results, page 1 of 6)

  • theme_ss:"Wissensrepräsentation"
  1. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.07
    0.06674814 = product of:
      0.13349628 = sum of:
        0.13349628 = sum of:
          0.07169498 = weight(_text_:i in 6089) [ClassicSimilarity], result of:
            0.07169498 = score(doc=6089,freq=2.0), product of:
              0.17204544 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.045614466 = queryNorm
              0.41672117 = fieldWeight in 6089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.078125 = fieldNorm(doc=6089)
          0.061801303 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
            0.061801303 = score(doc=6089,freq=2.0), product of:
              0.15973409 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045614466 = queryNorm
              0.38690117 = fieldWeight in 6089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=6089)
      0.5 = coord(1/2)
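    The breakdown above is Lucene's ClassicSimilarity "explain" output: each term weight is tf × idf × fieldNorm, scaled by the query weight (idf × queryNorm); the per-term weights are then summed and multiplied by the coord factor (here 1/2). As a minimal sketch, assuming the stock ClassicSimilarity formulas, the listed factors for the term "i" in document 6089 can be reproduced:

    ```python
    import math

    # Reconstruction of the ClassicSimilarity arithmetic shown in the
    # score breakdown above (term "i" in doc 6089). Illustrative sketch,
    # not part of the original search output.
    freq = 2.0
    max_docs = 44218
    doc_freq = 2765
    query_norm = 0.045614466
    field_norm = 0.078125

    tf = math.sqrt(freq)                           # 1.4142135 = tf(freq=2.0)
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # 3.7717297 = idf(docFreq=2765, maxDocs=44218)

    query_weight = idf * query_norm                # 0.17204544 = queryWeight
    field_weight = tf * idf * field_norm           # 0.41672117 = fieldWeight
    weight = query_weight * field_weight           # 0.07169498 = weight(_text_:i in 6089)

    print(f"{field_weight:.8f} {weight:.8f}")
    ```

    The same arithmetic with the "_text_:22" factors yields 0.061801303; the two weights sum to 0.13349628, and coord(1/2) halves that to the displayed 0.06674814.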
    
    Pages
    S.11-22
    Series
    Propozycje i Materialy; 6
  2. Onofri, A.: Concepts in context (2013) 0.05
    
    Abstract
    My thesis discusses two related problems that have taken center stage in the recent literature on concepts: 1) What are the individuation conditions of concepts? Under what conditions is a concept C1 the same concept as a concept C2? 2) What are the possession conditions of concepts? What conditions must be satisfied for a thinker to have a concept C? The thesis defends a novel account of concepts, which I call "pluralist-contextualist": 1) Pluralism: Different concepts have different kinds of individuation and possession conditions: some concepts are individuated more "coarsely", have less demanding possession conditions and are widely shared, while other concepts are individuated more "finely" and not shared. 2) Contextualism: When a speaker ascribes a propositional attitude to a subject S, or uses his ascription to explain/predict S's behavior, the speaker's intentions in the relevant context determine the correct individuation conditions for the concepts involved in his report. In chapters 1-3 I defend a contextualist, non-Millian theory of propositional attitude ascriptions. Then, I show how contextualism can be used to offer a novel perspective on the problem of concept individuation/possession. More specifically, I employ contextualism to provide a new, more effective argument for Fodor's "publicity principle": if contextualism is true, then certain specific concepts must be shared in order for interpersonally applicable psychological generalizations to be possible. In chapters 4-5 I raise a tension between publicity and another widely endorsed principle, the "Fregean constraint" (FC): subjects who are unaware of certain identity facts and find themselves in so-called "Frege cases" must have distinct concepts for the relevant object x. For instance: the ancient astronomers had distinct concepts (HESPERUS/PHOSPHORUS) for the same object (the planet Venus).
First, I examine some leading theories of concepts and argue that they cannot meet both of our constraints at the same time. Then, I offer principled reasons to think that no theory can satisfy (FC) while also respecting publicity. (FC) appears to require a form of holism, on which a concept is individuated by its global inferential role in a subject S and can thus only be shared by someone who has exactly the same inferential dispositions as S. This explains the tension between publicity and (FC), since holism is clearly incompatible with concept shareability. To solve the tension, I suggest adopting my pluralist-contextualist proposal: concepts involved in Frege cases are holistically individuated and not public, while other concepts are more coarsely individuated and widely shared; given this "plurality" of concepts, we will then need contextual factors (speakers' intentions) to "select" the specific concepts to be employed in our intentional generalizations in the relevant contexts. In chapter 6 I develop the view further by contrasting it with some rival accounts. First, I examine a very different kind of pluralism about concepts, which has been recently defended by Daniel Weiskopf, and argue that it is insufficiently radical. Then, I consider the inferentialist accounts defended by authors like Peacocke, Rey and Jackson. Such views, I argue, are committed to an implausible picture of reference determination, on which our inferential dispositions fix the reference of our concepts: this leads to wrong predictions in all those cases of scientific disagreement where two parties have very different inferential dispositions and yet seem to refer to the same natural kind.
  3. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.05
    
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition, the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor do its current state and conversion already result in a "rich" OWL ontology. What does "rich" mean here? In my view there is a great quantity of concepts and links but a very poor description logic structure for enabling inferences. And what does the following, said a few lines earlier, really mean: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
    According to my findings, several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of the concepts which represent thesaurus main entries. In other words: The authors claim to already have a "description logic based nomenclature", where there is not yet one which deserves that title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: What decisions were made, and are they exemplary? Can they be recommended as "the best way"? I raise strong doubts in that respect, and I miss a more profound discussion of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style more or less addressing students, but, myself being a learner especially in the field of medical knowledge representation, I do not speak "ex cathedra".
  4. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.04
    
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  5. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.04
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories: attribute/value-propagation, value-propagation, and value-constraint, and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  6. Baofu, P.: ¬The future of information architecture : conceiving a better way to understand taxonomy, network, and intelligence (2008) 0.03
    
    Abstract
    The Future of Information Architecture examines issues surrounding why information is processed, stored and applied in the way that it has been since time immemorial. Contrary to the conventional wisdom held by many scholars in human history, the recurrent debate on the explanation of the most basic categories of information (e.g. space, time, causation, quality, quantity) has been misconstrued, to the effect that there exist some deeper categories and principles behind these categories of information - with enormous implications for our understanding of reality in general. To understand this, the book is organised into four main parts: Part I begins with the vital question concerning the role of information within the context of the larger theoretical debate in the literature. Part II provides a critical examination of the nature of data taxonomy from the main perspectives of culture, society, nature and the mind. Part III constructively investigates the world of information networks from the main perspectives of culture, society, nature and the mind. Part IV proposes six main theses in the author's synthetic theory of information architecture, namely, (a) the first thesis on the simpleness-complicatedness principle, (b) the second thesis on the exactness-vagueness principle, (c) the third thesis on the slowness-quickness principle, (d) the fourth thesis on the order-chaos principle, (e) the fifth thesis on the symmetry-asymmetry principle, and (f) the sixth thesis on the post-human stage.
  7. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.03
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Clearly, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be applied to reveal the contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects.
The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprised of multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  8. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.03
    
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, the design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science, making qualitative research more transparent and enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes due to participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers by the interviewees, which need to be balanced; second, this approach takes more time due to interview planning and analysis. Practical implications The implication of the paper is, in the long run, to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops.
In this paper, the authors outline the potential of participatory design, using mainly interviews, in creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  9. Barsalou, L.W.: Frames, concepts, and conceptual fields (1992) 0.03
    
    Abstract
    In this chapter I propose that frames provide the fundamental representation of knowledge in human cognition. In the first section, I raise problems with the feature list representations often found in theories of knowledge, and I sketch the solutions that frames provide to them. In the second section, I examine the three fundamental concepts of frames: attribute-value sets, structural invariants, and constraints. Because frames also represent the attributes, values, structural invariants, and constraints within a frame, the mechanism that constructs frames builds them recursively. The frame theory I propose borrows heavily from previous frame theories, although its collection of representational components is somewhat unique. Furthermore, frame theorists generally assume that frames are rigid configurations of independent attributes, whereas I propose that frames are dynamic relational structures whose form is flexible and context dependent. In the third section, I illustrate how frames support a wide variety of representational tasks central to conceptual processing in natural and artificial intelligence. Frames can represent exemplars and propositions, prototypes and membership, subordinates and taxonomies. Frames can also represent conceptual combinations, event sequences, rules, and plans. In the fourth section, I show how frames define the extent of conceptual fields and how they provide a powerful productive mechanism for generating specific concepts within a field.
  10. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.02
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
  11. Mainz, I.; Weller, K.; Paulsen, I.; Mainz, D.; Kohl, J.; Haeseler, A. von: Ontoverse : collaborative ontology engineering for the life sciences (2008) 0.02
    
  12. Weller, K.; Peters, I.: Reconsidering relationships for knowledge representation (2007) 0.02
    
    Source
    Proceedings of I-Know '07, Graz, September 5-7
  13. Park, O.n.: Opening ontology design : a study of the implications of knowledge organization for ontology design (2008) 0.02
    
    Abstract
    It is proposed that research into ontology design has so far been insufficient, and that this deficiency has weakened the ability of ontologies to support communication frameworks, knowledge sharing and re-use applications. In order to diagnose the problems of ontology research, I first survey the notion of ontology in the context of ontology design, based on a Means-Ends tool provided by Cognitive Work Analysis. The potential contributions of knowledge organization in library and information science that can be used to overcome the limitations of ontology research are demonstrated. I propose a context-centered view as an approach to ontology design, and present faceted classification as an appropriate method for structuring ontologies. In addition, I provide a case study of a wine ontology to demonstrate how knowledge organization approaches in library and information science can improve ontology design.
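The faceted approach mentioned in the abstract can be illustrated with a toy example. The facet names and values below are assumptions for illustration, not Park's actual wine case study: the point is that a concept is classified by combining independent facets rather than placing it in one fixed hierarchy.

```python
# Illustrative sketch of faceted classification: a wine is described by
# choosing one value per independent facet. Facets and values are assumed.
wine_facets = {
    "colour": ["red", "white", "rosé"],
    "grape": ["merlot", "riesling", "pinot noir"],
    "region": ["bordeaux", "mosel", "burgundy"],
}

def classify(**chosen):
    """Validate a facet combination and return it as a classification."""
    for facet, value in chosen.items():
        if value not in wine_facets.get(facet, []):
            raise ValueError(f"{value!r} is not a value of facet {facet!r}")
    return chosen

classification = classify(colour="red", grape="merlot", region="bordeaux")
```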
  14. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.02
    0.018147008 = sum of:
      0.0073317797 = product of:
        0.036658898 = sum of:
          0.036658898 = weight(_text_:authors in 1633) [ClassicSimilarity], result of:
            0.036658898 = score(doc=1633,freq=2.0), product of:
              0.20794787 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.045614466 = queryNorm
              0.17628889 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
        0.2 = coord(1/5)
      0.010815228 = product of:
        0.021630457 = sum of:
          0.021630457 = weight(_text_:22 in 1633) [ClassicSimilarity], result of:
            0.021630457 = score(doc=1633,freq=2.0), product of:
              0.15973409 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045614466 = queryNorm
              0.1354154 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to improve conceptual-based search by incorporating structural ontological information such as concepts and relations. Generally, semantic-based information retrieval aims to identify relevant information based on the meanings of the query terms or on the context of the terms, and the performance of semantic information retrieval is assessed through the standard measures: precision and recall. Higher precision means that more of the retrieved documents are (meaningfully) relevant, while lower recall means poorer coverage of the concepts. Design/methodology/approach - In this paper, the authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index. The index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, in our approach, we focus on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of the ontology-based search. The aforementioned tasks make use of ontological concepts, and the relations existing between those concepts, so as to obtain semantically more relevant search results for a given query. Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of the concepts populated in the index. Here, we introduce a new measure, the index enhancement measure, to estimate the coverage of the ontological concepts being indexed. We have evaluated the ontology-based search for the tourism domain with tourism documents and a tourism-specific ontology. The comparison of search results based on the use of the ontology "with and without query expansion" is examined to estimate the efficiency of the proposed query expansion task. The ranking is compared with the ORank system to evaluate the performance of our ontology-based search. 
 From these analyses, the ontology-based search results show better recall when compared to the other concept-based search systems. The mean average precision of the ontology-based search is found to be 0.79 and the recall 0.65; the ORank system has a mean average precision of 0.62 and a recall of 0.51, while the concept-based search has a mean average precision of 0.56 and a recall of 0.42. Practical implications - When a concept is not present in the domain-specific ontology, the concept cannot be indexed. When a given query term is not available in the ontology, term-based results are retrieved. Originality/value - In addition to super- and sub-concepts, we incorporate the concepts present at the same level (siblings) into the ontological index. The structural information from the ontology is used for query expansion. The ranking of the documents depends on the type of the query (single-concept queries, multiple-concept queries and concept-with-relation queries) and the ontological relations that exist in the query and the documents. With this ontological structural information, the search results showed better coverage of the concepts in the query.
    Date
    20. 1.2015 18:30:22
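The sibling-aware expansion step described in the abstract above can be sketched on a toy concept hierarchy. The `tourism_ontology` dictionary and `expand_query` function below are illustrative assumptions, not the authors' implementation; they only show how a query term is expanded with its super-, sub-, and sibling concepts.

```python
# Toy concept hierarchy for the tourism domain (illustrative, not the
# authors' ontology): each concept records its parent and its children.
tourism_ontology = {
    "accommodation": {"parent": None, "children": ["hotel", "hostel", "campsite"]},
    "hotel": {"parent": "accommodation", "children": []},
    "hostel": {"parent": "accommodation", "children": []},
    "campsite": {"parent": "accommodation", "children": []},
}

def expand_query(term, onto):
    """Expand a query term with its super-, sub-, and sibling concepts."""
    if term not in onto:
        return [term]                     # fall back to plain term search
    node = onto[term]
    expanded = [term] + node["children"]  # sub-concepts
    parent = node["parent"]
    if parent is not None:
        expanded.append(parent)           # super-concept
        # Siblings: other children of the same parent (the paper's addition
        # over the super/sub-only index of Kohler et al.).
        expanded += [c for c in onto[parent]["children"] if c != term]
    return expanded
```

For example, `expand_query("hotel", tourism_ontology)` yields the term itself, its super-concept, and its two siblings.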
  15. Sánchez, M.F.: Semantically enhanced Information Retrieval : an ontology-based approach (2006) 0.02
    0.017923744 = product of:
      0.03584749 = sum of:
        0.03584749 = product of:
          0.07169498 = sum of:
            0.07169498 = weight(_text_:i in 4327) [ClassicSimilarity], result of:
              0.07169498 = score(doc=4327,freq=2.0), product of:
                0.17204544 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045614466 = queryNorm
                0.41672117 = fieldWeight in 4327, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4327)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Part I. Analyzing the state of the art - What is semantic search?
    Part II. The proposal - An ontology-based IR model - Semantic retrieval on the Web
    Part III. Extensions - Semantic knowledge gateway - Coping with knowledge incompleteness
  16. Lukasiewicz, T.: Uncertainty reasoning for the Semantic Web (2017) 0.02
    0.017743602 = product of:
      0.035487205 = sum of:
        0.035487205 = product of:
          0.07097441 = sum of:
            0.07097441 = weight(_text_:i in 3939) [ClassicSimilarity], result of:
              0.07097441 = score(doc=3939,freq=4.0), product of:
                0.17204544 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045614466 = queryNorm
                0.41253293 = fieldWeight in 3939, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3939)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Semantic Web has attracted much attention, both from academia and industry. An important role in research towards the Semantic Web is played by formalisms and technologies for handling uncertainty and/or vagueness. In this paper, I first provide some motivating examples for handling uncertainty and/or vagueness in the Semantic Web. I then give an overview of some of my own formalisms for handling uncertainty and/or vagueness in the Semantic Web.
  17. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.02
    0.015450326 = product of:
      0.030900652 = sum of:
        0.030900652 = product of:
          0.061801303 = sum of:
            0.061801303 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.061801303 = score(doc=5576,freq=2.0), product of:
                0.15973409 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045614466 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13.12.2017 14:17:22
  18. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    0.015450326 = product of:
      0.030900652 = sum of:
        0.030900652 = product of:
          0.061801303 = sum of:
            0.061801303 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
              0.061801303 = score(doc=539,freq=2.0), product of:
                0.15973409 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045614466 = queryNorm
                0.38690117 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=539)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.12.2011 13:22:07
  19. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.02
    0.015450326 = product of:
      0.030900652 = sum of:
        0.030900652 = product of:
          0.061801303 = sum of:
            0.061801303 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.061801303 = score(doc=3406,freq=2.0), product of:
                0.15973409 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045614466 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 5.2010 16:22:35
  20. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.02
    0.015450326 = product of:
      0.030900652 = sum of:
        0.030900652 = product of:
          0.061801303 = sum of:
            0.061801303 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
              0.061801303 = score(doc=4523,freq=2.0), product of:
                0.15973409 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045614466 = queryNorm
                0.38690117 = fieldWeight in 4523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4523)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27

Languages

  • e 100
  • d 17
  • sp 1

Types

  • a 90
  • el 34
  • m 9
  • x 6
  • s 4
  • n 3
  • r 1
