Search (102 results, page 1 of 6)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.06
    0.06391277 = product of:
      0.19173831 = sum of:
        0.15331133 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
          0.15331133 = score(doc=701,freq=2.0), product of:
            0.4091808 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04826377 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.038426977 = weight(_text_:problem in 701) [ClassicSimilarity], result of:
          0.038426977 = score(doc=701,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.1875815 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.33333334 = coord(2/6)
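The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown: each leaf score is queryWeight × fieldWeight, with queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and tf(freq) = sqrt(freq); the sum of matching leaves is then scaled by the coordination factor. A minimal sketch reproducing the arithmetic for this entry, using the values printed in the tree:

```python
import math

def term_score(tf_raw, idf, query_norm, field_norm):
    """One leaf of a Lucene ClassicSimilarity explain tree."""
    query_weight = idf * query_norm          # query-side weight
    tf = math.sqrt(tf_raw)                   # tf(freq) = sqrt(freq)
    field_weight = tf * idf * field_norm     # document-side weight
    return query_weight * field_weight

# Entry 1 (doc 701): two of the six query terms matched, hence coord(2/6).
query_norm = 0.04826377
s_3a      = term_score(2.0, 8.478011, query_norm, 0.03125)   # term "3a"
s_problem = term_score(2.0, 4.244485, query_norm, 0.03125)   # term "problem"
score = (s_3a + s_problem) * (2 / 6)

print(score)  # reproduces the 0.06391277 total shown above
```

The same four inputs (freq, idf, queryNorm, fieldNorm) reproduce every other explain tree on this page.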
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating the content itself (i.e. its meaning rather than its representation). This leads to very low usefulness of the results of a retrieval process for the user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the highly ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, makes it necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure that is strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is a key issue in realising much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  2. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.04
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  3. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie (2005) 0.04
    Abstract
    Ontologies are used to provide, through semantic grounding, a fundamentally better basis for document retrieval in particular than the current state of the art offers. We present an ontology, developed and deployed at the FH Darmstadt, that is meant to cover the subject area of higher education broadly while at the same time describing it semantically in fine detail. The problem of semantic search is that it must be as simple for information seekers to use as popular search engines, while at the same time delivering high-quality results on the basis of the elaborate information model. We describe the capabilities provided by the K-Infinity software and the concept by which these capabilities are employed for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:58
  4. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.04
    Abstract
    Ontologies are used to provide, through semantic grounding, a fundamentally better basis for document retrieval in particular than the current state of the art offers. We present an ontology, developed and deployed at the FH Darmstadt, that is meant to cover the subject area of higher education broadly while at the same time describing it semantically in fine detail. The problem of semantic search is that it must be as simple for information seekers to use as popular search engines, while at the same time delivering high-quality results on the basis of the elaborate information model. We describe the capabilities provided by the K-Infinity software and the concept by which these capabilities are employed for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:25
  5. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.03
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches that exploit collection-level information will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories (attribute/value-propagation, value-propagation, and value-constraint) and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  6. Kiren, T.: A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.03
    Abstract
    Indexing plays a vital role in information retrieval. With the availability of huge volumes of information, it has become necessary to index information in such a way that end users can find what they want efficiently and accurately. Keyword-based indexing uses words as indexing terms; it is not capable of capturing the implicit relations among terms or the semantics of the words in a document. To eliminate this limitation, ontology-based indexing came into existence, which allows semantics-based indexing to resolve complex and indirect user queries. Ontologies are used for document indexing, enabling semantic information retrieval. At present, either existing ontologies or ones constructed from scratch are used for indexing. Constructing ontologies from scratch is a labour-intensive task and requires extensive domain knowledge, whereas use of an existing ontology may leave some important concepts in documents un-annotated. Using multiple ontologies can overcome the problem of missing concepts to a great extent, but it is difficult to manage multiple ontologies (which change over time at the hands of their developers), and ontology heterogeneity also arises because the ontologies are constructed by different developers. One possible solution to managing multiple ontologies, rather than building from scratch, is to use modular ontologies for indexing.
    Modular ontologies are built by combining modules from multiple relevant ontologies. Ontology heterogeneity also arises during modular ontology construction, because multiple ontologies are being dealt with in this process; the ontologies therefore need to be aligned before they are used for modular ontology construction. Existing approaches to ontology alignment compare all the concepts of each ontology to be aligned, and are hence not optimized in terms of time and search-space utilization. A new indexing technique based on modular ontology is proposed, together with an efficient ontology alignment technique that solves the heterogeneity problem during the construction of the modular ontology. Results are satisfactory, with precision and recall improved by 8% and 10% respectively. The values of Pearson's correlation coefficient for degree of similarity, time, search-space requirement, precision and recall are close to 1, which shows that the results are significant. Further research could apply the modular-ontology-based indexing technique to multimedia and biomedical information retrieval.
    Date
    20. 1.2015 18:30:22
  7. Kottmann, N.; Studer, T.: Improving semantic query answering (2006) 0.03
    Abstract
    The retrieval problem is one of the main reasoning tasks for knowledge base systems. Given a knowledge base K and a concept C, the retrieval problem consists of finding all individuals a for which K logically entails C(a). We present an approach to answer retrieval queries over (a restriction of) OWL ontologies. Our solution is based on reducing the retrieval problem to a problem of evaluating an SQL query over a database constructed from the original knowledge base. We provide complete answers to retrieval problems. Still, our system performs very well as is shown by a standard benchmark.
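The reduction described in this abstract (answering "find all a with K |= C(a)" by evaluating an SQL query over a database built from the knowledge base) can be illustrated with a toy sketch. The table layout, class names, and the use of a recursive CTE for the subclass closure are assumptions for illustration, not the authors' actual schema:

```python
import sqlite3

# Toy knowledge base: class assertions plus a subclass hierarchy.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE type_of  (individual TEXT, class TEXT);
CREATE TABLE subclass (sub TEXT, sup TEXT);
INSERT INTO type_of  VALUES ('rex', 'Dog'), ('tom', 'Cat'), ('ann', 'Person');
INSERT INTO subclass VALUES ('Dog', 'Animal'), ('Cat', 'Animal');
""")

# Retrieval query for concept C: all individuals a for which K entails C(a).
# A recursive CTE folds the subclass closure into plain SQL evaluation.
QUERY = """
WITH RECURSIVE subs(c) AS (
    SELECT :concept
    UNION
    SELECT sub FROM subclass JOIN subs ON sup = c
)
SELECT individual FROM type_of WHERE class IN (SELECT c FROM subs)
ORDER BY individual;
"""

rows = [r[0] for r in con.execute(QUERY, {"concept": "Animal"})]
print(rows)  # entailed instances of Animal, found via Dog/Cat subclasses
```

Direct assertions and inferred ones are answered by the same query, which is the point of the reduction: entailment work is pushed into the database.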
  8. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.03
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  9. Miller, R.: Three problems in logic-based knowledge representation (2006) 0.02
    Abstract
    Purpose - The purpose of this article is to give a non-technical overview of some of the technical progress made recently on tackling three fundamental problems in the area of formal knowledge representation/artificial intelligence. These are the Frame Problem, the Ramification Problem, and the Qualification Problem. The article aims to describe the development of two logic-based languages, the Event Calculus and Modular-E, to address various aspects of these issues. The article also aims to set this work in the wider context of contemporary developments in applied logic, non-monotonic reasoning and formal theories of common sense. Design/methodology/approach - The study applies symbolic logic to model aspects of human knowledge and reasoning. Findings - The article finds that there are fundamental interdependencies between the three problems mentioned above. The conceptual framework shared by the Event Calculus and Modular-E is appropriate for providing principled solutions to them. Originality/value - This article provides an overview of an important approach to dealing with three fundamental issues in artificial intelligence.
  10. Khalifa, M.; Shen, K.N.: Applying semantic networks to hypertext design : effects on knowledge structure acquisition and problem solving (2010) 0.02
    Abstract
    One of the key objectives of knowledge management is to transfer knowledge quickly and efficiently from experts to novices, who are different in terms of the structural properties of domain knowledge or knowledge structure. This study applies experts' semantic networks to hypertext navigation design and examines the potential of the resulting design, i.e., semantic hypertext, in facilitating knowledge structure acquisition and problem solving. Moreover, we argue that the level of sophistication of the knowledge structure acquired by learners is an important mediator influencing the learning outcomes (in this case, problem solving). The research model was empirically tested with a situated experiment involving 80 business professionals. The results of the empirical study provided strong support for the effectiveness of semantic hypertext in transferring knowledge structure and reported a significant full mediating effect of knowledge structure sophistication. Both theoretical and practical implications of this research are discussed.
  11. Sartori, F.; Grazioli, L.: Metadata guiding knowledge engineering : a practical approach (2014) 0.02
    Abstract
    This paper presents an approach to the analysis, design and development of knowledge-based systems based on the Knowledge Artifact concept. Knowledge Artifacts can be understood as means to acquire, represent and maintain the knowledge involved in complex problem-solving activities. A complex problem is typically made up of a huge number of parts that are put together according to a first set of constraints (the procedural knowledge), dependent on the functional properties the solution must satisfy, and a second set of rules, dependent on how the expert thinks about the problem and would represent it. The paper illustrates a way to unify both types of knowledge in a Knowledge Artifact, exploiting the Ontology, Influence Net and Task Structure formalisms and the metadata paradigm.
  12. Román, J.H.; Hulin, K.J.; Collins, L.M.; Powell, J.E.: Entity disambiguation using semantic networks (2012) 0.01
    Abstract
    A major stumbling block preventing machines from understanding text is the problem of entity disambiguation. While humans find it easy to determine that a person named in one story is the same person referenced in a second story, machines rely heavily on crude heuristics such as string matching and stemming to make guesses as to whether nouns are coreferent. A key advantage that humans have over machines is the ability to mentally make connections between ideas and, based on these connections, reason how likely two entities are to be the same. Mirroring this natural thought process, we have created a prototype framework for disambiguating entities that is based on connectedness. In this article, we demonstrate it in the practical application of disambiguating authors across a large set of bibliographic records. By representing knowledge from the records as edges in a graph between a subject and an object, we believe that the problem of disambiguating entities reduces to the problem of discovering the most strongly connected nodes in a graph. The knowledge from the records comes in many different forms, such as names of people, date of publication, and themes extracted from the text of the abstract. These different types of knowledge are fused to create the graph required for disambiguation. Furthermore, the resulting graph and framework can be used for more complex operations.
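The connectedness idea sketched in this abstract can be illustrated in a few lines; the record contents, the author labels, and the shared-neighbour measure are invented for illustration and much cruder than the framework the abstract describes:

```python
from collections import defaultdict
from itertools import combinations

# Facts from bibliographic records as (subject, object) edges of an
# undirected graph; "#n" marks as-yet-undisambiguated author mentions.
records = [
    ("J. Smith#1", "Semantic Nets 1999"), ("J. Smith#1", "topic:graphs"),
    ("J. Smith#2", "Graph Mining 2004"),  ("J. Smith#2", "topic:graphs"),
    ("J. Smith#2", "lab:LANL"),
    ("J. Smith#3", "Quantum Dots 2001"),  ("J. Smith#3", "topic:physics"),
]

adj = defaultdict(set)
for s, o in records:
    adj[s].add(o)
    adj[o].add(s)

def connectedness(a, b):
    """Crude connectedness: the number of neighbours two nodes share."""
    return len(adj[a] & adj[b])

candidates = ["J. Smith#1", "J. Smith#2", "J. Smith#3"]
best = max(combinations(candidates, 2), key=lambda p: connectedness(*p))
print(best)  # the mention pair most likely to be the same author
```

Here the shared "topic:graphs" node links mentions #1 and #2, so they rank as the most likely coreferent pair, while the physics author stays separate.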
  13. Halpin, H.; Hayes, P.J.: When owl:sameAs isn't the same : an analysis of identity links on the Semantic Web (2010) 0.01
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in 'inter-linking' data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regards to its interactions with inference. In fact, owl:sameAs can be considered just one type of 'identity link', a link that declares two items to be identical in some fashion. After reviewing the definitions and history of the problem of identity in philosophy and knowledge representation, we outline four alternative readings of owl:sameAs, showing with examples how it is being (ab)used on the Web of data. Then we present possible solutions to this problem by introducing alternative identity links that rely on named graphs.
  14. Gödert, W.: Ein Ontologie basiertes Modell für Indexierung und Retrieval (2014) 0.01
    Abstract
    Starting from an unsolved problem of subject indexing, this contribution presents a model that combines the methods of and experience with content-oriented document indexing by means of cognitively interpreted documentation languages with the possibilities of formal knowledge representation. The core component of the model uses inferences along the paths of typed relations between the faceted entities of a knowledge representation to determine hit sets in retrieval processes. The possible consequences for indexing and retrieval are discussed.
  15. Gödert, W.: An ontology-based model for indexing and retrieval (2013) 0.01
    Abstract
    Starting from an unsolved problem of information retrieval, this paper presents an ontology-based model for indexing and retrieval. The model combines the methods of and experience with cognitively interpreted indexing languages with the strengths and possibilities of formal knowledge representation. The core component of the model uses inferences along the paths of typed relations between the entities of a knowledge representation to determine hit sets in retrieval processes. The entities are arranged in aspect-oriented facets to ensure a consistent hierarchical structure. The possible consequences for indexing and retrieval are discussed.
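The core mechanism named in this abstract, inference along paths of typed relations to determine a hit set, can be sketched minimally; the relation names and the tiny knowledge base are illustrative assumptions, not Gödert's actual model:

```python
# Entities linked by typed relations; only some relation types license
# inference, so the relation type controls which paths are followed.
relations = {
    ("optics", "narrower", "lens"),
    ("lens", "narrower", "contact lens"),
    ("lens", "related", "camera"),        # associative: not followed here
}
INFERENCE_TYPES = {"narrower"}             # relation types we infer along

def expand(seed):
    """Follow typed-relation paths from a seed entity to grow the hit set."""
    hits, frontier = {seed}, [seed]
    while frontier:
        node = frontier.pop()
        for s, rel, o in relations:
            if s == node and rel in INFERENCE_TYPES and o not in hits:
                hits.add(o)
                frontier.append(o)
    return hits

print(sorted(expand("optics")))  # hit set grown along 'narrower' paths only
```

A query on "optics" thus also retrieves documents indexed under "lens" and "contact lens", while the merely associative link to "camera" is ignored.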
  16. Aitken, S.; Reid, S.: Evaluation of an ontology-based information retrieval tool (2000) 0.01
    Content
    Contribution to: Workshop on the Applications of Ontologies and Problem-Solving Methods, (eds) Gómez-Pérez, A., Benjamins, V.R., Guarino, N., and Uschold, M. European Conference on Artificial Intelligence 2000, Berlin.
  17. Hodgson, J.P.E.: Knowledge representation and language in AI (1991) 0.01
    Abstract
    The aim of this book is to highlight the relationship between knowledge representation and language in artificial intelligence, and in particular the way in which the choice of representation influences the language used to discuss a problem, and vice versa. Opening with a discussion of knowledge representation methods, and following this with a look at reasoning methods, the author begins to make his case for the intimate relationship between language and representation. He shows how each representation method fits particularly well with some reasoning methods and less so with others, using specific languages as examples. The question of representation change, an important and complex issue about which very little is known, is addressed. Dr Hodgson gathers together recent work on problem solving, showing how, in some cases, it has been possible to use representation changes to recast problems into a language that makes them easier to solve. The author maintains throughout that the relationships this book explores lie at the heart of the construction of large systems, and he examines a number of current large AI systems from the viewpoint of representation and language to prove his point.
  18. Koenderink, N.J.J.P.; Assem, M. van; Hulzebos, J.L.; Broekstra, J.; Top, J.L.: ROC: a method for proto-ontology construction by domain experts (2008) 0.01
    
    Abstract
    Ontology construction is a labour-intensive and costly process. Even though many formal and semi-formal vocabularies are available, creating an ontology for a specific application is hindered in a number of ways. Firstly, eliciting concepts is a time-consuming and strenuous process. Secondly, it is difficult to keep focus. Thirdly, technical modelling constructs are hard to understand for the uninitiated. We propose ROC as a method to cope with these problems. ROC builds on well-known approaches for ontology construction, but reuses existing sources to generate a repository of proposed associations. Firstly, ROC assists in efficiently putting forward all relevant concepts and relations by providing a large set of candidate associations. Secondly, rather than using intermediate representations of formal constructs, we confront the domain expert with 'natural-language-like' statements generated from RDF-based triples. Moreover, we strictly separate the roles of problem owner, domain expert and knowledge engineer, each with their own responsibilities and skills. The domain expert and problem owner keep focus by monitoring a well-defined application purpose. We have implemented an initial set of tools to support ROC. This paper describes the ROC method and two application cases in which we evaluate the overall approach.
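    The verbalisation step described above, turning RDF-based triples into 'natural-language-like' statements a domain expert can accept or reject, can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation; the function name, the camelCase-splitting heuristic, and the sample triples are all assumptions.

    ```python
    # Hypothetical sketch of a triple-verbalisation step in the spirit of ROC:
    # render an (subject, predicate, object) triple as a sentence-like statement
    # for a domain expert to review. The splitting heuristic is illustrative only.

    def verbalise(triple):
        """Render an (s, p, o) triple as a natural-language-like statement."""
        subject, predicate, obj = triple
        # Split camelCase predicates such as "hasPart" into lowercase words.
        words = []
        current = ""
        for ch in predicate:
            if ch.isupper() and current:
                words.append(current)
                current = ch.lower()
            else:
                current += ch
        words.append(current)
        return f"{subject} {' '.join(words)} {obj}."

    # Candidate associations (hypothetical examples) proposed to the expert:
    candidates = [
        ("Tomato", "isA", "Vegetable"),
        ("Greenhouse", "hasPart", "ClimateControl"),
    ]
    statements = [verbalise(t) for t in candidates]
    ```

    A real system would draw the candidate triples from the reused source vocabularies and record the expert's accept/reject decisions.
    
    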
  19. Nagao, M.: Knowledge and inference (1990) 0.01
    
    Abstract
    Knowledge and Inference discusses an important problem for software systems: how do we represent knowledge and ideas on a computer, and how do we use inference to solve problems on a computer? The book addresses the problems of knowledge and inference with the aim of merging artificial intelligence and library science. It begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence within it. Subsequent chapters cover central topics in artificial intelligence: search and problem solving, methods of constructing proofs, and the use of knowledge in looking for a proof. There is also a discussion of how to use the knowledge system. The final chapter describes a popular expert system, presenting tools for building expert systems with an example based on Expert Systems: A Practical Introduction by P. Sell (Macmillan, 1985); this type of software is called an "expert system shell". The book was written as a textbook for undergraduate students, covering only the basics but explaining them in as much detail as possible.
  20. Rousset, M.-C.; Atencia, M.; David, J.; Jouanot, F.; Ulliana, F.; Palombi, O.: Datalog revisited for reasoning in linked data (2017) 0.01
    
    Abstract
    Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links and properties of those entities. Equipping Linked Data with inference paves the way to making the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists in equipping RDF triplestores with Datalog inference rules. This rule language makes it possible to capture in a uniform manner OWL constraints that are useful in practice, such as property transitivity or symmetry, as well as domain-specific rules with practical relevance for users in many domains of interest. The expressivity and genericity of this framework are illustrated for modelling Linked Data applications and for developing inference algorithms. In particular, we show how it allows the problem of data linkage in Linked Data to be modelled as a reasoning problem over possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, while effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction.
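    The core idea of a deductive RDF triplestore, a set of triples saturated by Datalog-style rules such as property transitivity and symmetry, can be sketched with naive forward chaining to a fixpoint. This is a minimal illustration of the general technique, not the authors' system; the function name, rule selection, and sample triples are assumptions.

    ```python
    # Minimal sketch of Datalog-style saturation over RDF triples:
    # repeatedly apply transitivity and symmetry rules for the named
    # properties until no new triples can be derived (a fixpoint).

    def saturate(triples, transitive=(), symmetric=()):
        """Return the closure of `triples` under the given property rules."""
        facts = set(triples)
        changed = True
        while changed:
            changed = False
            new = set()
            for (s, p, o) in facts:
                # Symmetry rule: p(s, o) -> p(o, s)
                if p in symmetric and (o, p, s) not in facts:
                    new.add((o, p, s))
                # Transitivity rule: p(s, o), p(o, o2) -> p(s, o2)
                if p in transitive:
                    for (s2, p2, o2) in facts:
                        if p2 == p and s2 == o and (s, p, o2) not in facts:
                            new.add((s, p, o2))
            if new:
                facts |= new
                changed = True
        return facts

    # Toy knowledge base (hypothetical property names):
    kb = {
        ("a", "partOf", "b"),
        ("b", "partOf", "c"),
        ("a", "adjacentTo", "b"),
    }
    closure = saturate(kb, transitive={"partOf"}, symmetric={"adjacentTo"})
    ```

    A production triplestore would use semi-naive evaluation rather than this quadratic rescan, but the fixpoint semantics is the same.
    
    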
