Search (296 results, page 1 of 15)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.45
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, ancestor-to-descendant link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships. The algorithm resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
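    As a rough illustration of the hierarchy growth step described in this abstract (a minimal Python sketch invented for this listing, not the authors' code, with made-up candidate links), a parent-child edge is accepted only if it keeps the hierarchy acyclic:
```python
# Sketch: grow a concept hierarchy while keeping it acyclic.
# A candidate parent -> child link is rejected if it would close a cycle.

def creates_cycle(children, parent, child):
    """True if parent is already reachable from child, i.e. the new
    edge parent -> child would close a cycle."""
    stack, seen = [child], set()
    while stack:
        node = stack.pop()
        if node == parent:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(children.get(node, ()))
    return False

def grow_hierarchy(candidate_links):
    """candidate_links: iterable of (parent, child), strongest evidence first."""
    children = {}
    for parent, child in candidate_links:
        if not creates_cycle(children, parent, child):
            children.setdefault(parent, set()).add(child)
    return children

links = [("classification", "svm"), ("classification", "knn"),
         ("svm", "classification")]  # the last link conflicts and is rejected
print(grow_hierarchy(links))  # {'classification': {'svm', 'knn'}}
```
    Processing candidate links in order of evidence strength means that, when two links conflict, the better-supported one survives.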
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.43
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities: we propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
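    As a toy illustration of the bag-of-entities idea described above (a sketch with invented entity annotations, not the dissertation's actual model), documents and queries become entity multisets and ranking is a simple overlap score in entity space:
```python
# Sketch: bag-of-entities ranking. Documents and queries are represented
# by their entity annotations; scoring happens in entity space.
from collections import Counter

def bag_of_entities(annotations):
    return Counter(annotations)

def entity_score(query_bag, doc_bag):
    # Simple overlap: sum over query entities of their frequency in the doc.
    return sum(freq * doc_bag.get(ent, 0) for ent, freq in query_bag.items())

query = bag_of_entities(["Carnegie_Mellon_University", "Information_retrieval"])
docs = {
    "d1": bag_of_entities(["Information_retrieval", "Ranking",
                           "Information_retrieval"]),
    "d2": bag_of_entities(["Knowledge_base", "Ontology"]),
}
ranked = sorted(docs, key=lambda d: entity_score(query, docs[d]), reverse=True)
print(ranked)  # ['d1', 'd2']
```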
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.30
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation). This leads to very low usefulness of retrieval results for the user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, makes it necessary to include the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realised as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly are the key issues in realising much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
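    One concrete reading of the query refinement idea in this abstract (a hypothetical Python sketch with an invented mini-ontology, not code from the thesis) is to offer narrower ontology concepts as interactive refinement choices for each query term:
```python
# Sketch: ontology-driven query refinement. For each query term that is a
# known ontology concept, suggest its narrower concepts so the user can
# interactively make an ambiguous query more specific.

narrower = {
    "vehicle": ["car", "bicycle"],
    "car": ["electric car", "sports car"],
}

def refinement_candidates(query_terms):
    """Map each refinable query term to its narrower ontology concepts."""
    return {term: narrower[term] for term in query_terms if term in narrower}

print(refinement_candidates(["vehicle", "red"]))
# {'vehicle': ['car', 'bicycle']}
```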
  4. Meng, K.; Ba, Z.; Ma, Y.; Li, G.: A network coupling approach to detecting hierarchical linkages between science and technology (2024) 0.04
    Abstract
    Detecting science-technology hierarchical linkages is beneficial for understanding the deep interactions between science and technology (S&T). Previous studies have mainly focused on linear linkages between S&T but ignored their structural linkages. In this paper, we propose a network coupling approach to inspect the hierarchical interactions of S&T by integrating their knowledge linkages and structural linkages. S&T knowledge networks are first enhanced with bidirectional encoder representations from transformers (BERT) knowledge alignment, and their hierarchical structures are then identified based on K-core decomposition. The hierarchical coupling preferences and strengths of the S&T networks over time are further calculated from the similarities of the coupling nodes' degree distributions and of the coupling edges' weight distributions. Extensive experimental results indicate that our approach is feasible and robust in identifying the coupling hierarchy, with superior performance compared to other isomorphism and dissimilarity algorithms. Our research broadens the perspective of S&T linkage measurement by identifying patterns and paths in the interaction of hierarchical S&T knowledge.
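    The K-core decomposition the authors use to expose hierarchical structure can be sketched in a few lines (an illustrative Python fragment with a made-up graph, not the paper's implementation): nodes of degree below k are peeled off repeatedly, and the survivors form the k-core.
```python
# Sketch: K-core decomposition by iterative peeling.

def k_core(adj, k):
    """adj: {node: set(neighbours)}. Returns the node set of the k-core."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for u in [u for u, vs in adj.items() if len(vs) < k]:
            for v in adj.pop(u):       # remove u and its incident edges
                adj[v].discard(u)
            changed = True
    return set(adj)

g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(k_core(g, 2))  # {'a', 'b', 'c'} -- 'd' has degree 1 and is peeled off
```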
  5. Halpin, H.; Hayes, P.J.: When owl:sameAs isn't the same : an analysis of identity links on the Semantic Web (2010) 0.03
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in 'inter-linking' data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regard to its interactions with inference. In fact, owl:sameAs can be considered just one type of 'identity link', a link that declares two items to be identical in some fashion. After reviewing the definitions and history of the problem of identity in philosophy and knowledge representation, we outline four alternative readings of owl:sameAs, showing with examples how it is being (ab)used on the Web of data. We then present possible solutions to this problem by introducing alternative identity links that rely on named graphs.
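    What makes owl:sameAs so strong is visible in a few lines: identity links induce equivalence classes, after which every fact about one member applies to all — precisely the inference the paper argues is often unwanted. A union-find sketch written for this listing (the identifiers are illustrative):
```python
# Sketch: owl:sameAs links partition resources into identity classes.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def same_as(a, b):
    """Merge the identity classes of a and b."""
    parent[find(a)] = find(b)

same_as("dbpedia:Paris", "geonames:2988507")
same_as("geonames:2988507", "freebase:m.05qtj")
print(find("dbpedia:Paris") == find("freebase:m.05qtj"))  # True
```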
  6. Halpin, H.; Hayes, P.J.; McCusker, J.P.; McGuinness, D.L.; Thompson, H.S.: When owl:sameAs isn't the same : an analysis of identity in linked data (2010) 0.03
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in interlinking data-sets. There is, however, ongoing discussion about its use, and potential misuse, particularly with regard to interactions with inference. In fact, owl:sameAs can be viewed as encoding only one point on a scale of similarity, one that is often too strong for many of its current uses. We describe how referentially opaque contexts that do not allow inference exist, and then outline some varieties of referentially opaque alternatives to owl:sameAs. Finally, we report on an empirical experiment over randomly selected owl:sameAs statements from the Web of data. This theoretical apparatus and experiment shed light upon how owl:sameAs is being used (and misused) on the Web of data.
  7. Griffiths, T.L.; Steyvers, M.: A probabilistic approach to semantic representation (2002) 0.03
    Abstract
    Semantic networks produced from human data have statistical properties that cannot be easily captured by spatial representations. We explore a probabilistic approach to semantic representation that explicitly models the probability with which words occur in different contexts, and hence captures the probabilistic relationships between words. We show that this representation has statistical properties consistent with the large-scale structure of semantic networks constructed by humans, and trace the origins of these properties.
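    The core of such a probabilistic representation fits in one line of linear algebra: the probability of a word in a context (document) is a mixture over latent topics, P(w|d) = Σ_z P(w|z) P(z|d). A toy numeric sketch (all numbers invented for this listing):
```python
# Sketch: word probability in a document as a mixture over latent topics.
import numpy as np

phi = np.array([[0.5, 0.4, 0.1],    # P(w|z) for topic 0, one entry per word
                [0.1, 0.2, 0.7]])   # P(w|z) for topic 1
theta_d = np.array([0.8, 0.2])      # P(z|d) for one document

p_w_given_d = theta_d @ phi         # P(w|d), one entry per word
print(p_w_given_d)                  # [0.42 0.36 0.22], sums to 1
```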
  8. Giunchiglia, F.; Zaihrayeu, I.; Farazi, F.: Converting classifications into OWL ontologies (2009) 0.03
    Abstract
    Classification schemes, such as the DMoZ web directory, provide a convenient and intuitive way for humans to access classified contents. While easy for humans to deal with, classification schemes remain hard for automated software agents to reason about. Among other things, this hardness is due to the ambiguous nature of the natural language used to describe classification categories. In this paper we describe how classification schemes can be converted into OWL ontologies, thus enabling Semantic Web applications to reason over them. The proposed solution is based on a two-phase approach in which category names are first encoded in a concept language and then, together with the structure of the classification scheme, converted into an OWL ontology. We demonstrate the practical applicability of our approach by showing how the results of reasoning on these OWL ontologies can help improve the organization and use of web directories.
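    The structural half of the two-phase conversion can be illustrated simply (a hypothetical sketch, not the authors' tool): each category path in the scheme yields a chain of subclass-of axioms, printed here as plain triples rather than serialized OWL:
```python
# Sketch: derive subclass-of axioms from classification category paths.

paths = [
    ["Top", "Computers", "Software"],
    ["Top", "Computers", "Hardware"],
]

def subclass_axioms(paths):
    """Each step down a path becomes a (child, subClassOf, parent) triple."""
    axioms = set()
    for path in paths:
        for child, parent in zip(path[1:], path[:-1]):
            axioms.add((child, "rdfs:subClassOf", parent))
    return axioms

for axiom in sorted(subclass_axioms(paths)):
    print(axiom)
# ('Computers', 'rdfs:subClassOf', 'Top')
# ('Hardware', 'rdfs:subClassOf', 'Computers')
# ('Software', 'rdfs:subClassOf', 'Computers')
```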
  9. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.03
    Abstract
    Purpose: The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions.
    Design/methodology/approach: This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions.
    Findings: Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data.
    Originality/value: This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
  10. Sy, M.-F.; Ranwez, S.; Montmain, J.; Ragnault, A.; Crampes, M.; Ranwez, V.: User centered and ontology based information retrieval system for life sciences (2012) 0.02
    Abstract
    Background: Because of the increasing number of electronic resources, designing efficient tools to retrieve and exploit them is a major challenge. Some improvements have been offered by Semantic Web technologies and applications based on domain ontologies. In life science, for instance, the Gene Ontology is widely exploited in genomic applications and the Medical Subject Headings form the basis of the biomedical publication indexation and information retrieval process offered by PubMed. However, current search engines suffer from two main drawbacks: there is limited user interaction with the list of retrieved resources, and no explanation of their adequacy to the query is provided. Users may thus be confused by the selection and have no idea how to adapt their queries so that the results match their expectations.
    Results: This paper describes an information retrieval system that relies on a domain ontology to widen the set of relevant documents retrieved and that uses a graphical rendering of query results to favor user interaction. Semantic proximities between ontology concepts and aggregating models are used to assess the adequacy of documents with respect to a query. The selected documents are displayed on a semantic map that indicates graphically to what extent they match the user's query; this man/machine interface favors a more interactive and iterative exploration of the data corpus by facilitating the weighting of query concepts and visual explanation. We illustrate the benefit of using this information retrieval system in two case studies, one of which aims at collecting human genes related to transcription factors involved in the hemopoiesis pathway.
    Conclusions: The ontology-based information retrieval system described in this paper (OBIRS) is freely available at http://www.ontotoolkit.mines-ales.fr/ObirsClient/. This environment is a first step towards a user-centred application in which the system highlights relevant information to support decision-making.
  11. OWL Web Ontology Language Overview (2004) 0.02
    Abstract
    The OWL Web Ontology Language is designed for use by applications that need to process the content of information instead of just presenting information to humans. OWL facilitates greater machine interpretability of Web content than that supported by XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics. OWL has three increasingly expressive sublanguages: OWL Lite, OWL DL, and OWL Full. This document is written for readers who want a first impression of the capabilities of OWL. It provides an introduction to OWL by informally describing the features of each of its sublanguages. Some knowledge of RDF Schema is useful for understanding this document, but not essential. After reading this document, interested readers may turn to the OWL Guide for more detailed descriptions and extensive examples of the features of OWL. The normative formal definition of OWL can be found in the OWL Semantics and Abstract Syntax.
  12. Auer, S.; Oelen, A.; Haris, A.M.; Stocker, M.; D'Souza, J.; Farfar, K.E.; Vogt, L.; Prinz, M.; Wiens, V.; Jaradeh, M.Y.: Improving access to scientific literature with knowledge graphs : an experiment using library guidelines to judge information integrity (2020) 0.02
    Abstract
    The transfer of knowledge has not changed fundamentally for many hundreds of years: it is usually document-based, formerly printed on paper as a classic essay and nowadays distributed as PDF. With around 2.5 million new research contributions every year, researchers drown in a flood of pseudo-digitized PDF publications. As a result, research is seriously weakened. In this article, we argue for representing scholarly contributions in a structured and semantic way as a knowledge graph. The advantage is that information represented in a knowledge graph is readable by both machines and humans. As an example, we give an overview of the Open Research Knowledge Graph (ORKG), a service implementing this approach. For creating the knowledge graph representation, we rely on a mixture of manual (crowd/expert sourcing) and (semi-)automated techniques. Only with such a combination of human and machine intelligence can we achieve the quality of representation required to enable novel exploration and assistance services for researchers. As a result, a scholarly knowledge graph such as the ORKG can be used to give a condensed overview of the state of the art addressing a particular research question, for example as a tabular comparison of contributions according to various characteristics of the approaches. Further possible intuitive access interfaces to such scholarly knowledge graphs include domain-specific (chart) visualizations and the answering of natural language questions.
  13. Aker, A.; Plaza, L.; Lloret, E.; Gaizauskas, R.: Do humans have conceptual models about geographic objects? : a user study (2013) 0.01
    Abstract
    In this article, we investigate what sorts of information humans request about geographical objects of the same type. For example, Edinburgh Castle and Bodiam Castle are two objects of the same type: "castle." The question is whether specific information is requested for the object type "castle" and how this information differs for objects of other types (e.g., church, museum, or lake). We aim to answer this question using an online survey. In the survey, we showed 184 participants 200 images pertaining to urban and rural objects and asked them to write questions for which they would like to know the answers when seeing those objects. Our analysis of the 6,169 questions collected in the survey shows that humans have shared ideas of what to ask about geographical objects. When the object types resemble each other (e.g., church and temple), the requested information is similar for the objects of these types. Otherwise, the information is specific to an object type. Our results may be very useful in guiding Natural Language Processing tasks involving automatic generation of templates for image descriptions and their assessment, as well as image indexing and organization.
  14. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.01
    Abstract
    Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints.
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  15. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    Abstract
    This chapter presents ontologies and their role in the creation of the Semantic Web. Ontologies hold special interest because they are very closely related to the way we understand the world. They provide common understanding, the very first step to successful communication. In the following sections, we present ontologies and how they are created and used, and we describe available tools for specifying and working with ontologies.
    Date
    31. 7.2010 16:58:22
  16. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.01
    Abstract
    Describes the types of representations used in different theories of abstraction. Shows how the type of mapping between these representations has been increasingly generalised. Discusses desirable properties preserved by such mappings and identifies how these properties are influenced by the mappings and the representations defined. Surveys progress made in understanding the complexity reduction associated with abstraction, focusing on formal models of how abstraction reduces the search space. Presents some of the systems that implement abstraction and shows how efforts in this area have focused on the mechanisation of languages for the declarative representation of abstraction.
    Date
    1.10.2018 14:13:22
  17. Hannech, A.: Système de recherche d'information étendue basé sur une projection multi-espaces (2018) 0.01
    Abstract
    Since its appearance in the early 90's, the World Wide Web (WWW, or Web) has provided universal access to knowledge, and the world of information has witnessed a major revolution (the digital revolution). The Web quickly became very popular, making it the largest and most comprehensive database and knowledge base thanks to the amount and diversity of the data it contains. However, the considerable increase and evolution of these data raise important problems for users, in particular for accessing the documents most relevant to their search queries. In order to cope with this exponential explosion of data volume and to facilitate access for users, information retrieval systems (IRSs) offer various models for the representation and retrieval of web documents. Traditional IRSs use simple keywords, not semantically linked, to index and retrieve these documents. This creates limitations in terms of the relevance and ease of exploration of the results. To overcome these limitations, existing techniques enrich documents by integrating external keywords from different sources. However, these systems still suffer from limitations related to how these enrichment sources are exploited. When the different sources are used in a way that the system cannot distinguish them, this limits the flexibility of the exploration models that can be applied to the results returned by the system. Users then feel lost in these results and find themselves forced to filter them manually to select the relevant information. If they want to go further, they must reformulate and further narrow their search queries until they reach the documents that best meet their expectations. In this way, even if the systems manage to find more relevant results, their presentation remains problematic. In order to target search towards more user-specific information needs and to improve the relevance and exploration of results, advanced IRSs adopt different data personalization techniques, which assume that the user's current search is directly related to his profile and/or his previous browsing and search history.
    However, this assumption does not hold in all cases: the user's needs evolve over time and can move away from the previous interests stored in his profile. In other cases, the user's profile may be misused to extract or infer new information needs. This problem is much more pronounced with ambiguous queries. When multiple points of interest linked to a search query are identified in the user's profile, the system is unable to select the relevant data from that profile to respond to the request. This has a direct impact on the quality of the results provided to the user. In order to overcome some of these limitations, this research thesis focuses on developing techniques aimed mainly at improving the relevance of the results of current IRSs and at facilitating the exploration of large document collections. To do this, we propose a solution based on a new concept and model of indexing and information retrieval called multi-space projection. This proposal is based on the exploitation of different categories of semantic and social information that enrich the representation of documents and search queries along several dimensions of interpretation. The originality of this representation is the ability to distinguish between the different interpretations used for describing and searching documents. This gives better visibility of the returned results and provides greater flexibility in search and exploration, giving the user the ability to navigate one or more views of the data that interest him most. In addition, the proposed multidimensional representation universes for document description and query interpretation help to improve the relevance of the user's results by providing a diversity of search and exploration that helps meet his varied needs and those of other, different users. This study exploits different aspects of personalized search and aims to solve the problems caused by the evolution of the user's information needs. Thus, when the user's profile is used by our system, a technique is proposed to identify the interests in his profile that are most representative of his current needs. This technique is based on the combination of three influential factors: the contextual, frequency and temporal factors of the data. The ability of users to interact, exchange ideas and opinions, and form social networks on the Web has led systems to focus on the interactions between these users as well as on their social roles in the system. This social information is discussed and integrated into this research work; its impact, and how it is integrated into the IR process, is studied in order to improve the relevance of the results.
  18. Priss, U.: Faceted information representation (2000) 0.01
    Date
    22. 1.2016 17:47:06
    Source
    Working with conceptual structures: contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000. Ed.: G. Stumme
  19. Bringsjord, S.; Clark, M.; Taylor, J.: Sophisticated knowledge representation and reasoning requires philosophy (2014) 0.01
    Abstract
    What is knowledge representation and reasoning (KR&R)? Alas, a thorough account would require a book, or at least a dedicated, full-length paper, but here we shall have to make do with something simpler. Since most readers are likely to have an intuitive grasp of the essence of KR&R, our simple account should suffice. The interesting thing is that this simple account itself makes reference to some of the foundational distinctions in the field of philosophy. These distinctions also play a central role in artificial intelligence (AI) and computer science. To begin with, the first distinction in KR&R is that we identify knowledge with knowledge that such-and-such holds (possibly to a degree), rather than knowing how. If you ask an expert tennis player how he manages to serve a ball at 130 miles per hour on his first serve, and then serve a safer, topspin serve on his second should the first be out, you may well receive a confession that, if truth be told, this athlete can't really tell you. He just does it; he does something he has been doing since his youth. Yet, there is no denying that he knows how to serve. In contrast, the knowledge in KR&R must be expressible in declarative statements. For example, our tennis player knows that if his first serve lands outside the service box, it's not in play. He thus knows a proposition, conditional in form.
    Date
    9. 2.2017 19:22:14
  20. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics, which is considered to be of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete, and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
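    One common non-deductive approach in this spirit scores candidate triples with learned embeddings; the sketch below (an untrained, TransE-style scorer written for this listing, not the paper's architecture, with illustrative entity and relation names) shows only the mechanics of such a scorer:
```python
# Sketch: embedding-based triple scoring as an alternative to deduction.
# With trained vectors, more plausible triples get higher (less negative)
# scores; here the vectors are random, so the numbers are meaningless.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
entities = {e: rng.normal(size=dim) for e in ["Socrates", "Human", "Mortal"]}
relations = {r: rng.normal(size=dim) for r in ["type", "subClassOf"]}

def score(h, r, t):
    """TransE plausibility: smaller ||h + r - t|| means more plausible."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

print(score("Socrates", "type", "Human"))
```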

Languages

  • e 277
  • d 12
  • pt 2
  • f 1

Types

  • a 212
  • el 98
  • m 16
  • x 15
  • n 7
  • s 4
  • p 3
  • r 2
  • A 1
  • EL 1
