Search (186 results, page 1 of 10)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.08
    0.08210229 = product of:
      0.20525573 = sum of:
        0.16565707 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
          0.16565707 = score(doc=701,freq=2.0), product of:
            0.442131 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.052150324 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.039598655 = weight(_text_:system in 701) [ClassicSimilarity], result of:
          0.039598655 = score(doc=701,freq=6.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.24108742 = fieldWeight in 701, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.4 = coord(2/5)
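    The tree above is Lucene's ClassicSimilarity explanation: each matching term contributes queryWeight × fieldWeight, i.e. (idf · queryNorm) · (tf · idf · fieldNorm), and the sum is scaled by the coordination factor. As a cross-check (recomputed here from the figures shown, not part of the original record), the reported 0.08210229 can be reproduced as follows:

      # Recompute the ClassicSimilarity score for doc 701 from the components listed above.
      tf_3a, idf_3a   = 2.0 ** 0.5, 8.478011      # tf = sqrt(termFreq), idf for "_text_:3a"
      tf_sys, idf_sys = 6.0 ** 0.5, 3.1495528     # same for "_text_:system"
      query_norm, field_norm, coord = 0.052150324, 0.03125, 2 / 5

      w_3a  = (idf_3a  * query_norm) * (tf_3a  * idf_3a  * field_norm)   # queryWeight * fieldWeight
      w_sys = (idf_sys * query_norm) * (tf_sys * idf_sys * field_norm)
      print(coord * (w_3a + w_sys))   # ~0.08210229, matching the value reported above

    The same decomposition underlies every other score explanation in this listing.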
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation), which leads to results of very low usefulness for the user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, makes it necessary to include the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue for realizing much more meaningful information retrieval systems.
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.08
    0.07540774 = product of:
      0.18851936 = sum of:
        0.16565707 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
          0.16565707 = score(doc=5820,freq=2.0), product of:
            0.442131 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.052150324 = queryNorm
            0.3746787 = fieldWeight in 5820, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.022862293 = weight(_text_:system in 5820) [ClassicSimilarity], result of:
          0.022862293 = score(doc=5820,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.13919188 = fieldWeight in 5820, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.4 = coord(2/5)
    
    Abstract
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations while taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitation of word-based representations with external, carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
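    A minimal sketch of the hybrid ranking idea described above, i.e. combining word-based and entity-based evidence while taking their uncertainties into account (hypothetical weights and interface, not the dissertation's actual model):

      from dataclasses import dataclass

      @dataclass
      class Evidence:
          score: float       # relevance score from one representation (e.g. word match or entity match)
          confidence: float  # how much that representation is trusted for this query, in [0, 1]

      def hybrid_score(word: Evidence, entity: Evidence,
                       w_word: float = 0.5, w_entity: float = 0.5) -> float:
          """Linearly combine both signals, down-weighting the uncertain one."""
          return w_word * word.confidence * word.score + w_entity * entity.confidence * entity.score

      # Strong word-level match, weaker but fully confident entity-level match:
      print(hybrid_score(Evidence(score=2.1, confidence=0.9), Evidence(score=1.4, confidence=1.0)))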
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.05
    0.049697123 = product of:
      0.24848561 = sum of:
        0.24848561 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
          0.24848561 = score(doc=400,freq=2.0), product of:
            0.442131 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.052150324 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.2 = coord(1/5)
    
    Content
    Vgl.: https://aclanthology.org/D19-5317.pdf.
  4. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.04
    0.04071675 = product of:
      0.10179187 = sum of:
        0.059397984 = weight(_text_:system in 3261) [ClassicSimilarity], result of:
          0.059397984 = score(doc=3261,freq=6.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.36163113 = fieldWeight in 3261, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3261)
        0.042393893 = weight(_text_:22 in 3261) [ClassicSimilarity], result of:
          0.042393893 = score(doc=3261,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.23214069 = fieldWeight in 3261, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3261)
      0.4 = coord(2/5)
    
    Abstract
    Ontologies improve IR systems in their retrieval and presentation of information, making the task of finding information more effective, efficient, and interactive. In this paper we argue that ontologies also greatly improve the engineering of such systems. We created a framework that uses an ontology to drive the process of engineering an IR system. We developed a prototype that shows how a domain specialist without knowledge of the IR field can build an IR system with interactive components. The resulting system supports users not only in finding the information they need but also in extending their state of knowledge. In this way, our approach to ontology-enabled information retrieval addresses both the engineering aspect described here and the usability aspect described elsewhere.
    Date
    28.11.2016 12:43:22
  5. Priss, U.: Faceted information representation (2000) 0.04
    0.035787422 = product of:
      0.08946855 = sum of:
        0.040009014 = weight(_text_:system in 5095) [ClassicSimilarity], result of:
          0.040009014 = score(doc=5095,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.2435858 = fieldWeight in 5095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5095)
        0.049459543 = weight(_text_:22 in 5095) [ClassicSimilarity], result of:
          0.049459543 = score(doc=5095,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.2708308 = fieldWeight in 5095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5095)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents an abstract formalization of the notion of "facets". Facets are relational structures of units, relations and other facets selected for a certain purpose. Facets can be used to structure large knowledge representation systems into a hierarchical arrangement of consistent and independent subsystems (facets) that facilitate flexibility and combinations of different viewpoints or aspects. This paper describes the basic notions, facet characteristics and construction mechanisms. It then explicates the theory in an example of a faceted information retrieval system (FaIR).
    Date
    22. 1.2016 17:47:06
  6. Priss, U.: Faceted knowledge representation (1999) 0.04
    0.035787422 = product of:
      0.08946855 = sum of:
        0.040009014 = weight(_text_:system in 2654) [ClassicSimilarity], result of:
          0.040009014 = score(doc=2654,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.2435858 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.049459543 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
          0.049459543 = score(doc=2654,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.2708308 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
      0.4 = coord(2/5)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
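    The notions above translate naturally into plain data structures; a small illustrative sketch (a hypothetical rendering, not Priss's formalism verbatim):

      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Facet:
          """A facet combines a set of units with a binary relation over them."""
          name: str
          units: List[str]            # atomic elements (or names of nested facets)
          relation: List[List[int]]   # binary matrix: relation[i][j] is 0 or 1

          def related(self, a: str, b: str) -> bool:
              i, j = self.units.index(a), self.units.index(b)
              return self.relation[i][j] == 1

      # A tiny faceted-thesaurus fragment: a narrower-to-broader relation within one facet.
      material = Facet(name="material",
                       units=["substance", "metal", "iron"],
                       relation=[[0, 0, 0],
                                 [1, 0, 0],    # metal -> substance
                                 [0, 1, 0]])   # iron -> metal
      print(material.related("iron", "metal"))   # True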
    Date
    22. 1.2016 17:30:31
  7. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.03
    0.030674934 = product of:
      0.076687336 = sum of:
        0.034293443 = weight(_text_:system in 2623) [ClassicSimilarity], result of:
          0.034293443 = score(doc=2623,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.20878783 = fieldWeight in 2623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.042393893 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
          0.042393893 = score(doc=2623,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.23214069 = fieldWeight in 2623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
      0.4 = coord(2/5)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories (attribute/value-propagation, value-propagation, and value-constraint) and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  8. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.03
    0.030297382 = product of:
      0.07574345 = sum of:
        0.04041521 = weight(_text_:system in 2831) [ClassicSimilarity], result of:
          0.04041521 = score(doc=2831,freq=4.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.24605882 = fieldWeight in 2831, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2831)
        0.035328247 = weight(_text_:22 in 2831) [ClassicSimilarity], result of:
          0.035328247 = score(doc=2831,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.19345059 = fieldWeight in 2831, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2831)
      0.4 = coord(2/5)
    
    Abstract
    The purpose of this work is to develop an ontology-based framework for an information retrieval system that caters to specific user queries. To create such an ontology, information was obtained from a wide range of sources involved in brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility, which becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model for the domain of brain tumours. Our attempt has been to bridge library and information science and computer science, which itself required an experimental approach. We found the faceted approach to be enduring, as it supports properties such as navigation, exploration and faceted browsing. A computer-based brain tumour ontology supports researchers in gathering information on brain tumour research and allows users across the world to access new scientific information quickly, efficiently and intelligently.
    Date
    12. 3.2016 13:21:22
  9. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.03
    0.025562445 = product of:
      0.06390611 = sum of:
        0.028577866 = weight(_text_:system in 4607) [ClassicSimilarity], result of:
          0.028577866 = score(doc=4607,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.17398985 = fieldWeight in 4607, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4607)
        0.035328247 = weight(_text_:22 in 4607) [ClassicSimilarity], result of:
          0.035328247 = score(doc=4607,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.19345059 = fieldWeight in 4607, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4607)
      0.4 = coord(2/5)
    
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, because proper metadata annotations cannot be assigned fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss u.a
  10. Mahesh, K.: Highly expressive tagging for knowledge organization in the 21st century (2014) 0.03
    0.025562445 = product of:
      0.06390611 = sum of:
        0.028577866 = weight(_text_:system in 1434) [ClassicSimilarity], result of:
          0.028577866 = score(doc=1434,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.17398985 = fieldWeight in 1434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1434)
        0.035328247 = weight(_text_:22 in 1434) [ClassicSimilarity], result of:
          0.035328247 = score(doc=1434,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.19345059 = fieldWeight in 1434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1434)
      0.4 = coord(2/5)
    
    Abstract
    Knowledge organization of large-scale content on the Web requires substantial amounts of semantic metadata that is expensive to generate manually. Recent developments in Web technologies have enabled any user to tag documents and other forms of content thereby generating metadata that could help organize knowledge. However, merely adding one or more tags to a document is highly inadequate to capture the aboutness of the document and thereby to support powerful semantic functions such as automatic classification, question answering or true semantic search and retrieval. This is true even when the tags used are labels from a well-designed classification system such as a thesaurus or taxonomy. There is a strong need to develop a semantic tagging mechanism with sufficient expressive power to capture the aboutness of each part of a document or dataset or multimedia content in order to enable applications that can benefit from knowledge organization on the Web. This article proposes a highly expressive mechanism of using ontology snippets as semantic tags that map portions of a document or a part of a dataset or a segment of a multimedia content to concepts and relations in an ontology of the domain(s) of interest.
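    Purely as an illustration of what such an "ontology snippet" tag might carry (the article does not prescribe a concrete serialisation; the names below are invented):

      # A tag that maps one portion of a document to concepts and a relation from a domain ontology.
      snippet_tag = {
          "target": {"document": "doc-42", "span": [120, 348]},              # the tagged portion
          "concepts": ["ex:SemanticTagging", "ex:KnowledgeOrganization"],    # what that portion is about
          "relations": [("ex:SemanticTagging", "ex:supports", "ex:KnowledgeOrganization")],
      }
      print(snippet_tag["concepts"])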
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  11. Kiren, T.; Shoaib, M.: A novel ontology matching approach using key concepts (2016) 0.03
    0.025562445 = product of:
      0.06390611 = sum of:
        0.028577866 = weight(_text_:system in 2589) [ClassicSimilarity], result of:
          0.028577866 = score(doc=2589,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.17398985 = fieldWeight in 2589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2589)
        0.035328247 = weight(_text_:22 in 2589) [ClassicSimilarity], result of:
          0.035328247 = score(doc=2589,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.19345059 = fieldWeight in 2589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2589)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching of heterogeneous ontologies is often essential for many applications like semantic annotation, query answering or ontology integration. Some ontologies may include a large number of entities, which makes the ontology matching process very complex in terms of search space and execution time requirements. The purpose of this paper is to present a technique for finding the degree of similarity between ontologies that trims down the search space by eliminating the ontology concepts that have less likelihood of being matched. Design/methodology/approach - Algorithms are written for finding key concepts, concept matching and relationship matching. WordNet is used for solving synonym problems during the matching process. The technique is evaluated using the reference alignments between ontologies from the Ontology Alignment Evaluation Initiative benchmark in terms of degree of similarity, Pearson's correlation coefficient and the IR measures precision, recall and F-measure. Findings - A positive correlation between the computed degree of similarity and the degree of similarity of the reference alignment, together with the computed values of precision, recall and F-measure, showed that if only key concepts of ontologies are compared, a time- and search-space-efficient ontology matching system can be developed. Originality/value - On the basis of the present novel approach to ontology matching, it is concluded that using key concepts for ontology matching gives comparable results in reduced time and space.
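    A rough sketch of the key-concept idea, comparing only the most highly connected concepts of two ontologies (the selection criterion and the omission of WordNet synonym expansion are simplifications, not the paper's exact algorithm):

      from typing import Dict, Set

      def key_concepts(ontology: Dict[str, Set[str]], k: int = 3) -> Set[str]:
          """Take the k concepts with the most relations as 'key concepts'."""
          return set(sorted(ontology, key=lambda c: len(ontology[c]), reverse=True)[:k])

      def degree_of_similarity(o1: Dict[str, Set[str]], o2: Dict[str, Set[str]]) -> float:
          """Jaccard overlap of the key-concept names (case-insensitive, no synonym lookup)."""
          k1 = {c.lower() for c in key_concepts(o1)}
          k2 = {c.lower() for c in key_concepts(o2)}
          return len(k1 & k2) / max(len(k1 | k2), 1)

      o1 = {"Person": {"Author", "Reviewer"}, "Author": set(), "Reviewer": set()}
      o2 = {"person": {"writer"}, "writer": set(), "Topic": set()}
      print(degree_of_similarity(o1, o2))   # 0.2: only "person" overlaps among the key concepts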
    Date
    20. 1.2015 18:30:22
  12. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.03
    0.025562445 = product of:
      0.06390611 = sum of:
        0.028577866 = weight(_text_:system in 4553) [ClassicSimilarity], result of:
          0.028577866 = score(doc=4553,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.17398985 = fieldWeight in 4553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.035328247 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
          0.035328247 = score(doc=4553,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.19345059 = fieldWeight in 4553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
      0.4 = coord(2/5)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms that can be proven to be sound, complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  13. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.02
    0.021208167 = product of:
      0.053020418 = sum of:
        0.028290644 = weight(_text_:system in 1633) [ClassicSimilarity], result of:
          0.028290644 = score(doc=1633,freq=4.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.17224117 = fieldWeight in 1633, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1633)
        0.024729772 = weight(_text_:22 in 1633) [ClassicSimilarity], result of:
          0.024729772 = score(doc=1633,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.1354154 = fieldWeight in 1633, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1633)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - The purpose of this paper is to improve concept-based search by incorporating structural ontological information such as concepts and relations. Semantic information retrieval generally aims to identify relevant information based on the meanings of the query terms or on the context of those terms, and its performance is assessed with the standard measures of precision and recall: higher precision means that more of the retrieved documents are (meaningfully) relevant, while lower recall means less coverage of the concepts. Design/methodology/approach - The authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index; the index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, the approach focuses on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of ontology-based search. Both tasks make use of ontological concepts and the relations existing between them, so as to obtain semantically more relevant search results for a given query. Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of the concepts populated in the index; a new measure, the index enhancement measure, is introduced to estimate the coverage of the ontological concepts being indexed. The ontology-based search is evaluated for the tourism domain with tourism documents and a tourism-specific ontology. Search results with and without query expansion are compared to estimate the efficiency of the proposed query expansion task, and the ranking is compared with the ORank system to evaluate the performance of the ontology-based search. In these analyses the ontology-based search shows better recall than the other concept-based search systems: it achieves a mean average precision of 0.79 and a recall of 0.65, the ORank system a mean average precision of 0.62 and a recall of 0.51, and the concept-based search a mean average precision of 0.56 and a recall of 0.42. Practical implications - When a concept is not present in the domain-specific ontology it cannot be indexed, and when a query term is not available in the ontology, term-based results are retrieved instead. Originality/value - In addition to super- and sub-concepts, the concepts at the same level (siblings) are incorporated into the ontological index. The structural information from the ontology drives the query expansion, and the ranking of documents depends on the type of query (single-concept queries, multiple-concept queries and concept-with-relation queries) and on the ontological relations that exist in the query and the documents. With this ontological structural information, the search results show better coverage of concepts with respect to the query.
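    A toy sketch of ontology-driven query expansion with super-, sub- and sibling concepts, in the spirit of the approach described above (the index layout is invented for illustration):

      from typing import Dict, List, Set

      def expand_query(terms: List[str], ontology: Dict[str, Dict[str, Set[str]]]) -> Set[str]:
          """Expand query terms with super-, sub- and sibling concepts from a small ontology index."""
          expanded = set(terms)
          for term in terms:
              entry = ontology.get(term, {})
              expanded |= entry.get("super", set())
              expanded |= entry.get("sub", set())
              for parent in entry.get("super", set()):   # siblings share a super-concept
                  expanded |= ontology.get(parent, {}).get("sub", set())
          return expanded

      tourism = {
          "accommodation": {"super": set(), "sub": {"hotel", "hostel"}},
          "hotel":         {"super": {"accommodation"}, "sub": set()},
          "hostel":        {"super": {"accommodation"}, "sub": set()},
      }
      print(expand_query(["hotel"], tourism))   # {'hotel', 'accommodation', 'hostel'}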
    Date
    20. 1.2015 18:30:22
  14. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.01
    0.014131299 = product of:
      0.07065649 = sum of:
        0.07065649 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
          0.07065649 = score(doc=6089,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.38690117 = fieldWeight in 6089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=6089)
      0.2 = coord(1/5)
    
    Pages
    S.11-22
  15. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.01
    0.014131299 = product of:
      0.07065649 = sum of:
        0.07065649 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
          0.07065649 = score(doc=5576,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.38690117 = fieldWeight in 5576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=5576)
      0.2 = coord(1/5)
    
    Date
    13.12.2017 14:17:22
  16. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.01
    0.014131299 = product of:
      0.07065649 = sum of:
        0.07065649 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
          0.07065649 = score(doc=539,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.38690117 = fieldWeight in 539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=539)
      0.2 = coord(1/5)
    
    Date
    26.12.2011 13:22:07
  17. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.014131299 = product of:
      0.07065649 = sum of:
        0.07065649 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
          0.07065649 = score(doc=3406,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.38690117 = fieldWeight in 3406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3406)
      0.2 = coord(1/5)
    
    Date
    30. 5.2010 16:22:35
  18. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.01
    0.014131299 = product of:
      0.07065649 = sum of:
        0.07065649 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
          0.07065649 = score(doc=4523,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.38690117 = fieldWeight in 4523, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=4523)
      0.2 = coord(1/5)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
  19. Voß, J.: Das Simple Knowledge Organisation System (SKOS) als Kodierungs- und Austauschformat der DDC für Anwendungen im Semantischen Web (2007) 0.01
    0.013717378 = product of:
      0.068586886 = sum of:
        0.068586886 = weight(_text_:system in 243) [ClassicSimilarity], result of:
          0.068586886 = score(doc=243,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.41757566 = fieldWeight in 243, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.09375 = fieldNorm(doc=243)
      0.2 = coord(1/5)
    
  20. Ulrich, W.: Simple Knowledge Organisation System (2007) 0.01
    0.013717378 = product of:
      0.068586886 = sum of:
        0.068586886 = weight(_text_:system in 105) [ClassicSimilarity], result of:
          0.068586886 = score(doc=105,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.41757566 = fieldWeight in 105, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.09375 = fieldNorm(doc=105)
      0.2 = coord(1/5)
    

Languages

  • e 149
  • d 29
  • f 1
  • pt 1
  • sp 1

Types

  • a 129
  • el 44
  • m 15
  • x 14
  • s 6
  • n 3
  • r 2
  • p 1