Search (90 results, page 1 of 5)

  • theme_ss:"Wissensrepräsentation"
  1. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.11
    0.114984974 = product of:
      0.22996995 = sum of:
        0.07585828 = product of:
          0.15171656 = sum of:
            0.108973406 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.108973406 = score(doc=5820,freq=2.0), product of:
                0.29084495 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0343058 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
            0.042743158 = weight(_text_:learning in 5820) [ClassicSimilarity], result of:
              0.042743158 = score(doc=5820,freq=4.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.27905482 = fieldWeight in 5820, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.5 = coord(2/4)
        0.15411167 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.15411167 = score(doc=5820,freq=4.0), product of:
            0.29084495 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0343058 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.5 = coord(2/4)
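    The tree above is Lucene ClassicSimilarity "explain" output; its leaf weights can be reproduced from the documented TF-IDF formulas. A minimal sketch (the function name is ours; the formulas follow Lucene's TFIDFSimilarity documentation):

```python
import math

# Lucene ClassicSimilarity (TFIDFSimilarity) building blocks, matching
# the "explain" tree above:
#   tf(freq)              = sqrt(freq)
#   idf(docFreq, maxDocs) = 1 + ln(maxDocs / (docFreq + 1))
#   queryWeight           = idf * queryNorm
#   fieldWeight           = tf * idf * fieldNorm
#   leaf score            = queryWeight * fieldWeight
def classic_leaf_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return (idf * query_norm) * (tf * idf * field_norm)

# The weight(_text_:3a in 5820) leaf from the breakdown above:
score = classic_leaf_score(freq=2.0, doc_freq=24, max_docs=44218,
                           query_norm=0.0343058, field_norm=0.03125)
print(f"{score:.9f}")  # ≈ 0.108973406, as in the explain tree
```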
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words provides only a shallow understanding of text, and the word space carries limited information for document ranking. This dissertation goes beyond words and builds knowledge-based text representations, which embed external, carefully curated information from knowledge bases and provide richer, structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with expansion terms. We then present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. On the document representation side, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations and ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts and to rank documents using these structural representations. This dissertation overcomes the limitations of word-based representations with external, carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
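    The bag-of-entities idea from the abstract can be pictured as ranking in the entity space rather than the word space. A minimal sketch (the entity names and the overlap score are illustrative, not the thesis's exact model):

```python
from collections import Counter

def bag_of_entities(annotations):
    """Represent a text by the multiset of its entity annotations."""
    return Counter(annotations)

def entity_score(query_entities, doc_entities):
    # Match in the entity space: sum over shared entities of
    # query frequency * document frequency.
    q, d = bag_of_entities(query_entities), bag_of_entities(doc_entities)
    return sum(q[e] * d[e] for e in q if e in d)

docs = {
    "d1": ["Carnegie_Mellon_University", "Information_retrieval",
           "Information_retrieval", "Knowledge_base"],
    "d2": ["Bag-of-words_model", "Text_mining"],
}
query = ["Information_retrieval", "Knowledge_base"]
ranking = sorted(docs, key=lambda d: entity_score(query, docs[d]), reverse=True)
print(ranking)  # ['d1', 'd2']
```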
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  2. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.10
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.07
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.05
    Content
    See also: Sokol, J.: Spielend lernen. In: Spektrum der Wissenschaft. 2018, H.11, S.72-76.
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
  5. Harbig, D.; Schneider, R.: Ontology Learning im Rahmen von MyShelf (2006) 0.03
    Abstract
    This article deals with machine learning of ontologies. Various approaches to ontology learning are presented. The focus is on the use of machine learning algorithms for the automatic acquisition of ontologies for the virtual library shelf MyShelf, which offers users more flexible access to information holdings during search via ontology switching. Learning techniques were applied to text corpora in order to assess their potential for creating ontologies.
  6. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: LiTeWi : a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016) 0.01
    Abstract
    Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to building ontologies. Term extraction techniques allow the identification of domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology-supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned using a textbook on object-oriented programming and then tested with two textbooks from different domains: astronomy and molecular biology.
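    A toy sketch of the kind of unsupervised term extraction LiTeWi combines: candidates are scored against a generic background corpus and filtered through Wikipedia titles. The corpora, the scoring formula, and the title set here are illustrative stand-ins, not the paper's method:

```python
import math
import re
from collections import Counter

def candidate_terms(text):
    # Naive candidate generation: lowercase unigrams plus bigrams.
    tokens = re.findall(r"[a-z]+", text.lower())
    return tokens + [" ".join(p) for p in zip(tokens, tokens[1:])]

def score_terms(textbook, generic_corpus):
    """Rank candidates by frequency in the textbook weighted against a
    generic background corpus (a crude 'termhood' signal)."""
    tb = Counter(candidate_terms(textbook))
    bg = Counter(candidate_terms(generic_corpus))
    n_bg = sum(bg.values()) or 1
    return {t: f * math.log(n_bg / (1 + bg[t])) for t, f in tb.items()}

# Toy stand-in for the set of Wikipedia article titles used as a filter:
wiki_titles = {"object", "class", "inheritance", "object oriented"}

textbook = "a class defines an object. inheritance lets a class extend a class."
generic = "a the and of to a the of in on a the"
scores = score_terms(textbook, generic)
terms = sorted((t for t in scores if t in wiki_titles),
               key=scores.get, reverse=True)
print(terms)  # ['class', 'object', 'inheritance']
```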
    Date
    22. 1.2016 12:38:14
  7. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    Abstract
    Semantic Web knowledge representation standards, in particular RDF and OWL, often come endowed with a formal semantics, which is considered to be of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms that can be proven to be sound, complete, and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which hold high promise for scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
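    The "deductive gold standard" mentioned in the abstract can be illustrated with a minimal forward-chaining closure over two RDFS-style entailment rules (a sketch of rule-based deduction, not the paper's deep-learning model):

```python
# Two RDFS-style entailment rules, applied to a fixed point:
#   (A subClassOf B), (B subClassOf C)  =>  (A subClassOf C)
#   (x type A),      (A subClassOf B)   =>  (x type B)
def rdfs_closure(triples):
    triples = set(triples)
    while True:
        new = set()
        for (s, p, o) in triples:
            if p == "subClassOf":
                for (s2, p2, o2) in triples:
                    if p2 == "subClassOf" and s2 == o:
                        new.add((s, "subClassOf", o2))
                    if p2 == "type" and o2 == s:
                        new.add((s2, "type", o))
        if new <= triples:          # nothing left to infer
            return triples
        triples |= new

kb = {("Dog", "subClassOf", "Mammal"),
      ("Mammal", "subClassOf", "Animal"),
      ("rex", "type", "Dog")}
inferred = rdfs_closure(kb)
print(("rex", "type", "Animal") in inferred)  # True
```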
    Date
    16.11.2018 14:22:01
  8. Bauckhage, C.: Moderne Textanalyse : neues Wissen für intelligente Lösungen (2016) 0.01
    Abstract
    In the wake of the ever greater availability of data (big data) and rapid progress in data-driven machine learning, we have witnessed breakthroughs in artificial intelligence in recent years. This talk examines these developments, particularly with regard to the automatic analysis of text data. Using simple examples, we illustrate how modern text analysis works and show, again by example, which practical applications arise today in industries such as publishing, finance, and consulting.
  9. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.01
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, due to the lack of proper metadata annotations, which cannot be assigned fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
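    The mediator described above might be sketched as follows: imported items receive placeholder ("dummy") metadata so the host environment can process them, at the cost of the functionality that needs real annotations. All names here are hypothetical:

```python
# Placeholder metadata assigned when no real annotator is available.
DUMMY = {"domain_concepts": [], "provenance": "external-import"}

def mediate(item_id, payload, annotate=None):
    """Wrap an imported item for the host environment. If no annotator
    is supplied, fall back to dummy metadata (partial integration)."""
    metadata = annotate(payload) if annotate else dict(DUMMY)
    return {"id": item_id, "payload": payload, "metadata": metadata,
            "fully_integrated": annotate is not None}

item = mediate("lesson-42", "Imported eLearning content")
print(item["fully_integrated"])  # False
```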
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007; proceedings. Eds.: U. Priss et al.
  10. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.01
    Source
    Lernen - Wissen - Adaption : workshop proceedings / LWA 2007, Halle, September 2007. Martin Luther University Halle-Wittenberg, Institute for Informatics, Databases and Information Systems. Ed.: Alexander Hinneburg
  11. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.01
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science, making qualitative research more transparent and enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes due to participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers from the interviewees, which need to be balanced; second, this approach takes more time, due to interview planning and analysis. Practical implications The implication of the paper is, in the long run, to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design using mainly interviews to create an ontology for open science. The authors focus on close contact with researchers in order to build the ontology on the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  12. Beierle, C.; Kern-Isberner, G.: Methoden wissensbasierter Systeme : Grundlagen, Algorithmen, Anwendungen (2008) 0.01
    Abstract
    This book presents a broad spectrum of current methods for representing and processing (un)certain knowledge in machine systems, in a didactically prepared form. Besides symbolic approaches to non-monotonic reasoning (default logic, realized here constructively and accessibly via so-called default trees), quantitative methods such as probabilistic Markov and Bayesian networks are presented in detail. Further sections deal with knowledge dynamics (truth maintenance systems), actions and planning, machine learning, data mining, and case-based reasoning. An in-depth cross-section treats central alternative approaches to logic-based knowledge modeling in detail. Algorithms described in detail give practitioners useful guidance for applying the presented approaches, while solid background knowledge conveys a deeper understanding of the particularities of the individual methods. With a largely complete presentation of the material and numerous exercises integrated into the text, the book is designed for self-study, but is equally suitable for a corresponding lecture course. Among other things, the book's online service offers detailed solution hints for all exercises. Numerous examples with medical, biological, economic, and technical backgrounds illustrate concrete application scenarios. Recommended by renowned professors: the book offers the state of the art in this classic area of computer science. The essential methods of knowledge-based systems are presented clearly and vividly, centered on the representation and processing of certain and uncertain knowledge in machine systems. In the fourth, improved edition, the number of motivating self-test exercises with current practical relevance has again been expanded. An online service with detailed model solutions facilitates learning.
  13. Xu, Y.; Li, G.; Mou, L.; Lu, Y.: Learning non-taxonomic relations on demand for ontology extension (2014) 0.01
    Abstract
    Learning non-taxonomic relations has become an important research topic in ontology extension. Most existing learning approaches are based mainly on expert-crafted corpora. These approaches are normally domain-specific, and corpus acquisition is laborious and costly. Moreover, static corpora cannot meet personalized needs for discovering semantic relations across various taxonomies. In this paper, we propose a novel approach for learning non-taxonomic relations on demand. For any supplied taxonomy, it can focus on a segment of the taxonomy and dynamically collect information about the taxonomic concepts, using Wikipedia as a learning source. Based on the newly generated corpus, non-taxonomic relations are acquired in three steps: a) semantic relatedness detection; b) relation extraction between concepts; and c) relation generalization within a hierarchy. The proposed approach is evaluated on three different predefined taxonomies, and the experimental results show that it is effective in capturing non-taxonomic relations as needed and has good potential for ontology extension on demand.
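    Step a) of the pipeline, semantic relatedness detection, could be sketched with sentence-level co-occurrence as a crude proxy. The corpus, concepts, and proxy measure here are illustrative only; the paper draws its corpus dynamically from Wikipedia:

```python
from collections import Counter
from itertools import combinations

def relatedness(sentences, concepts):
    """Count how often pairs of taxonomic concepts co-occur in a
    sentence -- a crude proxy for semantic relatedness detection."""
    pair_counts = Counter()
    for sent in sentences:
        present = [c for c in concepts if c.lower() in sent.lower()]
        pair_counts.update(combinations(sorted(present), 2))
    return pair_counts

corpus = ["The engine drives the propeller.",
          "A jet engine compresses incoming air.",
          "The propeller converts rotation into thrust."]
pairs = relatedness(corpus, ["engine", "propeller", "thrust"])
print(pairs[("engine", "propeller")])  # 1
```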
  14. MacFarlane, A.; Missaoui, S.; Frankowska-Takhari, S.: On machine learning and knowledge organization in multimedia information retrieval (2020) 0.01
    Abstract
    Recent technological developments have increased the use of machine learning to solve many problems, including many in information retrieval. Multimedia information retrieval poses a significant challenge for machine learning as a technological solution, but some of its problems can be addressed with appropriate AI techniques. We review the technological developments and provide a perspective on the use of machine learning in conjunction with knowledge organization to address multimedia IR needs. The semantic gap remains a significant problem in multimedia IR, and solutions to it are many years off. However, new technological developments allow knowledge organization and machine learning to be used together in multimedia search systems and services. Specifically, we argue that improved detection of some classes of low-level features in images, music and video can be combined with knowledge organization to tag or label multimedia content for better retrieval performance. We provide an overview of the use of knowledge organization schemes in machine learning and make recommendations to information professionals on using this technology together with knowledge organization techniques to solve multimedia IR problems. We introduce a five-step process model that extracts features from multimedia objects (Step 1) using both knowledge organization (Step 1a) and machine learning (Step 1b), merges them (Step 2), and creates an index of those multimedia objects (Step 3). We also outline further steps for building an application that utilizes the multimedia objects (Step 4) and for maintaining and updating the database of features on those objects (Step 5).
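Steps 1-3 of the five-step process model above can be sketched in a few lines. All function names and feature labels here are hypothetical placeholders, not from the article:

```python
def detect_low_level_features(obj):
    # Step 1b placeholder: stands in for a trained classifier over
    # image/music/video features; here we just read pre-computed tags.
    return {f"ml:{tag}" for tag in obj.get("auto_tags", [])}

def index_multimedia(objects, ko_scheme):
    """Sketch of Steps 1-3: extract features from knowledge organization
    (1a) and machine learning (1b), merge them (2), build an index (3)."""
    index = {}
    for obj_id, obj in objects.items():
        ko_labels = set(ko_scheme.get(obj_id, set()))   # Step 1a: KO labels
        ml_labels = detect_low_level_features(obj)      # Step 1b: ML detections
        index[obj_id] = ko_labels | ml_labels           # Step 2: merge
    return index                                        # Step 3: the index

objects = {"img1": {"auto_tags": ["beach", "sunset"]}}
ko = {"img1": {"ko:Landscape"}}
idx = index_multimedia(objects, ko)
# idx["img1"] == {"ko:Landscape", "ml:beach", "ml:sunset"}
```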
  15. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.01
    
    Pages
    S.11-22
  16. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.01
    
    Date
    13.12.2017 14:17:22
  17. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.01
    
    Date
    26.12.2011 13:22:07
  18. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    
    Date
    30. 5.2010 16:22:35
  19. Wong, W.; Liu, W.; Bennamoun, M.: Ontology learning from text : a look back and into the future (2010) 0.01
    
    Abstract
    Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the "Read/Write" Web, coupled with the increasing demand for ontologies to power the Semantic Web, has made (semi-)automatic ontology learning from text a very promising research area. This, together with the advanced state of related areas such as natural language processing, has fuelled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium and discusses the remaining challenges that will define the research directions in this area in the near future.
  20. El Idrissi Esserhrouchni, O.; Frikh, B.; Ouhbi, B.: OntologyLine : a new framework for learning non-taxonomic relations of domain ontology (2016) 0.01
    
    Abstract
    Domain ontology learning has been introduced as a technology that aims at reducing the bottleneck of knowledge acquisition in the construction of domain ontologies. However, the discovery and labelling of non-taxonomic relations have been identified as among the most difficult problems in this learning process. In this paper, we propose OntologyLine, a new system for discovering non-taxonomic relations and building domain ontologies from scratch. The proposed system adapts Open Information Extraction algorithms to extract and label relations between domain concepts. OntologyLine was tested in two different domains, finance and cancer, evaluated against a gold-standard ontology, and compared to a state-of-the-art ontology learning algorithm. The experimental results show that OntologyLine is more effective at acquiring non-taxonomic relations and gives better results in terms of precision, recall and F-measure.
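The evaluation metrics reported above reduce to set comparison between extracted and gold-standard relation triples. A minimal sketch, with invented triples for illustration only:

```python
def prf(extracted, gold):
    """Precision, recall and F-measure of extracted relation triples
    against a gold-standard set."""
    tp = len(extracted & gold)  # correctly extracted relations
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented (subject, relation, object) triples for illustration only
gold = {("tumour", "treated_by", "chemotherapy"),
        ("bank", "issues", "loan"),
        ("loan", "has", "interest_rate")}
extracted = {("tumour", "treated_by", "chemotherapy"),
             ("bank", "issues", "loan"),
             ("bank", "located_in", "city")}
p, r, f = prf(extracted, gold)  # each equals 2/3 here
```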
