Search (87 results, page 1 of 5)

  • theme_ss:"Wissensrepräsentation"
  1. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.09
    0.08653661 = sum of:
      0.054482006 = product of:
        0.16344601 = sum of:
          0.16344601 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
            0.16344601 = score(doc=5820,freq=2.0), product of:
              0.4362298 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05145426 = queryNorm
              0.3746787 = fieldWeight in 5820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.33333334 = coord(1/3)
      0.032054603 = product of:
        0.064109206 = sum of:
          0.064109206 = weight(_text_:learning in 5820) [ClassicSimilarity], result of:
            0.064109206 = score(doc=5820,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.27905482 = fieldWeight in 5820, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.5 = coord(1/2)
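     The score breakdown above is standard Lucene ClassicSimilarity explain output: each leaf weight is queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), scaled by a coord factor. A minimal sketch reproducing the "_text_:3a" leaf, with illustrative names rather than Lucene's API:
```python
# Minimal sketch of a ClassicSimilarity leaf from the explain tree above;
# function and variable names are illustrative, not Lucene's API.
import math

def leaf_weight(freq, idf, field_norm, query_norm):
    query_weight = idf * query_norm                     # "queryWeight" in the tree
    field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

w = leaf_weight(freq=2.0, idf=8.478011, field_norm=0.03125, query_norm=0.05145426)
print(round(w, 8))  # ~0.16344601; the coord(1/3) factor then scales this
                    # leaf to 0.054482006 in the final sum.
```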
    
    Abstract
     The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words provides only a shallow understanding of text; the word space offers limited information for document ranking. This dissertation goes beyond words and builds knowledge-based text representations, which embed external, carefully curated information from knowledge bases and provide richer, structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
     This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external, carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
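     A toy sketch of the bag-of-entities idea described above, assuming documents and the query have already been annotated with knowledge-base entity IDs (all identifiers are illustrative, not the dissertation's implementation):
```python
# Toy bag-of-entities ranking: documents are bags of entity annotations,
# and ranking happens in the entity space. Entity IDs are illustrative.
from collections import Counter

def bag_of_entities_score(query_entities, doc_entities):
    doc_bag = Counter(doc_entities)
    # Rank by total frequency of the query's entities in the document's bag.
    return sum(doc_bag[e] for e in query_entities)

docs = {
    "d1": ["Q2539", "Q11660", "Q2539"],  # entity annotations of document d1
    "d2": ["Q11660", "Q9135"],
}
query = ["Q2539"]  # the entity-linked query
ranking = sorted(docs, key=lambda d: bag_of_entities_score(query, docs[d]),
                 reverse=True)
print(ranking)  # ['d1', 'd2']
```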
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  2. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: LiTeWi: a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016) 0.06
    0.057496607 = product of:
      0.114993215 = sum of:
        0.114993215 = sum of:
          0.08013651 = weight(_text_:learning in 2645) [ClassicSimilarity], result of:
            0.08013651 = score(doc=2645,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.34881854 = fieldWeight in 2645, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2645)
          0.03485671 = weight(_text_:22 in 2645) [ClassicSimilarity], result of:
            0.03485671 = score(doc=2645,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 2645, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2645)
      0.5 = coord(1/2)
    
    Abstract
     Major efforts have been devoted to ontology learning, that is, semi-automatic processes for constructing domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. Identifying the terminology is crucial to building ontologies, and term extraction techniques allow the identification of domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology-supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned using a textbook on object-oriented programming and then tested with two textbooks from different domains: astronomy and molecular biology.
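     A hedged sketch of LiTeWi's core idea, keeping only candidate terms that also occur as Wikipedia article titles; the real system combines several unsupervised extractors, and all names below are illustrative:
```python
# Toy Wikipedia-filtered term extraction: generate candidate terms from a
# textbook passage and keep those that match Wikipedia article titles.
def candidate_terms(text):
    # Naive candidate generation: all unigrams and bigrams.
    words = text.lower().split()
    return set(words) | {" ".join(p) for p in zip(words, words[1:])}

def filter_with_wikipedia(candidates, wikipedia_titles):
    return sorted(t for t in candidates if t in wikipedia_titles)

titles = {"class", "inheritance", "object-oriented programming"}
print(filter_with_wikipedia(
    candidate_terms("Inheritance lets a class reuse code"), titles))
# ['class', 'inheritance']
```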
    Date
    22. 1.2016 12:38:14
  3. Ebrahimi, M.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.06
    0.057496607 = product of:
      0.114993215 = sum of:
        0.114993215 = sum of:
          0.08013651 = weight(_text_:learning in 4553) [ClassicSimilarity], result of:
            0.08013651 = score(doc=4553,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.34881854 = fieldWeight in 4553, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
          0.03485671 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
            0.03485671 = score(doc=4553,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 4553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
      0.5 = coord(1/2)
    
    Abstract
     Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics, which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms that can be proven to be sound, complete, and terminating, i.e., correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated that promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
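     The deductive gold standard referred to above can be illustrated in miniature with RDFS subclass reasoning by transitive closure (this is the deductive baseline, not the paper's deep learning model):
```python
# Miniature deductive RDFS reasoning: compute the transitive closure of
# rdfs:subClassOf over a set of triples. Illustrative, not the paper's code.
def entail_subclass(triples):
    closed = {(s, o) for s, p, o in triples if p == "rdfs:subClassOf"}
    changed = True
    while changed:
        # Add (a, d) whenever (a, b) and (b, d) are already entailed.
        new = {(a, d) for a, b in closed for c, d in closed if b == c} - closed
        changed = bool(new)
        closed |= new
    return {(s, "rdfs:subClassOf", o) for s, o in closed}

kb = [("Dog", "rdfs:subClassOf", "Mammal"),
      ("Mammal", "rdfs:subClassOf", "Animal")]
print(("Dog", "rdfs:subClassOf", "Animal") in entail_subclass(kb))  # True
```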
    Date
    16.11.2018 14:22:01
  4. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.05
    0.045760885 = product of:
      0.09152177 = sum of:
        0.09152177 = sum of:
          0.056665063 = weight(_text_:learning in 4607) [ClassicSimilarity], result of:
            0.056665063 = score(doc=4607,freq=2.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.24665193 = fieldWeight in 4607, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4607)
          0.03485671 = weight(_text_:22 in 4607) [ClassicSimilarity], result of:
            0.03485671 = score(doc=4607,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 4607, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4607)
      0.5 = coord(1/2)
    
    Abstract
     Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, due to the lack of proper metadata annotations, which cannot be assigned fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
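     A toy sketch of the mediator's role, assigning placeholder ("dummy") metadata to imported items so that the domain model need not change; all names are invented for illustration:
```python
# Toy mediator: imported items receive minimal placeholder metadata so the
# knowledge base can ingest them without a domain-model update.
DUMMY = {"concept": "ImportedItem", "relations": []}

def mediate(imported_items, knowledge_base):
    for item in imported_items:
        annotation = dict(DUMMY, source=item["id"])  # dummy annotation
        knowledge_base.append(annotation)
    return knowledge_base

kb = []
mediate([{"id": "lesson-42"}], kb)
print(kb)  # [{'concept': 'ImportedItem', 'relations': [], 'source': 'lesson-42'}]
```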
    Source
     Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22-27, 2007; proceedings. Eds.: U. Priss et al.
  5. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.04
    0.040861502 = product of:
      0.081723005 = sum of:
        0.081723005 = product of:
          0.24516901 = sum of:
            0.24516901 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.24516901 = score(doc=400,freq=2.0), product of:
                0.4362298 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05145426 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
     Cf.: https://aclanthology.org/D19-5317.pdf.
  6. Xu, Y.; Li, G.; Mou, L.; Lu, Y.: Learning non-taxonomic relations on demand for ontology extension (2014) 0.04
    0.038012084 = product of:
      0.07602417 = sum of:
        0.07602417 = product of:
          0.15204833 = sum of:
            0.15204833 = weight(_text_:learning in 2961) [ClassicSimilarity], result of:
              0.15204833 = score(doc=2961,freq=10.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.6618366 = fieldWeight in 2961, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2961)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Learning non-taxonomic relations has become an important research topic in ontology extension. Most existing learning approaches are based on expert-crafted corpora. These approaches are normally domain-specific, and corpus acquisition is laborious and costly. Moreover, being based on static corpora, they cannot meet personalized needs for discovering semantic relations across various taxonomies. In this paper, we propose a novel approach for learning non-taxonomic relations on demand. For any supplied taxonomy, it can focus on a segment of the taxonomy and dynamically collect information about the taxonomic concepts, using Wikipedia as a learning source. Based on the newly generated corpus, non-taxonomic relations are acquired in three steps: (a) semantic relatedness detection, (b) relation extraction between concepts, and (c) relation generalization within a hierarchy. The proposed approach is evaluated on three different predefined taxonomies, and the experimental results show that it is effective in capturing non-taxonomic relations as needed and has good potential for ontology extension on demand.
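     A hedged skeleton of the three-step acquisition pipeline named in the abstract; the helper logic is illustrative, not the authors' implementation:
```python
# Toy three-step pipeline: (a) relatedness detection, (b) relation
# extraction, (c) generalization within a hierarchy.
def related(c1, c2, corpus):
    # (a) Toy relatedness: the two concepts co-occur in some sentence.
    return any(c1 in s and c2 in s for s in corpus)

def extract_relation(c1, c2, corpus):
    # (b) Toy extraction: take the first word between the two concepts.
    for s in corpus:
        words = s.split()
        if c1 in words and c2 in words:
            i, j = sorted((words.index(c1), words.index(c2)))
            between = words[i + 1:j]
            if between:
                return (c1, between[0], c2)
    return None

def generalize(relations, parent_of):
    # (c) Toy generalization: lift relations to the concepts' parents.
    return {(parent_of.get(s, s), v, parent_of.get(o, o))
            for s, v, o in relations}

corpus = ["aspirin treats headache"]
if related("aspirin", "headache", corpus):
    rel = extract_relation("aspirin", "headache", corpus)
    print(rel)  # ('aspirin', 'treats', 'headache')
    print(generalize({rel}, {"aspirin": "drug", "headache": "symptom"}))
```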
  7. MacFarlane, A.; Missaoui, S.; Frankowska-Takhari, S.: On machine learning and knowledge organization in multimedia information retrieval (2020) 0.04
    0.03748042 = product of:
      0.07496084 = sum of:
        0.07496084 = product of:
          0.14992169 = sum of:
            0.14992169 = weight(_text_:learning in 5732) [ClassicSimilarity], result of:
              0.14992169 = score(doc=5732,freq=14.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.6525797 = fieldWeight in 5732, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5732)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Recent technological developments have increased the use of machine learning to solve many problems, including many in information retrieval. Multimedia information retrieval, as a problem, represents a significant challenge to machine learning as a technological solution, but some problems can still be addressed by using appropriate AI techniques. We review the technological developments and provide a perspective on the use of machine learning in conjunction with knowledge organization to address multimedia IR needs. The semantic gap in multimedia IR remains a significant problem in the field, and solutions to it are many years off. However, new technological developments allow the use of knowledge organization and machine learning in multimedia search systems and services. Specifically, we argue that improved detection of some classes of low-level features in images, music, and video can be used in conjunction with knowledge organization to tag or label multimedia content for better retrieval performance. We provide an overview of the use of knowledge organization schemes in machine learning and make recommendations to information professionals on using this technology with knowledge organization techniques to solve multimedia IR problems. We introduce a five-step process model that extracts features from multimedia objects (Step 1) from both knowledge organization (Step 1a) and machine learning (Step 1b) and merges them (Step 2) to create an index of those multimedia objects (Step 3). We also cover further steps in creating an application to utilize the multimedia objects (Step 4) and maintaining and updating the database of features on those objects (Step 5).
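     A minimal skeleton of the five-step process model summarized above, with placeholder extractors and data rather than the authors' system:
```python
# Toy five-step model: extract labels from knowledge organization (1a) and
# machine learning (1b), merge them (2), and index the objects (3).
def extract_ko_labels(obj):      # Step 1a: curated knowledge-organization tags
    return obj.get("tags", [])

def extract_ml_features(obj):    # Step 1b: machine-detected labels
    return obj.get("detected", [])

def merge(ko, ml):               # Step 2: merge both label sources
    return sorted(set(ko) | set(ml))

def build_index(objects):        # Step 3: label -> object IDs
    index = {}
    for oid, obj in objects.items():
        for label in merge(extract_ko_labels(obj), extract_ml_features(obj)):
            index.setdefault(label, set()).add(oid)
    return index

objects = {"img1": {"tags": ["sunset"], "detected": ["beach", "sunset"]}}
print(build_index(objects)["beach"])  # {'img1'}; an application (Step 4) would
                                      # query this index, and maintenance
                                      # (Step 5) would periodically rebuild it.
```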
  8. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.04
    0.03660871 = product of:
      0.07321742 = sum of:
        0.07321742 = sum of:
          0.04533205 = weight(_text_:learning in 179) [ClassicSimilarity], result of:
            0.04533205 = score(doc=179,freq=2.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.19732155 = fieldWeight in 179, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
          0.027885368 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
            0.027885368 = score(doc=179,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.15476047 = fieldWeight in 179, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
      0.5 = coord(1/2)
    
    Abstract
     Purpose: The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations.
     Design/methodology/approach: This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science by making qualitative research more transparent and enhancing both the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews.
     Findings: The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers by the interviewees, which need to be balanced; second, the approach takes more time, due to interview planning and analysis.
     Practical implications: The implication of the paper is, in the long run, to decentralize the design of open science infrastructures and to involve affected parties on several levels.
     Originality/value: In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design using mainly interviews to create an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  9. Wong, W.; Liu, W.; Bennamoun, M.: Ontology learning from text : a look back and into the future (2010) 0.03
    0.034351375 = product of:
      0.06870275 = sum of:
        0.06870275 = product of:
          0.1374055 = sum of:
            0.1374055 = weight(_text_:learning in 4733) [ClassicSimilarity], result of:
              0.1374055 = score(doc=4733,freq=6.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.59809923 = fieldWeight in 4733, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4733)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies are often viewed as the answer to the need for inter-operable semantics in modern information systems. The explosion of textual information on the "Read/Write" Web coupled with the increasing demand for ontologies to power the Semantic Web have made (semi-)automatic ontology learning from text a very promising research area. This together with the advanced state in related areas such as natural language processing have fuelled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium, and discusses the remaining challenges that will define the research directions in this area in the near future.
  10. El Idrissi Esserhrouchni, O.; Frikh, B.; Ouhbi, B.: OntologyLine : a new framework for learning non-taxonomic relations of domain ontology (2016) 0.03
    0.03399904 = product of:
      0.06799808 = sum of:
        0.06799808 = product of:
          0.13599616 = sum of:
            0.13599616 = weight(_text_:learning in 3379) [ClassicSimilarity], result of:
              0.13599616 = score(doc=3379,freq=8.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.59196466 = fieldWeight in 3379, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3379)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Domain ontology learning has been introduced as a technology that aims at reducing the bottleneck of knowledge acquisition in the construction of domain ontologies. However, the discovery and labelling of non-taxonomic relations has been identified as one of the most difficult problems in this learning process. In this paper, we propose OntologyLine, a new system for discovering non-taxonomic relations and building domain ontologies from scratch. The proposed system is based on adapting Open Information Extraction algorithms to extract and label relations between domain concepts. OntologyLine was tested in two different domains: the financial and cancer domains. It was evaluated against a gold-standard ontology and compared to a state-of-the-art ontology learning algorithm. The experimental results show that OntologyLine is more effective at acquiring non-taxonomic relations and gives better results in terms of precision, recall and F-measure.
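     For reference, the evaluation measures named above can be computed as follows; the extracted and gold relation sets are invented for illustration:
```python
# Precision, recall, and F-measure over sets of extracted relations.
def prf(extracted, gold):
    tp = len(extracted & gold)                      # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

extracted = {("insulin", "regulates", "glucose"), ("tumor", "causes", "pain")}
gold = {("insulin", "regulates", "glucose")}
print(prf(extracted, gold))  # (0.5, 1.0, 0.666...)
```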
  11. Cimiano, P.; Völker, J.; Studer, R.: Ontologies on demand? : a description of the state-of-the-art, applications, challenges and trends for ontology learning from text (2006) 0.03
    0.029444033 = product of:
      0.058888067 = sum of:
        0.058888067 = product of:
          0.11777613 = sum of:
            0.11777613 = weight(_text_:learning in 6014) [ClassicSimilarity], result of:
              0.11777613 = score(doc=6014,freq=6.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.51265645 = fieldWeight in 6014, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6014)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies are nowadays used for many applications requiring data, services and resources in general to be interoperable and machine understandable. Such applications are for example web service discovery and composition, information integration across databases, intelligent search, etc. The general idea is that data and services are semantically described with respect to ontologies, which are formal specifications of a domain of interest, and can thus be shared and reused in a way such that the shared meaning specified by the ontology remains formally the same across different parties and applications. As the cost of creating ontologies is relatively high, different proposals have emerged for learning ontologies from structured and unstructured resources. In this article we examine the maturity of techniques for ontology learning from textual resources, addressing the question whether the state-of-the-art is mature enough to produce ontologies 'on demand'.
  12. Li, J.; Zhang, Z.; Li, X.; Chen, H.: Kernel-based learning for biomedical relation extraction (2008) 0.03
    0.029444033 = product of:
      0.058888067 = sum of:
        0.058888067 = product of:
          0.11777613 = sum of:
            0.11777613 = weight(_text_:learning in 1611) [ClassicSimilarity], result of:
              0.11777613 = score(doc=1611,freq=6.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.51265645 = fieldWeight in 1611, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Relation extraction is the process of scanning text for relationships between named entities. Recently, significant studies have focused on automatically extracting relations from biomedical corpora. Most existing biomedical relation extractors require manual creation of biomedical lexicons or parsing templates based on domain knowledge. In this study, we propose to use kernel-based learning methods to automatically extract biomedical relations from literature text. We develop a framework of kernel-based learning for biomedical relation extraction. In particular, we modified the standard tree kernel function by incorporating a trace kernel to capture richer contextual information. In our experiments on a biomedical corpus, we compare different kernel functions for biomedical relation detection and classification. The experimental results show that a tree kernel outperforms word and sequence kernels for relation detection, our trace-tree kernel outperforms the standard tree kernel, and a composite kernel outperforms individual kernels for relation extraction.
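     As a hedged illustration of the composite-kernel idea (a weighted sum of valid kernels is again a valid kernel), the toy similarity functions below stand in for the paper's tree and trace kernels:
```python
# Toy composite kernel: combine two similarity functions by a weighted sum.
# The feature maps are stand-ins, not the paper's tree/trace kernels.
def word_kernel(x, y):
    # Overlap of word sets, a toy stand-in for a sequence kernel.
    return len(set(x["words"]) & set(y["words"]))

def path_kernel(x, y):
    # Overlap of dependency-path labels, a toy stand-in for a tree kernel.
    return len(set(x["path"]) & set(y["path"]))

def composite_kernel(x, y, alpha=0.5):
    return alpha * word_kernel(x, y) + (1 - alpha) * path_kernel(x, y)

a = {"words": ["protein", "binds", "gene"], "path": ["nsubj", "dobj"]}
b = {"words": ["protein", "inhibits", "gene"], "path": ["nsubj", "dobj"]}
print(composite_kernel(a, b))  # 0.5*2 + 0.5*2 = 2.0
```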
  13. Jiang, X.; Tan, A.-H.: CRCTOL: a semantic-based domain ontology learning system (2009) 0.03
    0.028332531 = product of:
      0.056665063 = sum of:
        0.056665063 = product of:
          0.113330126 = sum of:
            0.113330126 = weight(_text_:learning in 3320) [ClassicSimilarity], result of:
              0.113330126 = score(doc=3320,freq=8.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.49330387 = fieldWeight in 3320, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3320)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Domain ontologies play an important role in supporting knowledge-based applications in the Semantic Web. To facilitate the building of ontologies, text mining techniques have been used to perform ontology learning from texts. However, traditional systems employ shallow natural language processing techniques and focus only on concept and taxonomic relation extraction. In this paper, we present a system, known as Concept-Relation-Concept Tuple-based Ontology Learning (CRCTOL), for mining ontologies automatically from domain-specific documents. Specifically, CRCTOL adopts a full-text parsing technique and employs a combination of statistical and lexico-syntactic methods, including a statistical algorithm that extracts key concepts from a document collection, a word sense disambiguation algorithm that disambiguates words in the key concepts, a rule-based algorithm that extracts relations between the key concepts, and a modified generalized association rule mining algorithm that prunes unimportant relations for ontology learning. As a result, the ontologies learned by CRCTOL are more concise and contain richer semantics, in terms of the range and number of semantic relations, than those of alternative systems. We present two case studies where CRCTOL is used to build a terrorism domain ontology and a sport event domain ontology. At the component level, quantitative evaluation by comparison with Text-To-Onto and its successor Text2Onto has shown that CRCTOL is able to extract concepts and semantic relations with a significantly higher level of accuracy. At the ontology level, the quality of the learned ontologies is evaluated either by employing a set of quantitative and qualitative methods, including analysis of graph structural properties, comparison to WordNet, and expert rating, or by direct comparison with a human-edited benchmark ontology, demonstrating the high quality of the ontologies learned.
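     The rule-based relation-extraction step lends itself to a small illustration; the Hearst-style pattern below is a generic example of the lexico-syntactic methods named above, not CRCTOL's actual rules:
```python
# Toy lexico-syntactic extraction: a single Hearst-style "such as" pattern
# yielding is-a pairs. The pattern and example are illustrative.
import re

HEARST = re.compile(r"(\w+) such as (\w+(?:, \w+)*)")

def extract_isa(text):
    pairs = []
    for m in HEARST.finditer(text):
        for hypo in m.group(2).split(", "):
            pairs.append((hypo, "is-a", m.group(1)))
    return pairs

print(extract_isa("attacks such as bombing, hijacking were reported"))
# [('bombing', 'is-a', 'attacks'), ('hijacking', 'is-a', 'attacks')]
```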
  14. Harbig, D.; Schneider, R.: Ontology Learning im Rahmen von MyShelf (2006) 0.03
    0.028047776 = product of:
      0.05609555 = sum of:
        0.05609555 = product of:
          0.1121911 = sum of:
            0.1121911 = weight(_text_:learning in 5781) [ClassicSimilarity], result of:
              0.1121911 = score(doc=5781,freq=4.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.48834592 = fieldWeight in 5781, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5781)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This article deals with the machine learning of ontologies. Various approaches to ontology learning are presented. The focus is on the use of machine-learning algorithms for the automatic acquisition of ontologies for the virtual library shelf MyShelf, which, through ontology switching, offers users more flexible access to information holdings during retrieval. Learning techniques were applied to text corpora in order to assess their potential for the creation of ontologies.
  15. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    0.027241003 = product of:
      0.054482006 = sum of:
        0.054482006 = product of:
          0.16344601 = sum of:
            0.16344601 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.16344601 = score(doc=701,freq=2.0), product of:
                0.4362298 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05145426 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
     Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  16. Chen, H.; Baptista Nunes, J.M.; Ragsdell, G.; An, X.: Somatic and cultural knowledge : drivers of a habitus-driven model of tacit knowledge acquisition (2019) 0.02
    0.024290087 = product of:
      0.048580173 = sum of:
        0.048580173 = product of:
          0.09716035 = sum of:
            0.09716035 = weight(_text_:learning in 5460) [ClassicSimilarity], result of:
              0.09716035 = score(doc=5460,freq=12.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.42291996 = fieldWeight in 5460, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5460)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Purpose: The purpose of this paper is to identify and explain the role of individual learning and development in acquiring tacit knowledge in the context of the inexorable and intense continuous change (technological and otherwise) that characterizes our society today, and also to investigate the software (SW) sector, which is at the core of contemporary continuous change and is a paradigm of effective and intrinsic knowledge sharing (KS). This makes the SW sector unique and different from others, where KS is so hard to implement.
     Design/methodology/approach: The study employed an inductive qualitative approach based on a multi-case study design composed of three successful SW companies in China. These companies are representative of the fabric of the sector, namely a small- and medium-sized enterprise, a large private company and a large state-owned enterprise. The fieldwork included 44 participants who were interviewed using a semi-structured script. The interview data were coded and interpreted following the Straussian grounded theory pattern of open coding, axial coding and selective coding. Interviewing was stopped when theoretical saturation was achieved after a careful process of theoretical sampling.
     Findings: The findings suggest that individual learning and development are deemed the fundamental feature of professional success and survival in the continuously changing environment of the SW industry today. However, individual learning was described by the participants as much more than a mere individual process. It involves a collective and participatory effort within the organization and the sector as a whole, and a KS process that transcends organizational, cultural and national borders. Individuals in particular are mostly motivated by the pressing need to face and adapt to the dynamic and changeable environments of today's digital society that is led by the sector. Software practitioners are continuously in need of learning, refreshing and accumulating tacit knowledge, partly because it is required by their companies, but also due to a sound awareness of continuous technical and technological changes that seem only to increase with the advances of information technology. This led to a clear theoretical understanding that the continuous change facing the sector has led to individual acquisition of culture and somatic knowledge, which in turn lays the foundation not only for the awareness of the need for continuous individual professional development but also for the creation of habitus related to KS and continuous learning.
     Originality/value: The study shows that there is a theoretical link between the existence of conducive organizational and sector-wide somatic and cultural knowledge, and the success of KS practices that lead to individual learning and development. The theory proposed therefore suggests that somatic and cultural knowledge are crucial drivers for the creation of habitus of individual tacit knowledge acquisition. The paper further proposes a habitus-driven individual development (HDID) theoretical model that can be of use to both academics and practitioners interested in fostering and developing processes of KS and individual development in knowledge-intensive organizations.
  17. Bardhan, S.; Dutta, B.: ONCO: an ontology model for MOOC platforms (2022) 0.02
    0.024040952 = product of:
      0.048081905 = sum of:
        0.048081905 = product of:
          0.09616381 = sum of:
            0.09616381 = weight(_text_:learning in 1111) [ClassicSimilarity], result of:
              0.09616381 = score(doc=1111,freq=4.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.41858223 = fieldWeight in 1111, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Searching for a particular course requires browsing through different e-learning platforms, which becomes a time-consuming process. To resolve this issue, an ontology has been developed that can provide single-point access to all the e-learning platforms. The modelled ONline Course Ontology (ONCO) is based on YAMO, METHONTOLOGY and IDEF5 and built with the Protégé ontology editing tool. ONCO is integrated with sample data and later evaluated using pre-defined competency questions. Complex SPARQL queries are executed to assess the effectiveness of the constructed ontology. The modelled ontology is able to answer all the sampled queries. ONCO has been developed for the efficient retrieval of similar courses from massive open online course (MOOC) platforms.
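     A hedged sketch of the kind of SPARQL competency question ONCO might answer; the namespace, class and property names below are invented for illustration, not taken from the paper:
```python
# Toy competency-question check with rdflib: populate a tiny graph and run
# a SPARQL query. The onco: vocabulary here is invented for the sketch.
from rdflib import Graph, Literal, Namespace, RDF

ONCO = Namespace("http://example.org/onco#")
g = Graph()
g.add((ONCO.course1, RDF.type, ONCO.Course))
g.add((ONCO.course1, ONCO.topic, Literal("ontology engineering")))
g.add((ONCO.course1, ONCO.platform, Literal("PlatformA")))

q = """
PREFIX onco: <http://example.org/onco#>
SELECT ?course ?platform WHERE {
  ?course a onco:Course ;
          onco:topic "ontology engineering" ;
          onco:platform ?platform .
}"""
for row in g.query(q):
    print(row.course, row.platform)  # single-point access across platforms
```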
  18. Thellefsen, M.: ¬The dynamics of information representation and knowledge mediation (2006) 0.02
    0.022666026 = product of:
      0.04533205 = sum of:
        0.04533205 = product of:
          0.0906641 = sum of:
            0.0906641 = weight(_text_:learning in 170) [ClassicSimilarity], result of:
              0.0906641 = score(doc=170,freq=2.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.3946431 = fieldWeight in 170, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0625 = fieldNorm(doc=170)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization for a global learning society: Proceedings of the 9th International ISKO Conference, 4-7 July 2006, Vienna, Austria. Hrsg.: G. Budin, C. Swertz u. K. Mitgutsch
  19. Developments in applied artificial intelligence : proceedings / 16th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE 2003, Loughborough, UK, June 23 - 26, 2003 (2003) 0.02
    0.020034127 = product of:
      0.040068254 = sum of:
        0.040068254 = product of:
          0.08013651 = sum of:
            0.08013651 = weight(_text_:learning in 441) [ClassicSimilarity], result of:
              0.08013651 = score(doc=441,freq=4.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.34881854 = fieldWeight in 441, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=441)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This book constitutes the refereed proceedings of the 16th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE 2003, held in Loughborough, UK in June 2003. The 81 revised full papers presented were carefully reviewed and selected from more than 140 submissions. Among the topics addressed are soft computing, fuzzy logic, diagnosis, knowledge representation, knowledge management, automated reasoning, machine learning, planning and scheduling, evolutionary computation, computer vision, agent systems, algorithmic learning, tutoring systems, financial analysis, etc.
  20. Pepper, S.: Topic maps (2009) 0.02
    0.019832773 = product of:
      0.039665546 = sum of:
        0.039665546 = product of:
          0.07933109 = sum of:
            0.07933109 = weight(_text_:learning in 3149) [ClassicSimilarity], result of:
              0.07933109 = score(doc=3149,freq=2.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.3453127 = fieldWeight in 3149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3149)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Topic Maps is an international standard technology for describing knowledge structures and using them to improve the findability of information. It is based on a formal model that subsumes those of traditional finding aids such as indexes, glossaries, and thesauri, and extends them to cater for the additional complexities of digital information. Topic Maps is increasingly used in enterprise information integration, knowledge management, e-learning, and digital libraries, and as the foundation for Web-based information delivery solutions. This entry provides a comprehensive treatment of the core concepts, as well as describing the background and current status of the standard and its relationship to traditional knowledge organization techniques.
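     As a hedged illustration of the core Topic Maps concepts (topics, associations, occurrences), not a conforming implementation of the standard:
```python
# Toy rendering of the Topic Maps core model: topics are subjects,
# associations relate them, occurrences point to relevant resources.
topic_map = {
    "topics": {"puccini": {"name": "Giacomo Puccini"},
               "tosca": {"name": "Tosca"}},
    "associations": [("puccini", "composed", "tosca")],
    "occurrences": {"tosca": ["https://example.org/tosca-libretto"]},
}

def topics_associated_with(tm, topic_id):
    # Follow associations in either direction, as an index entry would.
    return [(r, b if a == topic_id else a)
            for a, r, b in tm["associations"] if topic_id in (a, b)]

print(topics_associated_with(topic_map, "puccini"))  # [('composed', 'tosca')]
```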

Languages

  • e 73
  • d 14

Types

  • a 64
  • el 19
  • x 7
  • m 5
  • s 2
  • n 1
  • r 1