Search (146 results, page 1 of 8)

  • theme_ss:"Wissensrepräsentation"
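  The relevance figure shown after each entry (0.33, 0.29, ...) is a Lucene ClassicSimilarity (tf-idf) score. As a worked example, using the figures from the ranking engine's own breakdown of the top entry: each matching query term t contributes
      w(t, d) = sqrt(tf) * idf(t)^2 * queryNorm * fieldNorm(d)
  so for the term "2f" in that document, tf = 2, idf = 8.478011, queryNorm = 0.038207654 and fieldNorm = 0.046875 give w = 1.4142135 * 8.478011^2 * 0.038207654 * 0.046875 ≈ 0.1821. Four of the six query terms match, so the document score is the coordination-weighted sum 0.6666667 * (0.0607 + 0.1821 + 0.1821 + 0.0668) ≈ 0.3277, displayed as 0.33.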
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.33
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values, namely a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
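    The hierarchy growth algorithm in entry 1 is described only at the level of its inputs (synonym, sibling, and ancestor-descendant relations) and its acyclicity guarantee. Purely as an illustration of that guarantee, and not as the authors' actual method, a minimal Python sketch of cycle-safe edge insertion might look as follows; all names and the confidence-ordering heuristic are invented for the example.
      # Hypothetical sketch: add inferred parent->child links one at a time and
      # reject any link that would close a cycle, so the result stays acyclic.
      def creates_cycle(children, parent, child):
          """True if `parent` is already reachable from `child`."""
          stack, seen = [child], set()
          while stack:
              node = stack.pop()
              if node == parent:
                  return True
              if node in seen:
                  continue
              seen.add(node)
              stack.extend(children.get(node, ()))
          return False

      def grow_hierarchy(candidate_links):
          """candidate_links: iterable of (parent, child, confidence) tuples."""
          children = {}
          for parent, child, conf in sorted(candidate_links, key=lambda t: -t[2]):
              if parent != child and not creates_cycle(children, parent, child):
                  children.setdefault(parent, set()).add(child)   # accept the link
          return children

      # Example: the reversed "svm" -> "classification" link would close a cycle and is dropped.
      links = [("classification", "svm", 0.8), ("classification", "knn", 0.7),
               ("svm", "classification", 0.4)]
      print(grow_hierarchy(links))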
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.29
    Abstract
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.14
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Vallet, D.; Fernández, M.; Castells, P.: ¬An ontology-based information retrieval model (2005) 0.03
    Abstract
    Semantic search has been one of the motivations of the Semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based KBs to improve search over large document repositories. Our approach includes an ontology-based scheme for the semi-automatic annotation of documents, and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm and a ranking algorithm. Semantic search is combined with keyword-based search to achieve tolerance to KB incompleteness. Our proposal is illustrated with sample experiments showing improvements with respect to keyword-based search, and providing ground for further research and discussion.
    Source
    The Semantic Web: research and applications ; second European Semantic Web Conference, ESWC 2005, Heraklion, Crete, Greece, May 29 - June 1, 2005 ; proceedings. Eds.: A. Gómez-Pérez and J. Euzenat
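    Entry 4 combines semantic (annotation-based) retrieval with keyword search so that gaps in the knowledge base degrade gracefully. The lines below are only a schematic reading of that combination, with an invented interpolation weight and placeholder scoring functions, not the model from the paper.
      # Hypothetical sketch: interpolate a semantic score with a keyword score,
      # falling back to keywords when the KB has no annotations for a document.
      def combined_score(doc, query, semantic_score, keyword_score, alpha=0.6):
          sem = semantic_score(doc, query)   # e.g. cosine over annotation weights
          kw = keyword_score(doc, query)     # e.g. classic tf-idf cosine
          if sem is None:                    # document not covered by the KB
              return kw
          return alpha * sem + (1 - alpha) * kw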
  5. Amarger, F.; Chanet, J.-P.; Haemmerlé, O.; Hernandez, N.; Roussey, C.: SKOS sources transformations for ontology engineering : agronomical taxonomy use case (2014) 0.03
    Abstract
    Sources like thesauri or taxonomies are already used as input in the ontology development process. Some of them are also published on the LOD using the SKOS format. Reusing this type of source to build an ontology is not an easy task: the ontology developer has to face different syntaxes and different modelling goals. In this paper we propose a new methodology to transform several non-ontological sources into a single ontology. We take into account the redundancy of the knowledge extracted from the sources, in order to discover the consensual knowledge, and Ontology Design Patterns (ODPs) to guide the transformation process. We have evaluated our methodology by creating an ontology on wheat taxonomy from three sources: the Agrovoc thesaurus, the TaxRef taxonomy, and the NCBI taxonomy.
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al
  6. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.02
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be applied to reveal the contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules along with the initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprised of multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  7. Koenderink, N.J.J.P.; Assem, M. van; Hulzebos, J.L.; Broekstra, J.; Top, J.L.: ROC: a method for proto-ontology construction by domain experts (2008) 0.02
    Abstract
    Ontology construction is a labour-intensive and costly process. Even though many formal and semi-formal vocabularies are available, creating an ontology for a specific application is hindered in a number of ways. Firstly, eliciting concepts is a time-consuming and strenuous process. Secondly, it is difficult to keep focus. Thirdly, technical modelling constructs are hard to understand for the uninitiated. We propose ROC as a method to cope with these problems. ROC builds on well-known approaches for ontology construction. However, we first reuse existing sources to generate a repository of proposed associations: ROC assists in efficiently putting forward all relevant concepts and relations by providing a large set of potential candidate associations. Secondly, rather than using intermediate representations of formal constructs, we confront the domain expert with 'natural-language-like' statements generated from RDF-based triples. Moreover, we strictly separate the roles of problem owner, domain expert and knowledge engineer, each having his own responsibilities and skills. The domain expert and problem owner keep focus by monitoring a well-defined application purpose. We have implemented an initial set of tools to support ROC. This paper describes the ROC method and two application cases in which we evaluate the overall approach.
    Date
    29. 7.2011 14:44:56
  8. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.02
    Abstract
    The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination, or triangulation, of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
    Date
    16.11.2017 13:29:12
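    Entry 8 triangulates three journal maps (JJCR, SAK, TWRC). The snippet below only illustrates the idea of putting normalised similarity matrices side by side and averaging them; the study itself interprets the three maps jointly rather than merging them numerically, and the weights here are arbitrary.
      # Hypothetical sketch: normalise three journal-journal similarity matrices
      # and average them into a single combined map.
      import numpy as np

      def normalise(m):
          m = np.asarray(m, dtype=float)
          return m / m.max() if m.max() > 0 else m

      def triangulate(jjcr, sak, twrc, weights=(1/3, 1/3, 1/3)):
          return sum(w * normalise(m) for w, m in zip(weights, (jjcr, sak, twrc)))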
  9. Zhitomirsky-Geffet, M.; Erez, E.S.; Bar-Ilan, J.: Toward multiviewpoint ontology construction by collaboration of non-experts and crowdsourcing : the case of the effect of diet on health (2017) 0.02
    Abstract
    Domain experts are skilled in building a narrow ontology that reflects their subfield of expertise based on their work experience and personal beliefs. We call this type of ontology a single-viewpoint ontology. There can be a variety of such single-viewpoint ontologies that represent a wide spectrum of subfields and expert opinions on the domain. However, to have a complete formal vocabulary for the domain, they need to be linked and unified into a multiviewpoint model while having the subjective viewpoint statements marked and distinguished from the objectively true statements. In this study, we propose and implement a two-phase methodology for multiviewpoint ontology construction by nonexpert users. The proposed methodology was implemented for the domain of the effect of diet on health. A large-scale crowdsourcing experiment was conducted with about 750 ontological statements to determine whether each of these statements is objectively true, viewpoint, or erroneous. Typically, in crowdsourcing experiments the workers are asked for their personal opinions on the given subject. However, in our case their ability to objectively assess others' opinions was examined as well. Our results show substantially higher accuracy in classification for the objective assessment approach compared to the results based on personal opinions.
    Date
    16.11.2017 13:29:37
  10. Cui, H.: Competency evaluation of plant character ontologies against domain literature (2010) 0.02
    Abstract
    Specimen identification keys are still the most commonly created tools used by systematic biologists to access biodiversity information. Creating identification keys requires analyzing and synthesizing large amounts of information from specimens and their descriptions and is a very labor-intensive and time-consuming activity. Automating the generation of identification keys from text descriptions becomes a highly attractive text mining application in the biodiversity domain. Fine-grained semantic annotation of morphological descriptions of organisms is a necessary first step in generating keys from text. Machine-readable ontologies are needed in this process because most biological characters are only implied (i.e., not stated) in descriptions. The immediate question to ask is: how well do existing ontologies support semantic annotation and automated key generation? With the intention of either selecting an existing ontology or developing a unified ontology based on existing ones, this paper evaluates the coverage, semantic consistency, and inter-ontology agreement of a biodiversity character ontology and three plant glossaries that may be turned into ontologies. The coverage and semantic consistency of the ontology/glossaries are checked against the authoritative domain literature, namely, Flora of North America and Flora of China. The evaluation results suggest that more work is needed to improve the coverage and interoperability of the ontology/glossaries. More concepts need to be added to the ontology/glossaries and careful work is needed to improve the semantic consistency. The method used in this paper to evaluate the ontology/glossaries can be used to propose new candidate concepts from the domain literature and suggest appropriate definitions.
    Date
    1. 6.2010 9:55:22
  11. Barsalou, L.W.: Frames, concepts, and conceptual fields (1992) 0.02
    Abstract
    In this chapter I propose that frames provide the fundamental representation of knowledge in human cognition. In the first section, I raise problems with the feature list representations often found in theories of knowledge, and I sketch the solutions that frames provide to them. In the second section, I examine the three fundamental concepts of frames: attribute-value sets, structural invariants, and constraints. Because frames also represents the attributes, values, structural invariants, and constraints within a frame, the mechanism that constructs frames builds them recursively. The frame theory I propose borrows heavily from previous frame theories, although its collection of representational components is somewhat unique. Furthermore, frame theorists generally assume that frames are rigid configurations of independent attributes, whereas I propose that frames are dynamic relational structures whose form is flexible and context dependent. In the third section, I illustrate how frames support a wide variety of representational tasks central to conceptual processing in natural and artificial intelligence. Frames can represent exemplars and propositions, prototypes and membership, subordinates and taxonomies. Frames can also represent conceptual combinations, event sequences, rules, and plans. In the fourth section, I show how frames define the extent of conceptual fields and how they provide a powerful productive mechanism for generating specific concepts within a field.
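    Barsalou's frames (entry 11) are recursive attribute-value structures with constraints between attributes. As a rough data-structure analogy only, not a faithful rendering of the theory, such a frame could be sketched as:
      # Hypothetical sketch: a frame whose attribute values may themselves be
      # frames (recursion) and whose constraints are predicates over attributes.
      from dataclasses import dataclass, field

      @dataclass
      class Frame:
          concept: str
          attributes: dict = field(default_factory=dict)   # attribute -> value or Frame
          constraints: list = field(default_factory=list)  # callables over attributes

          def satisfied(self):
              return all(rule(self.attributes) for rule in self.constraints)

      engine = Frame("engine", {"fuel": "petrol", "cylinders": 4})
      car = Frame("car", {"engine": engine, "transmission": "manual"},
                  constraints=[lambda a: a["engine"].attributes["cylinders"] >= 1])
      print(car.satisfied())   # True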
  12. Hanke, P.; Mandl, T.; Womser-Hacker, C.: ¬Ein "Virtuelles Bibliotheksregal" für die Informationswissenschaft als Anwendungsfall semantischer Heterogenität (2002) 0.01
    Footnote
    Cf. also: http://www.uni-hildesheim.de/~mandl/Forschung/MyShelf/virtbibVortrag.PDF.
  13. Lee, J.; Min, J.-K.; Oh, A.; Chung, C.-W.: Effective ranking and search techniques for Web resources considering semantic relationships (2014) 0.01
    Abstract
    On the Semantic Web, the types of resources and the semantic relationships between resources are defined in an ontology. By using that information, the accuracy of information retrieval can be improved. In this paper, we present effective ranking and search techniques considering the semantic relationships in an ontology. Our technique retrieves the top-k resources that are most relevant to the query keywords through the semantic relationships. To do this, we propose a weighting measure for the semantic relationship. Based on this measure, we propose a novel ranking method which considers the number of meaningful semantic relationships between a resource and keywords as well as the coverage and discriminating power of keywords. In order to improve the efficiency of the search, we prune the unnecessary search space using the length and weight thresholds of the semantic relationship path. In addition, we exploit the Threshold Algorithm based on an extended inverted index to answer top-k results efficiently. The experimental results using real data sets demonstrate that our retrieval method using the semantic information generates accurate results efficiently compared to traditional methods.
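    Entry 13 prunes its search space using length and weight thresholds on semantic-relationship paths. The sketch below shows only that pruning pattern with invented thresholds and a plain adjacency-list graph; it is not the paper's indexing or Threshold-Algorithm machinery, and it assumes an acyclic neighbourhood (a visited set would be needed for general graphs).
      # Hypothetical sketch: expand relationship paths outward from a matched
      # resource, multiplying edge weights, and prune by length and weight.
      def weighted_reachable(graph, start, max_len=3, min_weight=0.1):
          """graph: {node: [(neighbour, edge_weight), ...]}; yields (node, path_weight)."""
          frontier = [(start, 1.0, 0)]
          while frontier:
              node, weight, length = frontier.pop()
              yield node, weight
              if length >= max_len:                 # length-threshold pruning
                  continue
              for nxt, w in graph.get(node, []):
                  if weight * w >= min_weight:      # weight-threshold pruning
                      frontier.append((nxt, weight * w, length + 1))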
  14. Pankowski, T.: Ontological databases with faceted queries (2022) 0.01
    Abstract
    The success of the use of ontology-based systems depends on efficient and user-friendly methods of formulating queries against the ontology. We propose a method to query a class of ontologies, called facet ontologies (fac-ontologies), using a faceted human-oriented approach. A fac-ontology has two important features: (a) a hierarchical view of it can be defined as a nested facet over this ontology, and the view can be used as a faceted interface to create queries and to explore the ontology; (b) the ontology can be converted into an ontological database, the ABox of which is stored in a database, and the faceted queries are evaluated against this database. We show that the proposed faceted interface makes it possible to formulate queries that are semantically equivalent to SROIQ^Fac, a limited version of the SROIQ description logic. The TBox of a fac-ontology is divided into a set of rules defining intensional predicates and a set of constraint rules to be satisfied by the database. We identify a class of so-called reflexive weak cycles in a set of constraint rules and propose a method to deal with them in the chase procedure. The considerations are illustrated with solutions implemented in the DAFO system (data access based on faceted queries over ontologies).
  15. Jimeno-Yepes, A.; Berlanga Llavori, R.; Rebholz-Schuhmann, D.: Ontology refinement for improved information retrieval (2010) 0.01
    Abstract
    Ontologies are frequently used in information retrieval; their main applications are query expansion, the semantic indexing of documents, and the organization of search results. Ontologies provide lexical items, allow conceptual normalization, and provide different types of relations. However, the optimization of an ontology for performing information retrieval tasks is still unclear. In this paper, we use an ontology query model to analyze the usefulness of ontologies in effectively performing document searches. Moreover, we propose an algorithm to refine ontologies for information retrieval tasks, with preliminary positive results.
  16. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze ways of reducing it to match the specifics of a particular user task. Such reduction aims to simplify knowledge processing without losing significant information. We propose methods for generating task thesauri from a domain ontology, where a task thesaurus contains the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus, and semantic similarity estimates are used to determine the significance of a concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
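    Entry 16 builds a task thesaurus by keeping only the ontology concepts that are sufficiently similar to the user task. The fragment below is a greedy, threshold-based stand-in for the combinatorial optimization the abstract refers to; the similarity function and threshold are placeholders.
      # Hypothetical sketch: keep ontology concepts whose best similarity to any
      # task term clears a threshold, ranked by that similarity.
      def task_thesaurus(concepts, task_terms, similarity, threshold=0.5):
          scored = [(max(similarity(c, t) for t in task_terms), c) for c in concepts]
          return [c for score, c in sorted(scored, reverse=True) if score >= threshold]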
  17. Ehrig, M.; Studer, R.: Wissensvernetzung durch Ontologien (2006) 0.01
    Abstract
    In computer science, ontologies are formal models of an application domain that support communication between human and/or machine actors and thereby facilitate the exchange and sharing of knowledge within organizations. Using ontologies for the structured representation of knowledge has therefore become increasingly widespread in recent years; thousands of ontologies already exist worldwide. To enable interoperability between software agents or web services built on top of them, the semantic integration of the ontologies is an indispensable precondition. As is easy to see, purely manual creation of the mappings is no longer readily feasible beyond a certain size, complexity, and rate of change of the ontologies; automatic or semi-automatic technologies must support the user in this task. The integration problem has occupied research and industry for many years, for example in the area of database integration. What is new, however, is the possibility of including the complex semantic information available in ontologies. For ontology integration, this chapter introduces a general six-step process based on these semantic structures. Extensions deal with efficiency and with optimal user involvement in this process. In addition, two applications are presented in which this process has been implemented successfully, and a concluding summary addresses current trends. Since the approaches can in principle be transferred to any schema that contains a semantic basis, the scope of this research extends far beyond pure ontology applications.
  18. Harbig, D.; Schneider, R.: Ontology Learning im Rahmen von MyShelf (2006) 0.01
    Footnote
    Cf. also: http://www.uni-hildesheim.de/~mandl/Forschung/MyShelf/MyShelf.htm.
  19. Stuckenschmidt, H.: Ontologien : Konzepte, Technologien und Anwendungen (2009) 0.01
    Abstract
    Ontologies have received a great deal of attention through the recent developments of the Semantic Web, since technologies are now available that make it possible to use ontologies in information systems. Starting from the basic concepts and ideas of ontologies, which originate in philosophy and linguistics, the book presents the current state of the art in supporting technologies from Semantic Web research and points out promising application areas.
  20. Fonseca, F.: ¬The double role of ontologies in information science research (2007) 0.01
    Abstract
    In philosophy, Ontology is the basic description of things in the world. In information science, an ontology refers to an engineering artifact, constituted by a specific vocabulary used to describe a certain reality. Ontologies have been proposed for validating both conceptual models and conceptual schemas; however, these roles are quite dissimilar. In this article, we show that ontologies can be better understood if we classify the different uses of the term as it appears in the literature. First, we explain Ontology (upper case O) as used in Philosophy. Then, we propose a differentiation between ontologies of information systems and ontologies for information systems. All three concepts have an important role in information science. We clarify the different meanings and uses of Ontology and ontologies through a comparison of research by Wand and Weber and by Guarino in ontology-driven information systems. The contributions of this article are twofold: (a) It provides a better understanding of what ontologies are, and (b) it explains the double role of ontologies in information science research.

Languages

  • e 117
  • d 25
  • f 1
  • pt 1
  • sp 1

Types

  • a 109
  • el 39
  • m 9
  • x 9
  • s 3
  • n 1
  • r 1
