Search (102 results, page 1 of 6)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.20
    Content
Cf.: https://aclanthology.org/D19-5317.pdf.
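     Note: the relevance figures beside each hit (0.20, 0.17, ...) are Lucene ClassicSimilarity (tf-idf) scores. Below is a minimal sketch of how the first hit's 0.1997 is assembled from the per-term statistics reported by the engine (two of three query clauses matched; term frequency 2, document frequency 24 in 44,218 documents, queryNorm 0.047143444, fieldNorm 0.046875). The helper function is illustrative, not part of Lucene's API:

       import math

       # Lucene ClassicSimilarity, per matching term:
       #   queryWeight = idf * queryNorm
       #   fieldWeight = sqrt(tf) * idf * fieldNorm
       #   term score  = queryWeight * fieldWeight
       def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
           tf = math.sqrt(freq)                           # 1.4142135 for freq=2
           idf = 1 + math.log(max_docs / (doc_freq + 1))  # 8.478011 here
           return (idf * query_norm) * (tf * idf * field_norm)

       s = term_score(2.0, 24, 44218, 0.047143444, 0.046875)  # ~0.22462885
       # One matched clause sits in a nested sub-query and carries coord(1/3);
       # the outer query then scales the sum by coord(2/3), 2 of 3 clauses matching.
       print(round((s / 3 + s) * 2 / 3, 7))                   # ~0.1996701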
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.17
    Content
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.13
    Content
Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.04
    Abstract
     Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of our knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  5. Almeida Campos, M.L. de; Machado Campos, M.L.; Dávila, A.M.R.; Espanha Gomes, H.; Campos, L.M.; Lira e Oliveira, L. de: Information sciences methodological aspects applied to ontology reuse tools : a study based on genomic annotations in the domain of trypanosomatides (2013) 0.03
    Abstract
     Despite the dissemination of modeling languages and tools for the representation and construction of ontologies, their underlying methodologies can still be improved. As a consequence, ontology tools can be enhanced accordingly, in order to support users through the ontology construction process. This paper proposes suggestions for improving ontology tools based on a case study within the domain of bioinformatics, applying a reuse methodology. Quantitative and qualitative analyses were carried out on a subset of 28 terms of the Gene Ontology in a semi-automatic alignment with other biomedical ontologies. As a result, a report is presented containing suggestions for enhancing ontology reuse tools, derived from the difficulties we had in reusing a set of OBO ontologies. For the reuse process, a set of steps closely related to those of Pinto and Martin's methodology was used. In each step, it was observed that the experiment would have been significantly improved if ontology manipulation tools had provided certain features. Accordingly, problematic aspects of ontology tools are presented and suggestions are made aiming at better results in ontology reuse.
    Date
    22. 2.2013 12:03:53
  6. Kiren, T.; Shoaib, M.: A novel ontology matching approach using key concepts (2016) 0.03
    Abstract
     Purpose - Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching of heterogeneous ontologies is often essential for many applications like semantic annotation, query answering or ontology integration. Some ontologies may include a large number of entities, which makes the ontology matching process very complex in terms of search space and execution time requirements. The purpose of this paper is to present a technique for finding the degree of similarity between ontologies that trims down the search space by eliminating the ontology concepts that have little likelihood of being matched. Design/methodology/approach - Algorithms are given for finding key concepts, concept matching and relationship matching. WordNet is used for solving synonym problems during the matching process. The technique is evaluated using the reference alignments between ontologies from the Ontology Alignment Evaluation Initiative benchmark, in terms of degree of similarity, Pearson's correlation coefficient and the IR measures precision, recall and F-measure. Findings - The positive correlation between the computed degree of similarity and the degree of similarity of the reference alignment, together with the computed values of precision, recall and F-measure, showed that if only the key concepts of ontologies are compared, a time- and search-space-efficient ontology matching system can be developed. Originality/value - On the basis of the present novel approach for ontology matching, it is concluded that using key concepts for ontology matching gives comparable results in reduced time and space. (An illustrative code sketch follows this entry.)
    Date
    20. 1.2015 18:30:22
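     A minimal sketch of the key-concept matching idea in entry 6, assuming NLTK's WordNet interface (requires nltk.download('wordnet')). Selecting key concepts by node degree is our illustrative assumption; the paper's own criterion and algorithms are not reproduced here:

       from nltk.corpus import wordnet as wn

       def synonyms(label):
           """All WordNet lemma names for a label, lowercased."""
           return {lemma.lower().replace('_', ' ')
                   for synset in wn.synsets(label.replace(' ', '_'))
                   for lemma in synset.lemma_names()}

       def labels_match(a, b):
           """Two concept labels match if equal or WordNet-synonymous."""
           a, b = a.lower(), b.lower()
           return a == b or b in synonyms(a) or a in synonyms(b)

       def key_concepts(graph, k=10):
           """graph: concept -> related concepts; keep the k best-connected,
           trimming the pairwise search space before matching."""
           return sorted(graph, key=lambda c: len(graph[c]), reverse=True)[:k]

       def match(onto1, onto2, k=10):
           """Candidate alignments between the key concepts of two ontologies."""
           return [(c1, c2) for c1 in key_concepts(onto1, k)
                   for c2 in key_concepts(onto2, k) if labels_match(c1, c2)]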
  7. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.03
    Abstract
     The purpose of this work is to develop an ontology-based framework for building an information retrieval system that caters to specific user queries. For creating such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid the proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility, which becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model for the domain of brain tumours. Our attempt has been to bridge library and information science and computer science, which itself involved an experimental approach. It was discovered that a faceted approach is genuinely enduring, as it helps achieve properties like navigation, exploration and faceted browsing. The computer-based brain tumour ontology supports the work of researchers in gathering information on brain tumour research and allows users across the world to intelligently access new scientific information quickly and efficiently.
    Date
    12. 3.2016 13:21:22
  8. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.03
    Abstract
     Purpose - The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions. Design/methodology/approach - This paper uses conceptual analysis methods. The study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings - Vocabularies are the cornerstone for accurately building an understanding of the meaning of data. Vocabularies provide a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value - This paper first describes the composition of vocabularies, linked data and KGs. More importantly, it innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
  9. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.02
    Abstract
     Purpose - The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach - This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science, making qualitative research more transparent and enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings - The research showed several positive outcomes due to participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers by the interviewees, which need to be balanced; second, this approach takes more time due to interview planning and analysis. Practical implications - The long-run implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value - In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design, using mainly interviews, for creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  10. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.02
    Abstract
     Purpose - The purpose of this paper is to improve conceptual-based search by incorporating structural ontological information such as concepts and relations. Generally, semantic-based information retrieval aims to identify relevant information based on the meanings of the query terms or on the context of the terms, and its performance is evaluated through the standard measures precision and recall. Higher precision means more of the retrieved documents are (meaningfully) relevant, while lower recall means less coverage of the concepts. Design/methodology/approach - In this paper, the authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index. The index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, the approach focuses on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of ontology-based search. Both tasks make use of ontological concepts and the relations existing between those concepts so as to obtain semantically more relevant search results for a given query. Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of the concepts populated in the index. A new measure called the index enhancement measure is introduced to estimate the coverage of ontological concepts being indexed. The ontology-based search was evaluated for the tourism domain with tourism documents and a tourism-specific ontology. Search results with and without query expansion are compared to estimate the efficiency of the proposed query expansion task, and the ranking is compared with the ORank system. The ontology-based search shows better recall than the other concept-based search systems: its mean average precision is 0.79 and its recall 0.65, against 0.62 and 0.51 for the ORank system and 0.56 and 0.42 for the concept-based search. Practical implications - When a concept is not present in the domain-specific ontology, it cannot be indexed. When a given query term is not available in the ontology, term-based results are retrieved. Originality/value - In addition to super- and sub-concepts, the concepts at the same level (siblings) are incorporated into the ontological index. Structural information from the ontology is used for query expansion. The ranking of the documents depends on the type of the query (single-concept queries, multiple-concept queries and concept-with-relation queries) and on the ontological relations present in the query and the documents. With this ontological structural information, the search results showed better coverage of concepts with respect to the query. (An illustrative code sketch follows this entry.)
    Date
    20. 1.2015 18:30:22
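     A rough sketch of the enhanced ontological index described in entry 10: each concept is indexed together with its super-, sub- and sibling concepts, and queries are expanded with those neighbours. The dictionary layout is an assumption made for illustration, not the authors' implementation:

       def build_index(parent_of, children_of):
           """parent_of: concept -> parent (or None); children_of: concept -> list.
           Index each concept with its super-, sub- and sibling concepts."""
           index = {}
           for concept, parent in parent_of.items():
               siblings = [c for c in children_of.get(parent, []) if c != concept]
               index[concept] = {'super': [parent] if parent else [],
                                 'sub': list(children_of.get(concept, [])),
                                 'siblings': siblings}
           return index

       def expand_query(terms, index):
           """Add ontological neighbours of each term; terms missing from the
           ontology pass through unchanged (plain term-based retrieval)."""
           expanded = set(terms)
           for term in terms:
               for neighbours in index.get(term, {}).values():
                   expanded.update(neighbours)
           return expanded

       parents = {'hotel': 'accommodation', 'hostel': 'accommodation',
                  'accommodation': None}
       children = {'accommodation': ['hotel', 'hostel']}
       print(expand_query({'hotel'}, build_index(parents, children)))
       # -> {'hotel', 'accommodation', 'hostel'}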
  11. Sure, Y.; Studer, R.: A methodology for ontology-based knowledge management (2004) 0.02
    Abstract
     Ontologies are a core element of the knowledge management architecture described in Chapter 1. In this chapter we describe a methodology for application-driven ontology development, covering the whole project lifecycle from the kick-off phase to the maintenance phase. Existing methodologies and practical ontology development experiences have in common that they start from the identification of the purpose of the ontology and the need for domain knowledge acquisition. They differ in their foci and in the steps to be taken. In our approach to the ontology development process, we integrate aspects from existing methodologies and lessons learned from practical experience (as described in Section 3.7). We put ontology development into a wider organizational context by performing an a priori feasibility study. The feasibility study is based on CommonKADS; we modified certain aspects of CommonKADS for a tight integration of the feasibility study into our methodology.
  12. Sanatjoo, A.: Development of thesaurus structure through a work-task oriented methodology 0.02
    Abstract
     Developments and changes in the field of digital information retrieval systems and in the information retrieval area, as well as technical advances, require and offer possibilities for developing the functionality of thesauri. Enriching their structure requires the development of thesaurus construction methodologies that exceed the potential of the traditional construction methods and adjust the thesaurus to the needs of specialized information environments. The present work extends the work-task oriented methodology (WOM) and involves an analysis of the domain of knowledge: the body of known domain facts, experts and paradigms. This empirical study used a mixed set of methods and developed a prototype thesaurus to evaluate the potential of WOM for constructing a more enriched thesaurus. The thesaurus was evaluated by a retrieval test in which its usability and performance were compared against a classic-type thesaurus (Agrovoc) with a conventional thesaurus structure. The results of the study indicate that WOM is useful and provides valuable inspiration to the user, whether thesaurus compiler or information searcher. The work-task oriented methodology allows the development of a thesaurus design that reflects the characteristics of the work domain.
  13. Boer, V. de; Wielemaker, J.; Gent, J. van; Hildebrand, M.; Isaac, A.; Ossenbruggen, J. van; Schreiber, G.: Supporting linked data production for cultural heritage institutes : the Amsterdam Museum case study (2012) 0.02
    Abstract
     Within the cultural heritage field, proprietary metadata and vocabularies are being transformed into public Linked Data. These efforts have mostly been at the level of large-scale aggregators such as Europeana, where the original data is abstracted to a common format and schema. Although this approach ensures a level of consistency and interoperability, the richness of the original data is lost in the process. In this paper, we present a transparent and interactive methodology for ingesting, converting and linking cultural heritage metadata into Linked Data. The methodology is designed to maintain the richness and detail of the original metadata. We introduce the XMLRDF conversion tool and describe how it is integrated in the ClioPatria semantic web toolkit. The methodology and the tools have been validated by converting the Amsterdam Museum metadata to a Linked Data version. In this way, the Amsterdam Museum became the first 'small' cultural heritage institution with a node in the Linked Data cloud.
  14. Qin, J.; Creticos, P.; Hsiao, W.Y.: Adaptive modeling of workforce domain knowledge (2006) 0.01
    Abstract
     Workforce development is a multidisciplinary domain in which policy, laws and regulations, social services, training and education, and information technology and systems are heavily involved. It is essential to have a semantic base accepted by the workforce development community for knowledge sharing and exchange. This paper describes how such a semantic base - the Workforce Open Knowledge Exchange (WOKE) Ontology - was built by using the adaptive modeling approach. The focus of this paper is to address questions such as how ontology designers should extract and model concepts obtained from different sources and which methodologies are useful along the steps of ontology development. The paper proposes "adaptive modeling" as a methodology framework and explains the methodology through examples and some lessons learned from the process of developing the WOKE ontology.
  15. Karapiperis, S.; Apostolou, D.: Consensus building in collaborative ontology engineering processes (2005) 0.01
    Abstract
     Ontology development is a time- and money-consuming as well as error-prone process; the need for an embedded mechanism that evaluates the quality and acceptance of the resulting collaborative ontology is apparent. Existing tools and methodologies lack the consensus-building mechanisms that must be employed in order for a team to cooperate and agree on the design and deployment of a shared ontology. In this paper we describe a collaborative methodology for ontology development that supports a team in reaching consensus through iterative evaluations and improvements. In every cycle of the iterative process, the structure of the collaborative ontology is revised and evolved. Finally, the process terminates when the participants have no more critiques and objections. We illustrate the methodology by creating an ontology for an airline training centre using the PROTÉGÉ software tool.
  16. Wang, Y.-H.; Jhuo, P.-S.: A semantic faceted search with rule-based inference (2009) 0.01
    Abstract
     Semantic search has become an active research area of the Semantic Web in recent years. The classification methodology plays a critical role at the beginning of the search process in filtering out irrelevant information. However, applications related to folksonomy suffer from many obstacles. This study attempts to eliminate the problems resulting from folksonomy using existing semantic technology. We also focus on how to effectively integrate heterogeneous ontologies over the Internet to preserve the integrity of domain knowledge. A faceted logic layer is abstracted in order to strengthen the category framework and organize existing available ontologies according to a series of steps based on the methodology of faceted classification and ontology construction. The results showed that our approach can facilitate the integration of inconsistent or even heterogeneous ontologies. This paper also generalizes the principles of picking appropriate facets, with which our facet browser fully complies, so that better semantic search results can be obtained.
  17. Wunner, T.; Buitelaar, P.; O'Riain, S.: Semantic, terminological and linguistic interpretation of XBRL (2010) 0.01
    Abstract
     Standardization efforts in financial reporting have led to large numbers of machine-interpretable vocabularies that attempt to model complex accounting practices in XBRL (eXtensible Business Reporting Language). Because reporting agencies do not require fine-grained semantic and terminological representations, these vocabularies cannot be easily reused. Ontology-based Information Extraction, in particular, requires much greater semantic and terminological structure, and the introduction of a linguistic structure currently absent from XBRL. In order to facilitate such reuse, we propose a three-faceted methodology that analyzes and enriches the XBRL vocabulary: (1) transform the semantic structure by analyzing the semantic relationships between terms (e.g. taxonomic, meronymic); (2) enhance the terminological structure by using several domain-specific (XBRL), domain-related (SAPTerm, etc.) and domain-independent (GoogleDefine, Wikipedia, etc.) terminologies; and (3) add linguistic structure at the term level (e.g. part-of-speech, morphology, syntactic arguments). This paper outlines a first experiment towards implementing this methodology on the International Financial Reporting Standard XBRL vocabulary.
  18. Mohr, J.W.; Bogdanov, P.: Topic models : what they are and why they matter (2013) 0.01
    Abstract
     We provide a brief, non-technical introduction to the text mining methodology known as "topic modeling." We summarize the theory and background of the method and discuss what kinds of things are found by topic models. Using a text corpus comprising the eight articles from the special issue of Poetics on the subject of topic models, we run a topic model on these articles, both as a way to introduce the methodology and as a way to help summarize some of the ways in which social and cultural scientists are using topic models. We review some of the critiques and debates over the use of the method and, finally, we link these developments back to some of the original innovations in the field of content analysis that were pioneered by Harold D. Lasswell and colleagues during and just after World War II. (An illustrative code sketch follows this entry.)
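     For readers who want to see what entry 18 describes in practice, here is a minimal topic-model run using gensim's LDA; the three-document toy corpus is invented for illustration and stands in for the Poetics articles:

       from gensim import corpora, models

       docs = ["ontology matching semantic web knowledge representation".split(),
               "topic model text mining corpus content analysis".split(),
               "knowledge graph linked data vocabulary semantic web".split()]

       dictionary = corpora.Dictionary(docs)            # word <-> id mapping
       bow = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words vectors
       lda = models.LdaModel(bow, num_topics=2, id2word=dictionary,
                             random_state=0, passes=10)
       for topic_id, terms in lda.print_topics(num_words=4):
           print(topic_id, terms)  # each topic is a weighted mix of corpus words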
  19. Amarger, F.; Chanet, J.-P.; Haemmerlé, O.; Hernandez, N.; Roussey, C.: SKOS sources transformations for ontology engineering : agronomical taxonomy use case (2014) 0.01
    Abstract
     Sources like thesauri or taxonomies are already used as input in the ontology development process, and some of them are also published as Linked Open Data (LOD) in the SKOS format. Reusing this type of source to build an ontology is not an easy task: the ontology developer has to face different syntaxes and different modelling goals. We propose in this paper a new methodology to transform several non-ontological sources into a single ontology. We take into account the redundancy of the knowledge extracted from the sources, in order to discover the consensual knowledge, and Ontology Design Patterns (ODPs), to guide the transformation process. We have evaluated our methodology by creating an ontology on wheat taxonomy from three sources: the Agrovoc thesaurus, the TaxRef taxonomy and the NCBI taxonomy. (An illustrative code sketch follows this entry.)
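     One common transformation of the kind entry 19 works with - lifting a SKOS taxonomy into ontology axioms - sketched with rdflib. The broader-to-subClassOf rule is a widely used (and debatable) pattern chosen here for illustration; the paper's actual ODP-guided rules are not reproduced, and the file names are hypothetical:

       from rdflib import Graph, RDF, RDFS, OWL
       from rdflib.namespace import SKOS

       src = Graph().parse("wheat_taxonomy_skos.ttl")  # hypothetical SKOS source
       out = Graph()
       for concept in src.subjects(RDF.type, SKOS.Concept):
           out.add((concept, RDF.type, OWL.Class))           # concept -> class
           for broader in src.objects(concept, SKOS.broader):
               out.add((concept, RDFS.subClassOf, broader))  # broader -> superclass
       out.serialize(destination="wheat_ontology.ttl", format="turtle")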
  20. Broughton, V.: Facet analysis as a fundamental theory for structuring subject organization tools (2007) 0.01
    Abstract
     The presentation will examine the potential of facet analysis as a basis for determining the status and relationships of concepts in subject-based tools using a controlled vocabulary, and the extent to which it can be used as a general theory of knowledge organization rather than merely a methodology for structuring classifications.

Languages

  • e 89
  • d 11
  • pt 2

Types

  • a 84
  • el 19
  • x 6
  • m 3
  • n 1
  • r 1
  • s 1