Search (419 results, page 1 of 21)

  • type_ss:"a"
  • theme_ss:"Wissensrepräsentation"
  1. Almeida Campos, M.L. de; Machado Campos, M.L.; Dávila, A.M.R.; Espanha Gomes, H.; Campos, L.M.; Lira e Oliveira, L. de: Information sciences methodological aspects applied to ontology reuse tools : a study based on genomic annotations in the domain of trypanosomatides (2013) 0.07
    0.07152634 = product of:
      0.10728951 = sum of:
        0.011494976 = weight(_text_:a in 635) [ClassicSimilarity], result of:
          0.011494976 = score(doc=635,freq=24.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.22065444 = fieldWeight in 635, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=635)
        0.095794536 = sum of:
          0.06518805 = weight(_text_:de in 635) [ClassicSimilarity], result of:
            0.06518805 = score(doc=635,freq=4.0), product of:
              0.19416152 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.045180224 = queryNorm
              0.33574134 = fieldWeight in 635, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=635)
          0.030606484 = weight(_text_:22 in 635) [ClassicSimilarity], result of:
            0.030606484 = score(doc=635,freq=2.0), product of:
              0.15821345 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045180224 = queryNorm
              0.19345059 = fieldWeight in 635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=635)
      0.6666667 = coord(2/3)
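    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) arithmetic. A short sketch reproduces the "de" term weight and the final document score from the figures shown; queryNorm, fieldNorm, and the other two term scores are taken as given from the output rather than recomputed.

```python
import math

# Lucene ClassicSimilarity, reproduced from the explain output for doc 635:
#   tf(freq)  = sqrt(freq)
#   idf(df)   = 1 + ln(maxDocs / (df + 1))
#   termScore = (idf * queryNorm) * (tf * idf * fieldNorm)
#   docScore  = coord * sum(termScores)

def tf(freq):
    return math.sqrt(freq)

def idf(doc_freq, max_docs=44218):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.045180224
field_norm = 0.0390625  # field length norm, quantized by Lucene

idf_de = idf(1634)                            # ≈ 4.297489
query_weight = idf_de * query_norm            # ≈ 0.19416152
field_weight = tf(4.0) * idf_de * field_norm  # ≈ 0.33574134
term_score = query_weight * field_weight      # ≈ 0.06518805

# Document score: the three term scores, times coord(2/3) because the
# query matched 2 of 3 clauses.
doc_score = (0.011494976 + term_score + 0.030606484) * (2.0 / 3.0)  # ≈ 0.07152634
```

    The low weight of the ubiquitous term "a" falls out of the same formula: with docFreq=37942 of 44218 documents, its idf is only about 1.153.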
    
    Abstract
    Despite the dissemination of modeling languages and tools for the representation and construction of ontologies, their underlying methodologies can still be improved. As a consequence, ontology tools can be enhanced accordingly, in order to support users throughout the ontology construction process. This paper proposes suggestions for improving ontology tools based on a case study in the domain of bioinformatics that applied a reuse methodology. Quantitative and qualitative analyses were carried out on a subset of 28 Gene Ontology terms in a semi-automatic alignment with other biomedical ontologies. As a result, a report is presented containing suggestions for enhancing ontology reuse tools, derived from the difficulties we encountered in reusing a set of OBO ontologies. For the reuse process, a set of steps closely related to those of Pinto and Martin's methodology was used. At each step, it was observed that the experiment would have been significantly improved if the ontology manipulation tools had provided certain features. Accordingly, problematic aspects of ontology tools are presented and suggestions are made aimed at achieving better results in ontology reuse.
    Date
    22. 2.2013 12:03:53
    Type
    a
  2. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.05
    0.054965924 = product of:
      0.082448885 = sum of:
        0.005747488 = weight(_text_:a in 633) [ClassicSimilarity], result of:
          0.005747488 = score(doc=633,freq=6.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.11032722 = fieldWeight in 633, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=633)
        0.076701395 = sum of:
          0.046094913 = weight(_text_:de in 633) [ClassicSimilarity], result of:
            0.046094913 = score(doc=633,freq=2.0), product of:
              0.19416152 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.045180224 = queryNorm
              0.23740499 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
          0.030606484 = weight(_text_:22 in 633) [ClassicSimilarity], result of:
            0.030606484 = score(doc=633,freq=2.0), product of:
              0.15821345 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045180224 = queryNorm
              0.19345059 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
      0.6666667 = coord(2/3)
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
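    The merge step the abstract relies on can be pictured with a minimal sketch (ours, not the MHTX implementation): in topic maps, topics that share a subject identifier collapse into a single topic whose names and occurrences are unioned, which is what interrelates the two independently built faceted structures. All identifiers below are hypothetical.

```python
# Toy topic-map merge: topics with the same subject identifier are collapsed,
# unioning their names and occurrences. Each map is modelled as
# {subject_identifier: {"names": set, "occurrences": set}}.

def merge_topic_maps(map_a, map_b):
    merged = {}
    for tm in (map_a, map_b):
        for sid, topic in tm.items():
            slot = merged.setdefault(sid, {"names": set(), "occurrences": set()})
            slot["names"] |= topic["names"]
            slot["occurrences"] |= topic["occurrences"]
    return merged

# Two faceted structures, built from different dissertations, that happen to
# describe the same concept under the same (hypothetical) subject identifier:
faceted_a = {"kos:facet-analysis": {"names": {"Facet analysis"},
                                    "occurrences": {"dissertation-1#ch2"}}}
faceted_b = {"kos:facet-analysis": {"names": {"Análise facetada"},
                                    "occurrences": {"dissertation-2#ch4"}}}
merged = merge_topic_maps(faceted_a, faceted_b)  # one topic, two names, two occurrences
```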
    Date
    22. 2.2013 11:39:23
    Type
    a
  3. Oliveira Lima, G.A.B. de: Hypertext model - HTXM : a model for hypertext organization of documents (2008) 0.05
    0.046070676 = product of:
      0.06910601 = sum of:
        0.008128175 = weight(_text_:a in 2504) [ClassicSimilarity], result of:
          0.008128175 = score(doc=2504,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15602624 = fieldWeight in 2504, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2504)
        0.06097784 = product of:
          0.12195568 = sum of:
            0.12195568 = weight(_text_:de in 2504) [ClassicSimilarity], result of:
              0.12195568 = score(doc=2504,freq=14.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.6281146 = fieldWeight in 2504, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2504)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    This article reports applied research on the construction and implementation of a semantically structured conceptual prototype to help in the organization and representation of human knowledge in hypertextual systems, based on four references: the Facet Analysis Theory (FAT), the Conceptual Map Theory, the semantic structure of hypertext links, and the technical guidelines of the Associação Brasileira de Normas Técnicas (ABNT). This prototype, called Modelo Hipertextual para Organização de Documentos (MHTX) - Model for Hypertext Organization of Documents (HTXM) - is formed by a semantic structure called Conceptual Map (CM) and an Expanded Summary (ES), the latter based on the summary of a selected doctoral thesis for which access points were designed. In the future, this prototype may be used to implement a digital library called BTDECI - UFMG (Biblioteca de Teses e Dissertações do Programa de Pós-Graduação da Escola de Ciência da Informação da UFMG - Library of Theses and Dissertations of the Graduate Program of the School of Information Science of the Universidade Federal de Minas Gerais).
    Type
    a
  4. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.05
    0.045450564 = product of:
      0.068175845 = sum of:
        0.05381863 = product of:
          0.21527451 = sum of:
            0.21527451 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.21527451 = score(doc=400,freq=2.0), product of:
                0.38303843 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045180224 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.25 = coord(1/4)
        0.0143572185 = weight(_text_:a in 400) [ClassicSimilarity], result of:
          0.0143572185 = score(doc=400,freq=26.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.27559727 = fieldWeight in 400, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.6666667 = coord(2/3)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are direct parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
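    The acyclicity constraint in the abstract's last sentence can be sketched as follows. This is a hedged illustration, not the authors' algorithm, and the concept names are invented: a candidate parent-to-child link is accepted only if it does not close a cycle, i.e. only if the proposed child is not already an ancestor of the proposed parent.

```python
def ancestors(hierarchy, node):
    """All ancestors of node; hierarchy maps child -> set of parents."""
    seen, stack = set(), [node]
    while stack:
        for parent in hierarchy.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def grow(candidates):
    """Accept parent->child links one by one, keeping the hierarchy acyclic."""
    hierarchy = {}  # child -> set of parents
    for parent, child in candidates:
        # Reject self-links and links whose child already sits above the parent.
        if parent == child or child in ancestors(hierarchy, parent):
            continue
        hierarchy.setdefault(child, set()).add(parent)
    return hierarchy

links = [("classification", "svm"), ("svm", "libsvm"),
         ("libsvm", "classification")]  # the last link would close a cycle
h = grow(links)  # keeps the first two links, rejects the third
```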
    Content
    Vgl.: https://aclanthology.org/D19-5317.pdf.
    Type
    a
  5. Almeida, M.B.: Ontologia em Ciência da Informação: Teoria e Método (1ª ed., Vol. 1). CRV. http://dx.doi.org/10.24824/978655578679.8; Tecnologia e Aplicações (1ª ed., Vol. 2). CRV. http://dx.doi.org/10.24824/978652511477.4; Curso completo com teoria e exercícios (1ª ed., volume suplementar para professores). CRV. [Review] (2022) 0.04
    0.042185232 = product of:
      0.06327785 = sum of:
        0.007963953 = weight(_text_:a in 631) [ClassicSimilarity], result of:
          0.007963953 = score(doc=631,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 631, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=631)
        0.055313893 = product of:
          0.110627785 = sum of:
            0.110627785 = weight(_text_:de in 631) [ClassicSimilarity], result of:
              0.110627785 = score(doc=631,freq=8.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.56977195 = fieldWeight in 631, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=631)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Over the past 30 years, ontologies have been one of the most fertile grounds for research in Knowledge Organization. It is a complex and controversial topic, owing to the difficulty of defining the concept itself and to the appropriations that different scientific fields have made of it. Originating in philosophy, ontology is today a territory shared by Computer Science, most notably Data Science, and by Information Science, particularly Knowledge Organization. Few authors in this area have not written on the subject, whether addressing its conceptual boundaries or discussing the relationship of ontologies to other knowledge organization systems such as taxonomies, thesauri, and classifications.
    Source
    Boletim do Arquivo da Universidade de Coimbra 35(2022) no.1, S.191-198
    Type
    a
  6. Maculan, B.C.M. dos; Lima, G.A. de; Oliveira, E.D.: Conversion methods from thesaurus to ontologies : a review (2016) 0.04
    0.04184603 = product of:
      0.06276904 = sum of:
        0.010618603 = weight(_text_:a in 4695) [ClassicSimilarity], result of:
          0.010618603 = score(doc=4695,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20383182 = fieldWeight in 4695, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4695)
        0.05215044 = product of:
          0.10430088 = sum of:
            0.10430088 = weight(_text_:de in 4695) [ClassicSimilarity], result of:
              0.10430088 = score(doc=4695,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.53718615 = fieldWeight in 4695, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4695)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Knowledge organization for a sustainable world: challenges and perspectives for cultural, scientific, and technological sharing in a connected society : proceedings of the Fourteenth International ISKO Conference 27-29 September 2016, Rio de Janeiro, Brazil / organized by International Society for Knowledge Organization (ISKO), ISKO-Brazil, São Paulo State University ; edited by José Augusto Chaves Guimarães, Suellen Oliveira Milani, Vera Dodebei
    Type
    a
  7. Campos, L.M.: Princípios teóricos usados na elaboração de ontologias e sua influência na recuperação da informação com uso de inferências [Theoretical principles used in ontology building and their influence on information retrieval using inferences] (2021) 0.04
    0.0365829 = product of:
      0.054874346 = sum of:
        0.008779433 = weight(_text_:a in 826) [ClassicSimilarity], result of:
          0.008779433 = score(doc=826,freq=14.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.1685276 = fieldWeight in 826, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=826)
        0.046094913 = product of:
          0.092189826 = sum of:
            0.092189826 = weight(_text_:de in 826) [ClassicSimilarity], result of:
              0.092189826 = score(doc=826,freq=8.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.47480997 = fieldWeight in 826, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=826)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Several instruments of knowledge organization reflect different possibilities for information retrieval. In this context, ontologies have a distinctive potential because they allow knowledge discovery, which can be used to retrieve information in a more flexible way. However, this potential can be affected by the theoretical principles adopted in ontology building. The aim of this paper is to discuss, in an introductory way, how a (non-exhaustive) set of theoretical principles can influence one aspect of ontologies: their use to obtain inferences. In this context, the role of Ingetraut Dahlberg's Theory of Concept is discussed. The methodology is exploratory and qualitative; from the technical point of view, it uses bibliographic research supported by the content analysis method. A small application example is also presented as a proof of concept. As results, the paper discusses the influence of conceptual definition on subsumption inferences, suggests theoretical contributions that should guide the formation of the hierarchical structures on which such inferences rest, and provides examples of how the absence of such contributions can lead to erroneous inferences.
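    The point about erroneous inferences can be made concrete with a toy sketch (ours, not the paper's): transitive is-a reasoning propagates any modelling error, for instance a partitive link recorded as is-a. All terms below are illustrative.

```python
def isa_closure(isa):
    """isa maps child -> parent; returns all inferred (sub, super) pairs."""
    inferred = set()
    for start in isa:
        node = start
        while node in isa:
            node = isa[node]
            inferred.add((start, node))
    return inferred

good = {"trypanosomatid": "parasite", "parasite": "organism"}
inferred = isa_closure(good)  # sound: a trypanosomatid is an organism

bad = dict(good)
bad.update({"wheel": "car", "car": "vehicle"})  # "part-of" mistaken for "is-a"
wrong = isa_closure(bad)  # now also derives that a wheel is a vehicle
```

    The reasoner itself is fine; it is the conceptual definition behind the hierarchy that determines whether the subsumption inference is sound.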
    Source
    Ponto de Acesso, Salvador. 15(2021) no.3, S.344-380
    Type
    a
  8. Fagundes, P.B.; Freund, G.P.; Vital, L.P.; Monteiro de Barros, C.; Macedo, D.D.J.de: Taxonomias, ontologias e tesauros : possibilidades de contribuição para o processo de Engenharia de Requisitos (2020) 0.03
    0.0345616 = product of:
      0.0518424 = sum of:
        0.005747488 = weight(_text_:a in 5828) [ClassicSimilarity], result of:
          0.005747488 = score(doc=5828,freq=6.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.11032722 = fieldWeight in 5828, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5828)
        0.046094913 = product of:
          0.092189826 = sum of:
            0.092189826 = weight(_text_:de in 5828) [ClassicSimilarity], result of:
              0.092189826 = score(doc=5828,freq=8.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.47480997 = fieldWeight in 5828, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5828)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Some of the fundamental activities of the software development process are related to the discipline of Requirements Engineering, whose objective is the discovery, analysis, documentation, and verification of the requirements that will be part of the system. Requirements are the conditions or capabilities that software must have or perform to meet the users' needs. The present study is being developed to propose a model of cooperation between Information Science and Requirements Engineering. It aims to present the results of an analysis of the possibilities of using the knowledge organization systems taxonomies, thesauri, and ontologies during the Requirements Engineering activities of design, survey, elaboration, negotiation, specification, validation, and requirements management. From the results obtained it was possible to identify in which stage of the Requirements Engineering process each type of knowledge organization system could be used. We expect that this study highlights the need for new research and proposals to strengthen the exchange between Information Science, as a science that takes information as its object of study, and Requirements Engineering, which finds in information the raw material for identifying the informational needs of software users.
    Type
    a
  9. Almeida Campos, M.L. de; Espanha Gomes, H.: Ontology : several theories on the representation of knowledge domains (2017) 0.03
    0.030714607 = product of:
      0.04607191 = sum of:
        0.009195981 = weight(_text_:a in 3839) [ClassicSimilarity], result of:
          0.009195981 = score(doc=3839,freq=6.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17652355 = fieldWeight in 3839, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=3839)
        0.03687593 = product of:
          0.07375186 = sum of:
            0.07375186 = weight(_text_:de in 3839) [ClassicSimilarity], result of:
              0.07375186 = score(doc=3839,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.37984797 = fieldWeight in 3839, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3839)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontologies may be considered knowledge organization systems, since their elements interact in a consistent conceptual structure. Theories of the representation of knowledge domains produce models comprising definitions, representation units, and the semantic relationships that are essential for structuring such domain models. A realist viewpoint is proposed to enhance domain ontologies, as definitions provide structure that reveals not only ontological commitment but also the relationships between representation units.
    Type
    a
  10. Hepp, M.; Bruijn, J. de: GenTax : a generic methodology for deriving OWL and RDF-S ontologies from hierarchical classifications, thesauri, and inconsistent taxonomies (2007) 0.03
    0.02906642 = product of:
      0.043599628 = sum of:
        0.011005601 = weight(_text_:a in 4692) [ClassicSimilarity], result of:
          0.011005601 = score(doc=4692,freq=22.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.21126054 = fieldWeight in 4692, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4692)
        0.032594025 = product of:
          0.06518805 = sum of:
            0.06518805 = weight(_text_:de in 4692) [ClassicSimilarity], result of:
              0.06518805 = score(doc=4692,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.33574134 = fieldWeight in 4692, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4692)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Hierarchical classifications, thesauri, and informal taxonomies are likely the most valuable input for creating, at reasonable cost, non-toy ontologies in many domains. They contain a readily available wealth of category definitions plus a hierarchy, and they reflect some degree of community consensus. However, their transformation into useful ontologies is not as straightforward as it appears. In this paper, we (1) show that it often depends on the context of usage whether an informal hierarchical categorization schema is a classification, a thesaurus, or a taxonomy; (2) present a novel methodology for automatically deriving consistent RDF-S and OWL ontologies from such schemas; and (3) demonstrate the usefulness of this approach by transforming the two e-business categorization standards eCl@ss and UNSPSC into ontologies that overcome the limitations of earlier prototypes. Our approach allows for the script-based creation of meaningful ontology classes for a particular context while preserving the original hierarchy, even if the latter is not a real subsumption hierarchy in that context. Human intervention in the transformation is limited to checking some conceptual properties and identifying frequent anomalies, and the only input required is an informal categorization plus a notion of the target context. In particular, the approach does not require instance data, as ontology learning approaches usually would.
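    The hierarchy-preserving derivation the abstract describes can be roughly sketched as follows. This is our simplified reading, not the GenTax algorithm itself, and all prefixes and category names are invented: each informal category yields a taxonomy class mirroring the original hierarchy plus a context-specific class anchored to it, so the original hierarchy survives even where it is not a true subsumption hierarchy.

```python
# Two-layer derivation sketch: for every category C, emit a taxonomy class
# tax:C (preserving the source hierarchy verbatim) and a context class ctx:C
# ("things properly named C in the target context"), subsumed by tax:C.

def derive_classes(hierarchy):
    """hierarchy: {child_category: parent_category}; returns RDF-S triples."""
    triples = []
    for cat in set(hierarchy) | set(hierarchy.values()):
        triples.append((f"tax:{cat}", "rdf:type", "rdfs:Class"))
        triples.append((f"ctx:{cat}", "rdf:type", "rdfs:Class"))
        # The context class hangs off its own taxonomy class, not the parent's,
        # so no subsumption claim is made between context classes.
        triples.append((f"ctx:{cat}", "rdfs:subClassOf", f"tax:{cat}"))
    for child, parent in hierarchy.items():
        triples.append((f"tax:{child}", "rdfs:subClassOf", f"tax:{parent}"))
    return triples

triples = derive_classes({"Laptops": "Computers", "Accessories": "Computers"})
```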
    Content
    Vgl. unter: http://www.heppnetz.de/files/hepp-de-bruijn-ESWC2007-gentax-CRC.pdf.
    Type
    a
  11. Collard, J.; Paiva, V. de; Fong, B.; Subrahmanian, E.: Extracting mathematical concepts from text (2022) 0.03
    0.02843627 = product of:
      0.042654403 = sum of:
        0.010387965 = weight(_text_:a in 668) [ClassicSimilarity], result of:
          0.010387965 = score(doc=668,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.19940455 = fieldWeight in 668, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=668)
        0.032266438 = product of:
          0.064532876 = sum of:
            0.064532876 = weight(_text_:de in 668) [ClassicSimilarity], result of:
              0.064532876 = score(doc=668,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.33236697 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We investigate different systems for extracting mathematical entities from English texts in the mathematical field of category theory as a first step for constructing a mathematical knowledge graph. We consider four different term extractors and compare their results. This small experiment showcases some of the issues with the construction and evaluation of terms extracted from noisy domain text. We also make available two open corpora in research mathematics, in particular in category theory: a small corpus of 755 abstracts from the journal TAC (3188 sentences), and a larger corpus from the nLab community wiki (15,000 sentences).
    Type
    a
  12. Zeh, T.: Ontologien in den Informationswissenschaften (2011) 0.03
    0.028123489 = product of:
      0.042185232 = sum of:
        0.0053093014 = weight(_text_:a in 4981) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=4981,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 4981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4981)
        0.03687593 = product of:
          0.07375186 = sum of:
            0.07375186 = weight(_text_:de in 4981) [ClassicSimilarity], result of:
              0.07375186 = score(doc=4981,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.37984797 = fieldWeight in 4981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4981)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Vgl.: http://www.ontoport.org/alfresco -> "My Home"; http://www.xing.com/t/de/ONTOLOGIEN
    Type
    a
  13. Finke, M.; Risch, J.: "Match Me If You Can" : Sammeln und semantisches Aufbereiten von Fußballdaten (2017) 0.03
    0.028123489 = product of:
      0.042185232 = sum of:
        0.0053093014 = weight(_text_:a in 3723) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=3723,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 3723, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=3723)
        0.03687593 = product of:
          0.07375186 = sum of:
            0.07375186 = weight(_text_:de in 3723) [ClassicSimilarity], result of:
              0.07375186 = score(doc=3723,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.37984797 = fieldWeight in 3723, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3723)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
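
    The score breakdown above can be reproduced by hand. A minimal sketch, assuming Lucene's ClassicSimilarity formulas (per-term score = queryWeight * fieldWeight, tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), with the input values taken from the explain tree:

    ```python
    import math

    # Inputs as reported by the explain output for term "de" in doc 3723
    query_norm = 0.045180224
    doc_freq, max_docs = 1634, 44218
    freq = 2.0            # termFreq of "de" in the field
    field_norm = 0.0625   # length normalization stored at index time

    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~4.297489
    query_weight = query_norm * idf                   # ~0.19416152 (queryWeight)
    tf = math.sqrt(freq)                              # ~1.4142135
    field_weight = tf * idf * field_norm              # ~0.37984797 (fieldWeight)
    score = query_weight * field_weight               # ~0.07375186
    ```

    The outer product-of, sum-of, and coord(2/3) factors then combine such per-term weights into the document's final score shown next to the title.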
    
    Footnote
    Cf.: www.info7.de/info7_2017-2_S-36-51.pdf.
    Type
    a
  14. Bruijn, J. de; Fensel, D.: Ontologies and their definition (2009) 0.03
    
    Abstract
    This entry introduces ontologies as a potential "silver bullet" for knowledge management, enterprise application integration, and e-commerce. Ontologies enable knowledge sharing and knowledge reuse. The degree to which an ontology is machine-understandable, its formality, is determined by the language used for the specification of the ontology. There exists a trade-off between the expressiveness of an ontology language and the modeling support it provides for the ontology developer. This entry also describes how different knowledge representation formalisms, together with the Web languages XML and RDF, have influenced the development of the Web ontology language OWL.
    Type
    a
  15. Gnoli, C.: Ontological foundations of knowledge organization : the theory of integrative levels applied to citation order (2011) 0.03
    
    Abstract
    The field of knowledge organization (KO) can be described as composed of the four distinct but connected layers of theory, systems, representation, and application. This paper focuses on the relations between KO theory and KO systems. It is acknowledged that the structure of KO systems is the product of a mixture of ontological, epistemological, and pragmatic factors, although different systems give different priorities to each factor. A more ontologically oriented approach, though not offering quick solutions for any particular group of users, will produce systems of wide and long-lasting application, as they are based on general, shareable principles. I take the case of the ontological theory of integrative levels, which has been considered a useful source for general classifications for several decades and is currently implemented in the Integrative Levels Classification system. The theory produces a sequence of main classes modelling a natural order between phenomena. This order also has interesting effects on other features of the system, such as the citation order of concepts within compounds. As facet-analytical theory has shown, it is useful that citation order follow a principle of inversion compared to the order of the same concepts in the schedules. In the light of integrative levels theory, this principle also acquires an ontological meaning: phenomena of lower level should be cited first, as they most often act as specifications of higher-level ones. This ontological principle should be complemented by consideration of the epistemological treatment of phenomena: where a lower-level phenomenon is the main theme, it can be promoted to the leading position in the compound subject heading. The integration of these principles is believed to produce optimal results in the ordering of knowledge contents.
    Type
    a
  16. De Maio, C.; Fenza, G.; Loia, V.; Senatore, S.: Hierarchical web resources retrieval by exploiting Fuzzy Formal Concept Analysis (2012) 0.02
    
    Abstract
    In recent years, knowledge structuring has assumed an important role in several real-world applications such as decision support, cooperative problem solving, e-commerce, the Semantic Web, and even planning systems. Ontologies play an important role in supporting automated processes to access information and are at the core of new strategies for the development of knowledge-based systems. Yet developing an ontology is a time-consuming task that often requires accurate domain expertise to tackle structural and logical difficulties in the definition of concepts as well as conceivable relationships. This work presents an ontology-based retrieval approach that supports data organization and visualization and provides a friendly navigation model. It exploits the fuzzy extension of Formal Concept Analysis theory to elicit conceptualizations from datasets and generate a hierarchy-based representation of the extracted knowledge. An intuitive graphical interface provides a multi-faceted view of the built ontology. Through transparent query-based retrieval, end users navigate across concepts, relations, and the population.
    Type
    a
  17. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.02
    
    Pages
    S.11-22
    Type
    a
  18. Nielsen, M.: Neural networks : AlphaGo - computers learn intuition (2018) 0.02
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
    Type
    a
  19. Deokattey, S.; Neelameghan, A.; Kumar, V.: ¬A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.02
    
    Abstract
    A method to develop a prototype domain ontology is described. The domain selected for the study is Accelerator Driven Systems, a multidisciplinary and interdisciplinary subject comprising Nuclear Physics, Nuclear and Reactor Engineering, Reactor Fuels, and Radioactive Waste Management. Since Accelerator Driven Systems is a vast topic, select areas within it were singled out for the study. Both qualitative and quantitative methods, such as content analysis, facet analysis, and clustering, were used to develop the web-based model.
    Date
    22. 7.2010 19:41:16
    Type
    a
  20. Clark, M.; Kim, Y.; Kruschwitz, U.; Song, D.; Albakour, D.; Dignum, S.; Beresi, U.C.; Fasli, M.; De Roeck, A.: Automatically structuring domain knowledge from text : an overview of current research (2012) 0.02
    
    Abstract
    This paper presents an overview of automatic methods for building domain knowledge structures (domain models) from text collections. Applications of domain models have a long history within knowledge engineering and artificial intelligence. Over the last two decades they have surfaced as a useful tool within natural language processing, information retrieval, and Semantic Web technology. Inspired by the widespread adoption of domain model structures across several research disciplines, we give an overview of the current research landscape and some of its techniques and approaches. We also discuss trade-offs between different approaches and point to some recent trends.
    Type
    a

Years

Languages

  • e 332
  • d 76
  • pt 4
  • sp 1

Types

  • el 67
  • p 1
  • x 1