Search (9 results, page 1 of 1)

  • theme_ss:"Wissensrepräsentation"
  • year_i:[2020 TO 2030}
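The two filters above restrict the result set to the theme "Wissensrepräsentation" and to publication years from 2020 (inclusive, "[") up to but not including 2030 (exclusive, "}"). As a minimal sketch of the equivalent query against a Solr backend, assuming pysolr and a hypothetical core URL (the page itself does not name its search engine):

```python
import pysolr

# Hypothetical Solr core URL; the actual backend of this catalog is not stated on the page.
solr = pysolr.Solr("http://localhost:8983/solr/catalog", timeout=10)

results = solr.search(
    "*:*",
    fq=[
        'theme_ss:"Wissensrepräsentation"',  # facet filter on the theme field
        "year_i:[2020 TO 2030}",             # inclusive lower bound, exclusive upper bound
    ],
    rows=10,
)
print(results.hits)  # 9 for the result set shown here
```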
  1. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.01
    0.010582941 = product of:
      0.02645735 = sum of:
        0.019456914 = weight(_text_:management in 179) [ClassicSimilarity], result of:
          0.019456914 = score(doc=179,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.14896142 = fieldWeight in 179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=179)
        0.007000436 = product of:
          0.021001307 = sum of:
            0.021001307 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
              0.021001307 = score(doc=179,freq=2.0), product of:
                0.13570201 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038751747 = queryNorm
                0.15476047 = fieldWeight in 179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=179)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 72(2020) no.4, S.671-685
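The indented breakdown under each hit is Lucene's ClassicSimilarity "explain" output: every matching query term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, and coord() scales partial sums by the fraction of query clauses that matched. A minimal Python sketch that reproduces the top-level score of the first hit (queryNorm and fieldNorm are copied from the explain output rather than recomputed):

```python
from math import log, sqrt

# Constants taken from the explain output for doc 179 above.
MAX_DOCS   = 44218
QUERY_NORM = 0.038751747
FIELD_NORM = 0.03125  # fieldNorm(doc=179)

def idf(doc_freq, max_docs=MAX_DOCS):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + log(max_docs / (doc_freq + 1))

def term_weight(term_freq, doc_freq, field_norm, query_norm=QUERY_NORM):
    query_weight = idf(doc_freq) * query_norm                    # idf * queryNorm
    field_weight = sqrt(term_freq) * idf(doc_freq) * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

w_management = term_weight(2.0, 4130, FIELD_NORM)        # ~0.019456914
w_22 = term_weight(2.0, 3622, FIELD_NORM) * (1.0 / 3.0)  # inner coord(1/3) -> ~0.007000436
score = (w_management + w_22) * (2.0 / 5.0)              # outer coord(2/5) -> ~0.010582941
print(score)
```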
  2. Sinha, P.K.; Dutta, B.: A systematic analysis of flood ontologies : a parametric approach (2020) 0.00
    0.0048642284 = product of:
      0.02432114 = sum of:
        0.02432114 = weight(_text_:management in 5758) [ClassicSimilarity], result of:
          0.02432114 = score(doc=5758,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.18620178 = fieldWeight in 5758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5758)
      0.2 = coord(1/5)
    
    Abstract
    The article identifies the core literature available on flood ontologies and presents a review of these ontologies from various perspectives, such as their purpose, type, design methodologies, ontologies (re)used, and their focus on specific flood disaster phases. The study was conducted in two stages: i) literature identification, where the systematic literature review methodology was employed; and ii) ontological review, where the parametric approach was applied. The study resulted in a set of fourteen papers discussing the flood ontology (FO). The ontological review revealed that most of the flood ontologies were task ontologies, formal, modular, and used the Web Ontology Language (OWL) for their representation. The most (re)used ontologies were SWEET, SSN, Time, and Space. METHONTOLOGY was the preferred design methodology, and for evaluation, application-based or data-based approaches were preferred. The majority of the ontologies were built around the response phase of the disaster. The unavailability of the full ontologies somewhat restricted the current study, as the structural ontology metrics are missing. Still, the scientific community and the developers of flood disaster management systems can refer to this work to see what is available in the literature on flood ontology and on the other major domains essential to building the FO.
  3. Fagundes, P.B.; Freund, G.P.; Vital, L.P.; Monteiro de Barros, C.; Macedo, D.D.J.de: Taxonomias, ontologias e tesauros : possibilidades de contribuição para o processo de Engenharia de Requisitos (2020) 0.00
    0.0048642284 = product of:
      0.02432114 = sum of:
        0.02432114 = weight(_text_:management in 5828) [ClassicSimilarity], result of:
          0.02432114 = score(doc=5828,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.18620178 = fieldWeight in 5828, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5828)
      0.2 = coord(1/5)
    
    Abstract
    Some of the fundamental activities of the software development process are related to the discipline of Requirements Engineering, whose objective is the discovery, analysis, documentation and verification of the requirements that will be part of the system. Requirements are the conditions or capabilities that software must have or perform to meet the users' needs. The present study is being developed to propose a model of cooperation between Information Science and Requirements Engineering. It aims to present the results of an analysis of the possibilities of using the knowledge organization systems taxonomies, thesauri and ontologies during the Requirements Engineering activities: design, elicitation, elaboration, negotiation, specification, validation and requirements management. From the results obtained it was possible to identify in which stage of the Requirements Engineering process each type of knowledge organization system could be used. We expect that this study puts in evidence the need for new research and proposals to strengthen the exchange between Information Science, as a science that has information as its object of study, and Requirements Engineering, which has in information the raw material to identify the informational needs of software users.
  4. Favato Barcelos, P.P.; Sales, T.P.; Fumagalli, M.; Guizzardi, G.; Valle Sousa, I.; Fonseca, C.M.; Romanenko, E.; Kritz, J.: A FAIR model catalog for ontology-driven conceptual modeling research (2022) 0.00
    0.0048642284 = product of:
      0.02432114 = sum of:
        0.02432114 = weight(_text_:management in 756) [ClassicSimilarity], result of:
          0.02432114 = score(doc=756,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.18620178 = fieldWeight in 756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=756)
      0.2 = coord(1/5)
    
    Abstract
    Conceptual models are artifacts representing conceptualizations of particular domains. Hence, multi-domain model catalogs serve as empirical sources of knowledge and insights about specific domains, about the use of a modeling language's constructs, and about the patterns and anti-patterns recurrent in the models of that language across different domains. However, to support domain and language learning, model reuse, knowledge discovery for humans, and reliable automated processing and analysis by machines, these catalogs must be built following generally accepted quality requirements for scientific data management. In particular, all scientific (meta)data, including models, should be created according to the FAIR principles (Findability, Accessibility, Interoperability, and Reusability). In this paper, we report on the construction of a FAIR model catalog for Ontology-Driven Conceptual Modeling research, a trending paradigm lying at the intersection of conceptual modeling and ontology engineering, in which the Unified Foundational Ontology (UFO) and OntoUML emerged among the most adopted technologies. In this initial release, the catalog includes over a hundred models, developed in a variety of contexts and domains. The paper also discusses the research implications of such a resource for (ontology-driven) conceptual modeling.
  5. Silva, S.E.; Reis, L.P.; Fernandes, J.M.; Sester Pereira, A.D.: A multi-layer framework for semantic modeling (2020) 0.00
    0.0038913828 = product of:
      0.019456914 = sum of:
        0.019456914 = weight(_text_:management in 5712) [ClassicSimilarity], result of:
          0.019456914 = score(doc=5712,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.14896142 = fieldWeight in 5712, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=5712)
      0.2 = coord(1/5)
    
    Abstract
    Purpose The purpose of this paper is to introduce a multi-level framework for semantic modeling (MFSM) based on four signification levels: objects, classes of entities, instances and domains. In addition, four fundamental propositions of the signification process underpin these levels, namely classification, decomposition, instantiation and contextualization. Design/methodology/approach The deductive approach guided the design of this modeling framework. The authors empirically validated the MFSM in two ways. First, the authors identified the signification processes used in articles that deal with semantic modeling. The authors then applied the MFSM to model the semantic context of the literature about lean manufacturing, a field of management science. Findings The MFSM presents a highly consistent approach to the signification process, integrates the semantic modeling literature into a new and comprehensive view, and permits the modeling of any semantic context, thus facilitating the development of knowledge organization systems based on semantic search. Research limitations/implications The use of the MFSM is manual and thus requires a considerable effort from the team that decides to model a semantic context. In this paper the modeling was carried out by specialists; in the future it should also be applied by lay users. Practical implications The MFSM opens up avenues for a new form of document classification, for the development of tools based on semantic search, and for investigating how users conduct their searches. Social implications The MFSM can be used to model archives semantically in public or private settings. In the future, it can be incorporated into search engines for more efficient user searches. Originality/value The MFSM provides a new and comprehensive approach to the elementary levels and activities in the process of signification. In addition, this new framework presents a new way to model any context semantically by classifying its objects.
  6. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.00
    0.0034049598 = product of:
      0.017024798 = sum of:
        0.017024798 = weight(_text_:management in 53) [ClassicSimilarity], result of:
          0.017024798 = score(doc=53,freq=2.0), product of:
            0.13061713 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.038751747 = queryNorm
            0.13034125 = fieldWeight in 53, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
      0.2 = coord(1/5)
    
    Content
    Community action on all ontologies (quality, FAIRness, conformity): Archivo is extensible and allows contributions, giving consumers a central place to encode their requirements. We envision fostering adherence to standards and strengthening incentives for publishers to build a better (FAIRer) web of ontologies. 1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia's CTO D. Kontokostas) enables easy testing of ontologies. Archivo offers free SHACL continuous-integration testing for ontologies. Anyone can implement their SHACL tests and add them to the SHACL library on GitHub. We believe that there are many synergies, i.e. SHACL tests for your ontology are helpful for others as well. 2. We are looking for ontology experts to join DBpedia and discuss further validation (e.g. stars) to increase the FAIRness and quality of ontologies. We are forming a steering committee and also a PC for the upcoming Vocarnival at SEMANTiCS 2021. Please message hellmann@informatik.uni-leipzig.de if you would like to join. We would like to extend the Archivo platform with relevant visualisations, tests, editing aids, mapping management tools and quality checks.
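As an illustration of the kind of SHACL check described above, here is a minimal sketch using rdflib and pySHACL; the ontology URL and the shapes file name are hypothetical, and this does not reproduce Archivo's actual CI pipeline:

```python
from rdflib import Graph
from pyshacl import validate

# Hypothetical ontology and shapes graph; substitute your own files or URLs.
ontology = Graph().parse("https://example.org/my-ontology.ttl", format="turtle")
shapes = Graph().parse("my-shacl-tests.ttl", format="turtle")

# Run SHACL validation of the ontology against the shapes graph.
conforms, report_graph, report_text = validate(ontology, shacl_graph=shapes, inference="rdfs")
print("conforms:", conforms)
print(report_text)
```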
  7. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.00
    0.0028001745 = product of:
      0.014000872 = sum of:
        0.014000872 = product of:
          0.042002615 = sum of:
            0.042002615 = weight(_text_:22 in 318) [ClassicSimilarity], result of:
              0.042002615 = score(doc=318,freq=2.0), product of:
                0.13570201 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038751747 = queryNorm
                0.30952093 = fieldWeight in 318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=318)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    22. 5.2021 12:43:05
  8. Pepper, S.; Arnaud, P.J.L.: Absolutely PHAB : toward a general model of associative relations (2020) 0.00
    0.0017659955 = product of:
      0.008829977 = sum of:
        0.008829977 = product of:
          0.02648993 = sum of:
            0.02648993 = weight(_text_:29 in 103) [ClassicSimilarity], result of:
              0.02648993 = score(doc=103,freq=2.0), product of:
                0.13631654 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038751747 = queryNorm
                0.19432661 = fieldWeight in 103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=103)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Abstract
    There have been many attempts at classifying the semantic modification relations (R) of N + N compounds, but this work has not led to the acceptance of a definitive scheme, so that devising a reusable classification is a worthwhile aim. The scope of this undertaking is extended to other binominal lexemes, i.e. units that contain two thing-morphemes without explicitly stating R, like prepositional units, N + relational adjective units, etc. The 25-relation taxonomy of Bourque (2014) was tested against over 15,000 binominal lexemes from 106 languages and extended to a 29-relation scheme ("Bourque2") through the introduction of two new reversible relations. Bourque2 is then mapped onto Hatcher's (1960) four-relation scheme (extended by the addition of a fifth relation, similarity, as "Hatcher2"). This results in a two-tier system usable at different degrees of granularity. On account of its semantic proximity to compounding, metonymy is then taken into account, following Janda's (2011) suggestion that it plays a role in word formation; Peirsman and Geeraerts' (2006) inventory of 23 metonymic patterns is mapped onto Bourque2, confirming the identity of metonymic and binominal modification relations. Finally, Blank's (2003) and Koch's (2001) work on lexical semantics justifies the addition to the scheme of a third, superordinate level which comprises the three Aristotelian principles of similarity, contiguity and contrast.
  9. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.00
    0.001750109 = product of:
      0.008750545 = sum of:
        0.008750545 = product of:
          0.026251635 = sum of:
            0.026251635 = weight(_text_:22 in 106) [ClassicSimilarity], result of:
              0.026251635 = score(doc=106,freq=2.0), product of:
                0.13570201 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038751747 = queryNorm
                0.19345059 = fieldWeight in 106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=106)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    22. 1.2021 14:24:32