Search (49 results, page 1 of 3)

  • year_i:[2020 TO 2030}
  • theme_ss:"Wissensrepräsentation"
  1. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.06
    0.061151218 = product of:
      0.091726825 = sum of:
        0.021102862 = weight(_text_:of in 179) [ClassicSimilarity], result of:
          0.021102862 = score(doc=179,freq=28.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.25858206 = fieldWeight in 179, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=179)
        0.070623964 = sum of:
          0.04234075 = weight(_text_:science in 179) [ClassicSimilarity], result of:
            0.04234075 = score(doc=179,freq=14.0), product of:
              0.13747036 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.05218836 = queryNorm
              0.30799913 = fieldWeight in 179, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
          0.028283209 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
            0.028283209 = score(doc=179,freq=2.0), product of:
              0.18275474 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05218836 = queryNorm
              0.15476047 = fieldWeight in 179, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper is based on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach to creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science: it makes qualitative research more transparent and enhances the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers from interviewees, which need to be balanced; second, the approach takes more time due to interview planning and analysis. Practical implications The long-term implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties at several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design, mainly using interviews, for creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology on the experts' knowledge.
    Date
    20. 1.2015 18:30:22
    Footnote
    Contribution to a special issue: Showcasing Doctoral Research in Information Science.
    Source
    Aslib journal of information management. 72(2020) no.4, S.671-685
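    Note: The score trees shown with each result are standard Lucene ClassicSimilarity "explain" output. Each matching term clause contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm and tf = sqrt(termFreq); the clause sum is then scaled by the coord factor. As an illustration only (plain Python, not part of the catalogue record; the constants are simply copied from the tree for result 1), the following sketch recomputes that entry's score of 0.061151218.
      import math

      # Sketch (not part of the record): recompute the ClassicSimilarity score
      # shown in the explain tree for result 1 (doc=179). Constants are copied
      # verbatim from that tree; only the arithmetic is redone here.

      QUERY_NORM = 0.05218836      # queryNorm, shared by all clauses
      COORD = 2.0 / 3.0            # coord(2/3): 2 of 3 query clauses matched

      def clause_score(freq, idf, field_norm):
          """One term clause: queryWeight * fieldWeight."""
          tf = math.sqrt(freq)                 # tf(freq) = sqrt(termFreq)
          query_weight = idf * QUERY_NORM      # idf * queryNorm
          field_weight = tf * idf * field_norm # tf * idf * fieldNorm
          return query_weight * field_weight

      s_of      = clause_score(28.0, 1.5637573, 0.03125)  # weight(_text_:of ...)
      s_science = clause_score(14.0, 2.6341193, 0.03125)  # weight(_text_:science ...)
      s_22      = clause_score( 2.0, 3.5018296, 0.03125)  # weight(_text_:22 ...)

      total = COORD * (s_of + (s_science + s_22))
      print(round(total, 9))   # ~0.061151218, matching the displayed score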
  2. Meng, K.; Ba, Z.; Ma, Y.; Li, G.: ¬A network coupling approach to detecting hierarchical linkages between science and technology (2024) 0.03
    0.031955566 = product of:
      0.047933348 = sum of:
        0.023928396 = weight(_text_:of in 1205) [ClassicSimilarity], result of:
          0.023928396 = score(doc=1205,freq=16.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.2932045 = fieldWeight in 1205, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1205)
        0.02400495 = product of:
          0.0480099 = sum of:
            0.0480099 = weight(_text_:science in 1205) [ClassicSimilarity], result of:
              0.0480099 = score(doc=1205,freq=8.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.34923816 = fieldWeight in 1205, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1205)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Detecting science-technology hierarchical linkages is beneficial for understanding deep interactions between science and technology (S&T). Previous studies have mainly focused on linear linkages between S&T but ignored their structural linkages. In this paper, we propose a network coupling approach to inspect hierarchical interactions of S&T by integrating their knowledge linkages and structural linkages. S&T knowledge networks are first enhanced with bidirectional encoder representation from transformers (BERT) knowledge alignment, and then their hierarchical structures are identified based on K-core decomposition. Hierarchical coupling preferences and strengths of the S&T networks over time are further calculated based on similarities of coupling nodes' degree distribution and similarities of coupling edges' weight distribution. Extensive experimental results indicate that our approach is feasible and robust in identifying the coupling hierarchy with superior performance compared to other isomorphism and dissimilarity algorithms. Our research extends the mindset of S&T linkage measurement by identifying patterns and paths of the interaction of S&T hierarchical knowledge.
    Source
    Journal of the Association for Information Science and Technology. 75(2024) no.2, S.167-187
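    Note: The abstract names K-core decomposition as the step that exposes the hierarchical structure of the S&T networks. Purely as an illustration of that standard graph operation (not the authors' code; the toy graph, node labels and the use of the networkx library are assumptions), a minimal sketch:
      import networkx as nx

      # Toy undirected knowledge network; labels are invented for illustration.
      G = nx.Graph()
      G.add_edges_from([
          ("deep_learning", "neural_network"), ("deep_learning", "speech"),
          ("deep_learning", "chip_design"),    ("neural_network", "speech"),
          ("neural_network", "chip_design"),   ("speech", "chip_design"),
          ("chip_design", "sensor"),           # pendant node -> low core number
      ])

      # core_number(n) = largest k such that n still belongs to the k-core.
      cores = nx.core_number(G)
      print(cores)   # {'deep_learning': 3, ..., 'sensor': 1}

      # Grouping nodes by core number yields a periphery-to-core hierarchy,
      # the layering over which coupling preferences can then be computed.
      for k in sorted(set(cores.values())):
          print(f"{k}-shell:", sorted(n for n, c in cores.items() if c == k))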
  3. Broughton, V.: Science and knowledge organization : an editorial (2021) 0.03
    0.031489722 = product of:
      0.04723458 = sum of:
        0.029910497 = weight(_text_:of in 593) [ClassicSimilarity], result of:
          0.029910497 = score(doc=593,freq=36.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.36650562 = fieldWeight in 593, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=593)
        0.01732408 = product of:
          0.03464816 = sum of:
            0.03464816 = weight(_text_:science in 593) [ClassicSimilarity], result of:
              0.03464816 = score(doc=593,freq=6.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.25204095 = fieldWeight in 593, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=593)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Editorial for a special issue on 'Science and knowledge organization', with longer overviews of important concepts in knowledge organization.
  4. Oliveira Machado, L.M.; Almeida, M.B.; Souza, R.R.: What researchers are currently saying about ontologies : a review of recent Web of Science articles (2020) 0.03
    0.030349312 = product of:
      0.045523968 = sum of:
        0.028199887 = weight(_text_:of in 5881) [ClassicSimilarity], result of:
          0.028199887 = score(doc=5881,freq=32.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.34554482 = fieldWeight in 5881, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5881)
        0.01732408 = product of:
          0.03464816 = sum of:
            0.03464816 = weight(_text_:science in 5881) [ClassicSimilarity], result of:
              0.03464816 = score(doc=5881,freq=6.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.25204095 = fieldWeight in 5881, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5881)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Traditionally connected to philosophy, the term ontology is increasingly related to information systems areas. Some researchers consider the approaches of the two disciplinary contexts to be completely different. Others consider that, although different, they should talk to each other, as both seek to answer similar questions. Given the extensive literature on this topic, we intend to contribute to the understanding of how the term ontology is used in current research and which references support this use. An exploratory study was developed with a mixed methodology and a sample of articles published in 2018 collected from the Web of Science. The results show the current prevalence of computer science in studies related to ontology, and also of Gruber's view of ontology as a kind of conceptualization, a dominant view in that field. Some researchers, particularly in the field of biomedicine, do not adhere to this dominant view but to another one that seems closer to ontological study in the philosophical context. The term ontology, in the context of information systems, appears to be consolidating with a meaning different from the original, presenting traces of a process of "metaphorization" in the transfer of the term between the two fields of study.
  5. Fagundes, P.B.; Freund, G.P.; Vital, L.P.; Monteiro de Barros, C.; Macedo, D.D.J.de: Taxonomias, ontologias e tesauros : possibilidades de contribuição para o processo de Engenharia de Requisitos (2020) 0.03
    0.029135108 = product of:
      0.043702662 = sum of:
        0.02637858 = weight(_text_:of in 5828) [ClassicSimilarity], result of:
          0.02637858 = score(doc=5828,freq=28.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.32322758 = fieldWeight in 5828, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5828)
        0.01732408 = product of:
          0.03464816 = sum of:
            0.03464816 = weight(_text_:science in 5828) [ClassicSimilarity], result of:
              0.03464816 = score(doc=5828,freq=6.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.25204095 = fieldWeight in 5828, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5828)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Some of the fundamental activities of the software development process are related to the discipline of Requirements Engineering, whose objective is the discovery, analysis, documentation and verification of the requirements that will be part of the system. Requirements are the conditions or capabilities that software must have or perform to meet the users' needs. The present study is being developed to propose a model of cooperation between Information Science and Requirements Engineering. It aims to present the results of an analysis of the possibilities of using knowledge organization systems (taxonomies, thesauri and ontologies) during the activities of Requirements Engineering: design, survey, elaboration, negotiation, specification, validation and requirements management. From the results obtained, it was possible to identify in which stage of the Requirements Engineering process each type of knowledge organization system could be used. We expect this study to highlight the need for new research and proposals to strengthen the exchange between Information Science, as a science that has information as its object of study, and Requirements Engineering, which uses information as the raw material to identify the informational needs of software users.
    Footnote
    English translation of the title: Taxonomies, ontologies and thesauri: possibilities of contribution to the process of Requirements Engineering.
  6. Buente, W.; Baybayan, C.K.; Hajibayova, L.; McCorkhill, M.; Panchyshyn, R.: Exploring the renaissance of wayfinding and voyaging through the lens of knowledge representation, organization and discovery systems (2020) 0.03
    0.028808555 = product of:
      0.04321283 = sum of:
        0.029067779 = weight(_text_:of in 105) [ClassicSimilarity], result of:
          0.029067779 = score(doc=105,freq=34.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.35617945 = fieldWeight in 105, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=105)
        0.014145052 = product of:
          0.028290104 = sum of:
            0.028290104 = weight(_text_:science in 105) [ClassicSimilarity], result of:
              0.028290104 = score(doc=105,freq=4.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.20579056 = fieldWeight in 105, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=105)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The purpose of this paper is to provide a critical analysis, from an ethical perspective, of how the concept of indigenous wayfinding and voyaging is mapped in knowledge representation, organization and discovery systems. Design/methodology/approach In this study, the Dewey Decimal Classification, the Library of Congress Subject Headings, the Library of Congress Classification system and the Web of Science citation database were methodically examined to determine how these systems represent and facilitate the discovery of indigenous knowledge of wayfinding and voyaging. Findings The analysis revealed that there was no dedicated representation of the indigenous practices of wayfinding and voyaging in the major knowledge representation, organization and discovery systems. Because indigenous practice is scattered across various, often very broad and unrelated classes, coherence in the record is disrupted, resulting in misrepresentation of these indigenous concepts. Originality/value This study contributes to a relatively limited research literature on the representation and organization of indigenous knowledge of wayfinding and voyaging. It is a call to foster a better understanding of and appreciation for the rich knowledge that indigenous cultures provide for an enlightened society.
    Object
    Web of Science
    Source
    Journal of documentation. 76(2020) no.6, S.1279-1293
  7. Zhou, H.; Guns, R.; Engels, T.C.E.: Towards indicating interdisciplinarity : characterizing interdisciplinary knowledge flow (2023) 0.03
    0.028235972 = product of:
      0.042353958 = sum of:
        0.025379896 = weight(_text_:of in 1072) [ClassicSimilarity], result of:
          0.025379896 = score(doc=1072,freq=18.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.3109903 = fieldWeight in 1072, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1072)
        0.016974064 = product of:
          0.033948127 = sum of:
            0.033948127 = weight(_text_:science in 1072) [ClassicSimilarity], result of:
              0.033948127 = score(doc=1072,freq=4.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.24694869 = fieldWeight in 1072, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1072)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This study contributes to the recent discussions on indicating interdisciplinarity, that is, going beyond catch-all metrics of interdisciplinarity. We propose a contextual framework to improve the granularity and usability of the existing methodology for interdisciplinary knowledge flow (IKF) in which scientific disciplines import and export knowledge from/to other disciplines. To characterize the knowledge exchange between disciplines, we recognize three aspects of IKF under this framework, namely broadness, intensity, and homogeneity. We show how to utilize them to uncover different forms of interdisciplinarity, especially between disciplines with the largest volume of IKF. We apply this framework in two use cases, one at the level of disciplines and one at the level of journals, to show how it can offer a more holistic and detailed viewpoint on the interdisciplinarity of scientific entities than aggregated and context-unaware indicators. We further compare our proposed framework, an indicating process, with established indicators and discuss how such information tools on interdisciplinarity can assist science policy practices such as performance-based research funding systems and panel-based peer review processes.
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.11, S.1325-1340
  8. Jiang, Y.-C.; Li, H.: ¬The theoretical basis and basic principles of knowledge network construction in digital library (2023) 0.03
    0.027539104 = product of:
      0.041308656 = sum of:
        0.029306183 = weight(_text_:of in 1130) [ClassicSimilarity], result of:
          0.029306183 = score(doc=1130,freq=24.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.3591007 = fieldWeight in 1130, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1130)
        0.012002475 = product of:
          0.02400495 = sum of:
            0.02400495 = weight(_text_:science in 1130) [ClassicSimilarity], result of:
              0.02400495 = score(doc=1130,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.17461908 = fieldWeight in 1130, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1130)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Knowledge network construction (KNC) is the essence of dynamic knowledge architecture and helps to illustrate ubiquitous knowledge services in digital libraries (DLs). The authors explore its theoretical foundations and basic rules to elucidate the basic principles of KNC in DLs. The results indicate that universal connection, the small-world phenomenon, relevance theory, and the unity and continuity of the development of science have been the production tool, architectural aim and scientific foundation of KNC in DLs. By analyzing both the characteristics of KNC based on different types of knowledge linking and the relationships between different forms of knowledge and the appropriate ways of linking them, the basic principle of KNC is summarized as follows: let each form of knowledge linking demonstrate its own strengths and each form of knowledge manifestation serve its intended purpose in practice, so that the subjective knowledge network and the objective knowledge network are organically combined. This lays a solid theoretical foundation and provides an action guide for DLs to construct knowledge networks.
  9. Amirhosseini, M.; Avidan, G.: ¬A dialectic perspective on the evolution of thesauri and ontologies (2021) 0.03
    0.027154785 = product of:
      0.040732175 = sum of:
        0.030730115 = weight(_text_:of in 592) [ClassicSimilarity], result of:
          0.030730115 = score(doc=592,freq=38.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.37654874 = fieldWeight in 592, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=592)
        0.010002062 = product of:
          0.020004123 = sum of:
            0.020004123 = weight(_text_:science in 592) [ClassicSimilarity], result of:
              0.020004123 = score(doc=592,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.1455159 = fieldWeight in 592, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=592)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The purpose of this article is to identify the most important factors and features in the evolution of thesauri and ontologies through a dialectic model. This model relies on a dialectic process or idea which could be discovered via a dialectic method. This method has focused on identifying the logical relationship between a beginning proposition, or an idea called a thesis, a negation of that idea called the antithesis, and the result of the conflict between the two ideas, called a synthesis. During the creation of knowledge organization systems (KOSs), the identification of logical relations between different ideas has been made possible through the consideration and use of the most influential methods and tools, such as dictionaries, Roget's Thesaurus, thesauri, micro-, macro- and metathesauri, ontologies, and lower, middle and upper level ontologies. The analysis process has adapted a historical methodology, more specifically a dialectic method and documentary method as the reasoning process. This supports our arguments and synthesizes a method for the analysis of research results. Confirmed by the research results, the principle of unity has been shown to be the most important factor in the development and evolution of the structure of knowledge organization systems and their types. There are various types of unity when considering the analysis of logical relations. These include the principle of unity of alphabetical order, unity of science, semantic unity, structural unity and conceptual unity. The results have clearly demonstrated a movement from plurality to unity in the assembling of the complex structure of knowledge organization systems to increase information and knowledge storage and retrieval performance.
  10. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.03
    0.02670734 = product of:
      0.04006101 = sum of:
        0.028058534 = weight(_text_:of in 5365) [ClassicSimilarity], result of:
          0.028058534 = score(doc=5365,freq=22.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.34381276 = fieldWeight in 5365, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
        0.012002475 = product of:
          0.02400495 = sum of:
            0.02400495 = weight(_text_:science in 5365) [ClassicSimilarity], result of:
              0.02400495 = score(doc=5365,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.17461908 = fieldWeight in 5365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5365)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the set of available works shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  11. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.03
    0.026375443 = product of:
      0.039563164 = sum of:
        0.011279955 = weight(_text_:of in 318) [ClassicSimilarity], result of:
          0.011279955 = score(doc=318,freq=2.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.13821793 = fieldWeight in 318, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=318)
        0.028283209 = product of:
          0.056566417 = sum of:
            0.056566417 = weight(_text_:22 in 318) [ClassicSimilarity], result of:
              0.056566417 = score(doc=318,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.30952093 = fieldWeight in 318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=318)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In the "Knowledge Representation" session at ISI 2021, moderated by Jürgen Reischer (University of Regensburg), three projects were presented in which knowledge representation is implemented with RDF. The domains are pleasingly diverse, but the common thread is the intention to improve access to research data: - Japanese Visual Media Graph - Taxonomy of Digital Research Activities in the Humanities - Research data in the conceptual model of FRBR
    Date
    22. 5.2021 12:43:05
  12. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.03
    0.025884613 = product of:
      0.03882692 = sum of:
        0.021149913 = weight(_text_:of in 106) [ClassicSimilarity], result of:
          0.021149913 = score(doc=106,freq=18.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.25915858 = fieldWeight in 106, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=106)
        0.017677005 = product of:
          0.03535401 = sum of:
            0.03535401 = weight(_text_:22 in 106) [ClassicSimilarity], result of:
              0.03535401 = score(doc=106,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.19345059 = fieldWeight in 106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=106)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions. Design/methodology/approach This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
    Source
    Journal of documentation. 77(2021) no.1, S.93-105
  13. Ghosh, S.S.; Das, S.; Chatterjee, S.K.: Human-centric faceted approach for ontology construction (2020) 0.03
    0.025018109 = product of:
      0.037527163 = sum of:
        0.02338211 = weight(_text_:of in 5731) [ClassicSimilarity], result of:
          0.02338211 = score(doc=5731,freq=22.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.28651062 = fieldWeight in 5731, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5731)
        0.014145052 = product of:
          0.028290104 = sum of:
            0.028290104 = weight(_text_:science in 5731) [ClassicSimilarity], result of:
              0.028290104 = score(doc=5731,freq=4.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.20579056 = fieldWeight in 5731, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5731)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this paper, we propose an ontology building method, called human-centric faceted approach for ontology construction (HCFOC). HCFOC uses the human-centric approach, improvised with the idea of selective dissemination of information (SDI), to deal with context. Further, this ontology construction process makes use of facet analysis and an analytico-synthetic classification approach. This novel fusion contributes to the originality of HCFOC and distinguishes it from other existing ontology construction methodologies. Based on HCFOC, an ontology of the tourism domain has been designed using the Protégé-5.5.0 ontology editor. The HCFOC methodology has provided the necessary flexibility, extensibility and robustness, and has facilitated the capturing of background knowledge. It models the tourism ontology in such a way that it is able to deal with the context of a tourist's information need with precision. This is evident from the result that more than 90% of the users' queries were successfully met. The use of domain knowledge and of techniques from both library and information science and computer science has helped in the realization of the desired purpose of this ontology construction process. It is envisaged that HCFOC will have implications for ontology developers. The demonstrated tourism ontology can support any tourism information retrieval system.
  14. Guizzardi, G.; Guarino, N.: Semantics, ontology and explanation (2023) 0.02
    0.023953915 = product of:
      0.035930872 = sum of:
        0.023928396 = weight(_text_:of in 976) [ClassicSimilarity], result of:
          0.023928396 = score(doc=976,freq=16.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.2932045 = fieldWeight in 976, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=976)
        0.012002475 = product of:
          0.02400495 = sum of:
            0.02400495 = weight(_text_:science in 976) [ClassicSimilarity], result of:
              0.02400495 = score(doc=976,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.17461908 = fieldWeight in 976, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=976)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The terms 'semantics' and 'ontology' are increasingly appearing together with 'explanation', not only in the scientific literature, but also in organizational communication. However, all of these terms are also being significantly overloaded. In this paper, we discuss their strong relation under particular interpretations. Specifically, we discuss a notion of explanation termed ontological unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications) by revealing their ontological commitment in terms of their assumed truthmakers, i.e., the entities in one's ontology that make the propositions in those descriptions true. To illustrate this idea, we employ an ontological theory of relations to explain (by revealing the hidden semantics of) a very simple symbolic model encoded in the standard modeling language UML. We also discuss the essential role played by ontology-driven conceptual models (resulting from this form of explanation processes) in properly supporting semantic interoperability tasks. Finally, we discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.
  15. Almeida, M.B.; Felipe, E.R.; Barcelos, R.: Toward a document-centered ontological theory for information architecture in corporations (2020) 0.02
    0.023529977 = product of:
      0.035294965 = sum of:
        0.021149913 = weight(_text_:of in 8) [ClassicSimilarity], result of:
          0.021149913 = score(doc=8,freq=18.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.25915858 = fieldWeight in 8, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=8)
        0.014145052 = product of:
          0.028290104 = sum of:
            0.028290104 = weight(_text_:science in 8) [ClassicSimilarity], result of:
              0.028290104 = score(doc=8,freq=4.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.20579056 = fieldWeight in 8, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=8)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The beginning of the 21st century attested to the first movements toward information architecture (IA), originating from the field of library and information science (LIS). IA is acknowledged as an important meta-discipline concerned with the design, implementation, and maintenance of digital information spaces. Despite the relevance of IA, there is little research about the subject within LIS, and still less if one considers initiatives for creating a theory for IA. In this article, we provide a theory for IA and describe the resources needed to create it through ontological models. We also choose the "document" as the key entity for such theory, contemplating kinds of documents that not only serve to register information, but also create claims and obligations in society. To achieve our goals, we provide a background for subtheories from LIS and from Applied Ontology. As a result, we present some basic theory for IA in the form of a formal framework to represent corporations in which IA activities take place, acknowledging that our approach is de facto a subset of IA we call the enterprise information architecture (EIA) approach. By doing this, we highlight the effects that documents cause within corporations in the scope of EIA.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.11, S.1308-1326
  16. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.02
    0.021286696 = product of:
      0.031930044 = sum of:
        0.023928396 = weight(_text_:of in 1004) [ClassicSimilarity], result of:
          0.023928396 = score(doc=1004,freq=36.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.2932045 = fieldWeight in 1004, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1004)
        0.00800165 = product of:
          0.0160033 = sum of:
            0.0160033 = weight(_text_:science in 1004) [ClassicSimilarity], result of:
              0.0160033 = score(doc=1004,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.11641272 = fieldWeight in 1004, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1004)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
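    Note: To make the structure of such a hyponymy-based entry concrete, here is a hypothetical sketch (field names and example values are assumptions for illustration, not taken from HypoLexicon) of the six kinds of information the abstract lists: hypernym, hyponyms with levels, definitions, conceptual categories, hyponymy subtypes and hyponymic contexts.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Hyponym:
          term: str
          level: int                   # depth below the parent concept (hypernym)
          subtype: str                 # hyponymy subtype (label invented here)
          definition: str = ""
          contexts: List[str] = field(default_factory=list)  # hyponymic contexts

      @dataclass
      class HyponymyEntry:
          hypernym: str                # parent concept
          domain: str                  # e.g. Geology
          category: str                # conceptual category
          definition: str
          hyponyms: List[Hyponym] = field(default_factory=list)

      entry = HyponymyEntry(
          hypernym="rock",
          domain="Geology",
          category="entity",
          definition="naturally occurring solid aggregate of minerals",
          hyponyms=[Hyponym("igneous rock", level=1, subtype="result-based",
                            contexts=["Igneous rock forms when magma cools."])],
      )
      print(entry.hypernym, "->", [h.term for h in entry.hyponyms])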
  17. Silva, S.E.; Reis, L.P.; Fernandes, J.M.; Sester Pereira, A.D.: ¬A multi-layer framework for semantic modeling (2020) 0.02
    0.020374373 = product of:
      0.030561559 = sum of:
        0.02255991 = weight(_text_:of in 5712) [ClassicSimilarity], result of:
          0.02255991 = score(doc=5712,freq=32.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.27643585 = fieldWeight in 5712, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=5712)
        0.00800165 = product of:
          0.0160033 = sum of:
            0.0160033 = weight(_text_:science in 5712) [ClassicSimilarity], result of:
              0.0160033 = score(doc=5712,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.11641272 = fieldWeight in 5712, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5712)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose The purpose of this paper is to introduce a multi-level framework for semantic modeling (MFSM) based on four signification levels: objects, classes of entities, instances and domains. In addition, four fundamental propositions of the signification process underpin these levels, namely, classification, decomposition, instantiation and contextualization. Design/methodology/approach The deductive approach guided the design of this modeling framework. The authors empirically validated the MFSM in two ways. First, the authors identified the signification processes used in articles that deal with semantic modeling. The authors then applied the MFSM to model the semantic context of the literature about lean manufacturing, a field of management science. Findings The MFSM presents a highly consistent approach to the signification process, integrates the semantic modeling literature into a new and comprehensive view, and permits the modeling of any semantic context, thus facilitating the development of knowledge organization systems based on semantic search. Research limitations/implications The use of the MFSM is manual and thus requires considerable effort from the team that decides to model a semantic context. In this paper, the modeling was carried out by specialists; in the future it should be applied with lay users. Practical implications The MFSM opens up avenues for a new form of document classification, for the development of tools based on semantic search, and for investigating how users conduct their searches. Social implications The MFSM can be used to model archives semantically in public or private settings. In the future, it can be incorporated into search engines to make users' searches more efficient. Originality/value The MFSM provides a new and comprehensive approach to the elementary levels and activities in the process of signification. In addition, this new framework presents a new way to semantically model any context by classifying its objects.
    Source
    Journal of documentation. 76(2020) no.2, S.502-530
  18. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    0.0131599475 = product of:
      0.03947984 = sum of:
        0.03947984 = weight(_text_:of in 572) [ClassicSimilarity], result of:
          0.03947984 = score(doc=572,freq=32.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.48376274 = fieldWeight in 572, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
      0.33333334 = coord(1/3)
    
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze ways of reducing it in line with the specifics of a particular user task. Such reduction aims to simplify knowledge processing without loss of significant information. We propose methods for generating task thesauri based on a domain ontology, containing the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates are used to determine the significance of concepts for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
  19. Hudon, M.: Facet (2020) 0.01
    0.010911653 = product of:
      0.032734957 = sum of:
        0.032734957 = weight(_text_:of in 5899) [ClassicSimilarity], result of:
          0.032734957 = score(doc=5899,freq=22.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.40111488 = fieldWeight in 5899, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5899)
      0.33333334 = coord(1/3)
    
    Abstract
    S.R. Ranganathan is credited with the introduction of the term "facet" in the field of knowledge organization towards the middle of the twentieth century. Facets have traditionally been used to organize document collections and to express complex subjects. In the digital world, they act as filters to facilitate navigation and improve retrieval. But the popularity of the term does not mean that a definitive characterization of the concept has been established. Indeed, several conceptualizations of the facet co-exist. This article provides an overview of formal and informal definitions found in the literature of knowledge organization, followed by a discussion of four common conceptualizations of the facet: process vs product, nature vs function, object vs subject and organization vs navigation.
    Series
    Reviews of concepts in knowledge organization
  20. Simoes, G.; Machado, L.; Gnoli, C.; Souza, R.: Can an ontologically-oriented KO do without concepts? (2020) 0.01
    0.010551432 = product of:
      0.031654295 = sum of:
        0.031654295 = weight(_text_:of in 4964) [ClassicSimilarity], result of:
          0.031654295 = score(doc=4964,freq=28.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.38787308 = fieldWeight in 4964, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4964)
      0.33333334 = coord(1/3)
    
    Abstract
    The ontological approach in the development of KOS is an attempt to overcome the limitations of the traditional epistemological approach. Questions arise about the representation and organization of ontologically-oriented KO units, such as BFO universals or ILC phenomena. The study aims to compare the ontological approaches of BFO and ILC using a hermeneutic approach. We found that the differences between the units of the two systems are primarily due to the formal level of abstraction of BFO and the different organizations, namely the grouping of phenomena into ILC classes that represent complex compounds of entities in the BFO approach. In both systems the use of concepts is considered instrumental, although in the ILC they constitute the intersubjective component of the phenomena whereas in BFO they serve to access the entities of reality but are not part of them.
    Source
    Knowledge Organization at the Interface. Proceedings of the Sixteenth International ISKO Conference, 2020 Aalborg, Denmark. Ed.: M. Lykke et al

Languages

  • e 43
  • pt 4
  • d 1

Types

  • a 44
  • el 13
  • p 4
  • A 1
  • EL 1