Search (49 results, page 1 of 3)

  • year_i:[2020 TO 2030}
  • theme_ss:"Wissensrepräsentation"
  1. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.03
    
    Abstract
    In the "Knowledge Representation" session at ISI 2021, moderated by Jürgen Reischer (Uni Regensburg), three projects were presented in which knowledge representation is implemented with RDF. The domains are refreshingly diverse, but the common thread is the aim of improving access to research data: - Japanese Visual Media Graph - Taxonomy of Digital Research Activities in the Humanities - Research data in the conceptual model of FRBR
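    A minimal sketch of this kind of RDF modelling in Python with rdflib; the namespace, class and property names are illustrative assumptions, not the vocabulary of any of the three projects:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/jvmg-demo/")  # hypothetical namespace
      g = Graph()
      g.bind("ex", EX)

      # Describe a fictional manga work and its creator as RDF triples.
      g.add((EX.work42, RDF.type, EX.Manga))
      g.add((EX.work42, RDFS.label, Literal("Example Manga Title")))
      g.add((EX.work42, EX.creator, EX.person7))
      g.add((EX.person7, RDFS.label, Literal("Example Mangaka")))

      print(g.serialize(format="turtle"))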
    Date
    22. 5.2021 12:43:05
  2. Auer, S.; Sens, I.; Stocker, M.: Erschließung wissenschaftlicher Literatur mit dem Open Research Knowledge Graph (2020) 0.02
    
    Abstract
    The transfer of knowledge has not changed fundamentally for many hundreds of years: it is usually document-based - formerly printed on paper as a classic essay, today as a PDF. With around 2.5 million new research contributions per year, researchers drown in a flood of pseudo-digitized PDF publications. The consequence: research is seriously weakened, since many research results cannot be reproduced by others, redundancies keep growing, and the sea of publications has become impossible to survey. The TIB - Leibniz Information Centre for Science and Technology is therefore rethinking knowledge communication: instead of static PDF articles, the TIB is betting on knowledge graphs. It is working on intuitively interlinking knowledge of the most diverse forms - texts, images, graphics, audio and video files, 3D models and much more - by means of dynamic knowledge graphs. The knowledge graph is intended to represent different research ideas, approaches, methods and results in machine-readable form, so that entirely new connections between pieces of knowledge come to light and could contribute to solving global problems. The great societal challenges demand interdisciplinarity and the assembly of individual pieces of knowledge. With the knowledge graph this can succeed, and the flow of scientific knowledge can be revolutionized.
  3. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze ways of reducing it in line with the specifics of a particular user task. Such reduction aims to simplify knowledge processing without losing significant information. We propose methods for generating task thesauri from a domain ontology, containing the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates determine the significance of each concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
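    A rough sketch of the greedy flavour of this idea, under strong simplifying assumptions: concept significance is approximated by token-overlap (Jaccard) similarity to the task terms, and a fixed size budget stands in for the combinatorial constraints; the data is invented.

      def jaccard(a: set, b: set) -> float:
          return len(a & b) / len(a | b) if a | b else 0.0

      def build_task_thesaurus(ontology_concepts, task_terms, budget):
          """Pick the `budget` ontology concepts most significant for the task.

          ontology_concepts: dict mapping concept label -> set of label tokens.
          task_terms: set of tokens describing the user task.
          """
          ranked = sorted(ontology_concepts.items(),
                          key=lambda kv: jaccard(kv[1], task_terms),
                          reverse=True)
          return [label for label, _ in ranked[:budget]]

      concepts = {
          "semantic retrieval": {"semantic", "retrieval"},
          "competence analysis": {"competence", "analysis"},
          "payroll accounting": {"payroll", "accounting"},
          "information retrieval": {"information", "retrieval"},
      }
      task = {"semantic", "information", "retrieval"}
      print(build_task_thesaurus(concepts, task, budget=2))
      # -> ['semantic retrieval', 'information retrieval']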
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  4. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.01
    
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science: making qualitative research more transparent and enhancing both the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles to this approach: first, contradictory answers by the interviewees, which need to be balanced; second, the approach takes more time, due to interview planning and analysis. Practical implications The long-run implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design based mainly on interviews for creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
    Footnote
    Contribution to a special issue: Showcasing Doctoral Research in Information Science.
  5. Auer, S.; Oelen, A.; Haris, A.M.; Stocker, M.; D'Souza, J.; Farfar, K.E.; Vogt, L.; Prinz, M.; Wiens, V.; Jaradeh, M.Y.: Improving access to scientific literature with knowledge graphs : an experiment using library guidelines to judge information integrity (2020) 0.01
    
    Abstract
    The transfer of knowledge has not changed fundamentally for many hundreds of years: it is usually document-based - formerly printed on paper as a classic essay, nowadays as a PDF. With around 2.5 million new research contributions every year, researchers drown in a flood of pseudo-digitized PDF publications. As a result, research is seriously weakened. In this article, we argue for representing scholarly contributions in a structured and semantic way as a knowledge graph. The advantage is that information represented in a knowledge graph is readable by both machines and humans. As an example, we give an overview of the Open Research Knowledge Graph (ORKG), a service implementing this approach. For creating the knowledge graph representation, we rely on a mixture of manual (crowd/expert sourcing) and (semi-)automated techniques. Only with such a combination of human and machine intelligence can we achieve the quality of representation required for novel exploration and assistance services for researchers. As a result, a scholarly knowledge graph such as the ORKG can be used to give a condensed overview of the state of the art on a particular research question, for example as a tabular comparison of contributions according to various characteristics of the approaches. Further intuitive access interfaces to such scholarly knowledge graphs include domain-specific (chart) visualizations or the answering of natural language questions.
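    As a sketch of the "tabular comparison" idea only (not ORKG's actual data model or API), contribution descriptions can be projected onto shared properties:

      # Hypothetical contribution descriptions, keyed by paper.
      contributions = {
          "Paper A": {"method": "CNN", "dataset": "ImageNet", "accuracy": "0.91"},
          "Paper B": {"method": "SVM", "dataset": "ImageNet", "accuracy": "0.84"},
      }
      properties = ["method", "dataset", "accuracy"]

      # One row per property, one column per paper; '-' marks missing values.
      print("\t".join(["property"] + list(contributions)))
      for prop in properties:
          row = [prop] + [contributions[p].get(prop, "-") for p in contributions]
          print("\t".join(row))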
    Source
    Bibliothek: Forschung und Praxis. 44(2020) H.3, S.516-529
  6. Fagundes, P.B.; Freund, G.P.; Vital, L.P.; Monteiro de Barros, C.; Macedo, D.D.J.de: Taxonomias, ontologias e tesauros : possibilidades de contribuição para o processo de Engenharia de Requisitos (2020) 0.01
    
    Abstract
    Some of the fundamental activities of the software development process belong to the discipline of Requirements Engineering, whose objective is the discovery, analysis, documentation and verification of the requirements that will be part of the system. Requirements are the conditions or capabilities that software must have or perform to meet the users' needs. The present study is being developed to propose a model of cooperation between Information Science and Requirements Engineering. It presents the results of an analysis of the possibilities of using the knowledge organization systems taxonomies, thesauri and ontologies during the activities of Requirements Engineering: design, elicitation, elaboration, negotiation, specification, validation and requirements management. From the results obtained it was possible to identify at which stage of the Requirements Engineering process each type of knowledge organization system could be used. We expect this study to highlight the need for new research and proposals to strengthen the exchange between Information Science, as a science that has information as its object of study, and Requirements Engineering, which finds in information the raw material for identifying the informational needs of software users.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  7. Amirhosseini, M.; Avidan, G.: ¬A dialectic perspective on the evolution of thesauri and ontologies (2021) 0.01
    
    Abstract
    The purpose of this article is to identify the most important factors and features in the evolution of thesauri and ontologies through a dialectic model. This model relies on a dialectic process or idea which can be discovered via a dialectic method. This method focuses on identifying the logical relationship between a beginning proposition, or an idea called a thesis, a negation of that idea called the antithesis, and the result of the conflict between the two ideas, called a synthesis. During the creation of knowledge organization systems (KOSs), the identification of logical relations between different ideas has been made possible through the consideration and use of the most influential methods and tools, such as dictionaries, Roget's Thesaurus, thesauri, micro-, macro- and metathesauri, ontologies, and lower-, middle- and upper-level ontologies. The analysis process adopted a historical methodology, more specifically a dialectic method and a documentary method as the reasoning process. This supports our arguments and synthesizes a method for the analysis of research results. As confirmed by the research results, the principle of unity has shown to be the most important factor in the development and evolution of the structure of knowledge organization systems and their types. There are various types of unity to consider in the analysis of logical relations, including the principle of unity of alphabetical order, unity of science, semantic unity, structural unity and conceptual unity. The results clearly demonstrate a movement from plurality to unity in the assembling of the complex structure of knowledge organization systems to increase information and knowledge storage and retrieval performance.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.01
    
    Abstract
    Purpose The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspective of data and knowledge transitions. Design/methodology/approach This paper uses conceptual analysis methods. The study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings Vocabularies are the cornerstone for building an accurate understanding of the meaning of data. Vocabularies provide a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage in KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value This paper first describes the composition of vocabularies, linked data and KGs. More importantly, it analyzes and summarizes the interrelatedness of these factors, which arises from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
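    A toy illustration of the schema-layer/data-layer split described above; the representation and vocabulary are invented for the example:

      # Schema layer: which properties each class allows (vocabulary invented).
      schema = {"Person": {"name", "knows"}}

      # Data layer: instances described with that vocabulary.
      data = [
          ("alice", "Person", {"name": "Alice", "knows": "bob"}),
          ("bob", "Person", {"age": 42}),  # 'age' is not in the schema
      ]

      def check(data, schema):
          """Report data-layer properties that the schema layer does not declare."""
          for entity, cls, props in data:
              for prop in props:
                  if prop not in schema[cls]:
                      print(f"{entity}: property '{prop}' not declared for {cls}")

      check(data, schema)  # -> bob: property 'age' not declared for Person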
    Date
    22. 1.2021 14:24:32
  9. Guizzardi, G.; Guarino, N.: Semantics, ontology and explanation (2023) 0.00
    
    Abstract
    The terms 'semantics' and 'ontology' are increasingly appearing together with 'explanation', not only in the scientific literature, but also in organizational communication. However, all of these terms are also being significantly overloaded. In this paper, we discuss their strong relation under particular interpretations. Specifically, we discuss a notion of explanation termed ontological unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications) by revealing their ontological commitment in terms of their assumed truthmakers, i.e., the entities in one's ontology that make the propositions in those descriptions true. To illustrate this idea, we employ an ontological theory of relations to explain (by revealing the hidden semantics of) a very simple symbolic model encoded in the standard modeling language UML. We also discuss the essential role played by ontology-driven conceptual models (resulting from this form of explanation processes) in properly supporting semantic interoperability tasks. Finally, we discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.
  10. Zhitomirsky-Geffet, M.; Avidan, G.: ¬A new framework for systematic analysis and classification of inconsistencies in multi-viewpoint ontologies (2021) 0.00
    
    Abstract
    Plurality of beliefs and theories in different knowledge domains calls for modelling multi-viewpoint ontologies and knowledge organization systems (KOS). A generic theoretical approach recently proposed for representing heterogeneity in KOS is to link each ontological statement to a specific validity scope that determines the set of conditions under which the statement is valid. However, the practical applicability of this approach has yet to be empirically assessed. In addition, there is still a need to investigate the types of inconsistencies that might arise in multi-viewpoint ontologies, as well as their possible causes. This study proposes a new framework for the systematic analysis and classification of inconsistencies in multi-viewpoint ontologies. The framework is based on eight generic logical structures of ontological statements. To test the validity of the proposed framework, two ontologies from different knowledge domains were examined. We found that only three of the eight structures led to inconsistencies in both ontologies, while two other structures were always present in logically consistent statements. The study has practical implications for building diversified and personalized knowledge systems.
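    A schematic sketch of the validity-scope idea (an invented representation, not the paper's formalism): contradictory statements are flagged only when their scopes overlap.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Statement:
          subject: str
          predicate: str
          obj: str
          scope: frozenset  # conditions under which the statement holds

      kb = [
          Statement("tomato", "is_a", "fruit", frozenset({"botany"})),
          Statement("tomato", "is_a", "vegetable", frozenset({"cooking"})),
          Statement("tomato", "is_a", "berry", frozenset({"botany"})),
      ]

      # Flag statements that assign different values to the same
      # subject/predicate pair while their validity scopes overlap.
      for i, a in enumerate(kb):
          for b in kb[i + 1:]:
              if ((a.subject, a.predicate) == (b.subject, b.predicate)
                      and a.obj != b.obj and a.scope & b.scope):
                  print("potential inconsistency:", a.obj, "vs", b.obj)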
  11. Soshnikov, D.: ROMEO: an ontology-based multi-agent architecture for online information retrieval (2021) 0.00
    
    Abstract
    This paper describes an approach to path-finding in intelligent graphs, whose vertices are intelligent agents. A possible implementation of this approach is described, based on logical inference in a distributed frame hierarchy. The presented approach can be used for implementing distributed intelligent information systems that include automatic navigation and path generation in hypertext, which can be used, for example, in distance education, as well as for organizing intelligent web catalogues with flexible ontology-based information retrieval.
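    A minimal sketch of the path-finding idea: each vertex is an "agent" that decides locally which neighbours it exposes, and a plain breadth-first search assembles the path. This illustrates the general approach, not the ROMEO architecture itself.

      from collections import deque

      # Each vertex is an "agent" exposing its links through a local rule;
      # a dict of callables stands in for the agents' own inference here.
      agents = {
          "start": lambda: ["a", "b"],
          "a": lambda: ["goal"],
          "b": lambda: ["a"],
          "goal": lambda: [],
      }

      def find_path(start, goal):
          queue, seen = deque([[start]]), {start}
          while queue:
              path = queue.popleft()
              if path[-1] == goal:
                  return path
              for nxt in agents[path[-1]]():  # ask the agent for its neighbours
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(path + [nxt])
          return None

      print(find_path("start", "goal"))  # -> ['start', 'a', 'goal']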
  12. Gnoli, C.: Faceted classifications as linked data : a logical analysis (2021) 0.00
    
    Abstract
    Faceted knowledge organization systems have sophisticated logical structures, making their representation as linked data a demanding task. The term facet is often used in ambiguous ways: while in thesauri facets only work as semantic categories, in classification schemes they also have syntactic functions. The need to convert the Integrative Levels Classification (ILC) into SKOS stimulated a more general analysis of the different kinds of syntactic facets and of how they can be represented in terms of RDF properties and their respective domain and range. A nomenclature is proposed, distinguishing between common facets, which can be appended to any class, that is, have an unrestricted domain; and special facets, which are exclusive to some class, that is, have a restricted domain. In both cases, foci can be taken from any other class (unrestricted range: free facets), or only from subclasses of an existing class (parallel facets), or be defined specifically for the present class (bound facets). Examples of such cases are given from ILC and from the Dewey Decimal Classification (DDC).
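    The distinction can be sketched in rdflib (class and property names invented for illustration): a common facet declares no rdfs:domain, a special facet restricts it, and restricting rdfs:range to a dedicated class of foci approximates a bound facet.

      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/facet-demo/")
      g = Graph()

      # Common facet: appendable to any class, so no rdfs:domain is declared.
      g.add((EX.atTime, RDF.type, RDF.Property))

      # Special facet: exclusive to one class via a restricted rdfs:domain.
      g.add((EX.hasHabitat, RDF.type, RDF.Property))
      g.add((EX.hasHabitat, RDFS.domain, EX.Organism))

      # Restricting rdfs:range to a dedicated class of foci approximates
      # a bound facet; leaving the range open corresponds to a free facet.
      g.add((EX.hasHabitat, RDFS.range, EX.Habitat))

      print(g.serialize(format="turtle"))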
  13. MacFarlane, A.; Missaoui, S.; Frankowska-Takhari, S.: On machine learning and knowledge organization in multimedia information retrieval (2020) 0.00
    
    Abstract
    Recent technological developments have increased the use of machine learning to solve many problems, including many in information retrieval. Multimedia information retrieval as a problem represents a significant challenge to machine learning as a technological solution, but some problems can still be addressed by using appropriate AI techniques. We review the technological developments and provide a perspective on the use of machine learning in conjunction with knowledge organization to address multimedia IR needs. The semantic gap in multimedia IR remains a significant problem in the field, and solutions to it are many years off. However, new technological developments allow the use of knowledge organization and machine learning in multimedia search systems and services. Specifically, we argue that the improved detection of some classes of low-level features in images, music and video can be used in conjunction with knowledge organization to tag or label multimedia content for better retrieval performance. We provide an overview of the use of knowledge organization schemes in machine learning and make recommendations to information professionals on the use of this technology with knowledge organization techniques to solve multimedia IR problems. We introduce a five-step process model that extracts features from multimedia objects (Step 1) from both knowledge organization (Step 1a) and machine learning (Step 1b), merging them together (Step 2) to create an index of those multimedia objects (Step 3). We also cover the further steps of creating an application to utilize the multimedia objects (Step 4) and maintaining and updating the database of features on those objects (Step 5).
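    The five-step model reads naturally as a pipeline skeleton; the following sketch uses placeholder feature extractors and an in-memory index, and is not the authors' implementation:

      def extract_ko_features(obj):
          # Step 1a: features from a knowledge organization scheme (placeholder)
          return {"tags": ["sunset"]}

      def extract_ml_features(obj):
          # Step 1b: machine-learned low-level features (placeholder)
          return {"dominant_color": "orange"}

      def merge(ko, ml):
          # Step 2: merge both feature sets
          return {**ko, **ml}

      index = {}  # Step 3: index of multimedia objects by their merged features

      def add_to_index(obj_id, obj):
          index[obj_id] = merge(extract_ko_features(obj), extract_ml_features(obj))

      def search(key, value):
          # Step 4: an application that queries the index
          return [i for i, feats in index.items() if feats.get(key) == value]

      def update(obj_id, obj):
          # Step 5: maintain and update the stored features
          add_to_index(obj_id, obj)

      add_to_index("img1", object())
      print(search("dominant_color", "orange"))  # -> ['img1']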
  14. Simoes, G.; Machado, L.; Gnoli, C.; Souza, R.: Can an ontologically-oriented KO do without concepts? (2020) 0.00
    
    Abstract
    The ontological approach in the development of KOS is an attempt to overcome the limitations of the traditional epistemological approach. Questions arise about the representation and organization of ontologically-oriented KO units, such as BFO universals or ILC phenomena. The study aims to compare the ontological approaches of BFO and ILC using a hermeneutic approach. We found that the differences between the units of the two systems are primarily due to the formal level of abstraction of BFO and to their different organizations, namely the grouping of phenomena into ILC classes that represent complex compounds of entities in the BFO approach. In both systems the use of concepts is considered instrumental, although in ILC they constitute the intersubjective component of the phenomena, whereas in BFO they serve to access the entities of reality but are not part of them.
    Series
    Advances in knowledge organization; vol.17
  15. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.00
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the available work shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
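    One of the uses identified, refinement of search expressions, can be sketched as follows; the category data here is a hard-coded stand-in for what would actually come from a Wikipedia dump or API:

      # Hard-coded stand-in for Wikipedia's category membership data.
      category_members = {
          "Category:Information retrieval": [
              "Relevance feedback", "Query expansion", "Vector space model",
          ],
      }

      def refine_query(query, category):
          """Expand a search expression with the member titles of a category."""
          extra = category_members.get(category, [])
          return query + " OR " + " OR ".join(f'"{t}"' for t in extra)

      print(refine_query("search engines", "Category:Information retrieval"))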
  16. Oliveira Machado, L.M.; Almeida, M.B.; Souza, R.R.: What researchers are currently saying about ontologies : a review of recent Web of Science articles (2020) 0.00
    
    Abstract
    Traditionally connected to philosophy, the term ontology is increasingly related to the information systems field. Some researchers consider the approaches of the two disciplinary contexts to be completely different. Others consider that, although different, they should talk to each other, as both seek to answer similar questions. Given the extensive literature on this topic, we intend to contribute to the understanding of the use of the term ontology in current research and of the references that support this use. An exploratory study was developed with a mixed methodology and a sample, collected from the Web of Science, of articles published in 2018. The results show the current prevalence of computer science in studies related to ontology, and also of Gruber's view suggesting ontology as a kind of conceptualization, the dominant view in that field. Some researchers, particularly in the field of biomedicine, do not adhere to this dominant view but to another that seems closer to ontological study in the philosophical context. The term ontology, in the context of information systems, appears to be consolidating with a meaning different from the original one, showing traces of a process of "metaphorization" in the transfer of the term between the two fields of study.
  17. Kleineberg, M.: Classifying perspectives : expressing levels of knowing in the Integrative Levels Classification (2020) 0.00
    
    Series
    Advances in knowledge organization; vol.17
  18. Hudon, M.: Facet (2020) 0.00
    
    Abstract
    S.R. Ranganathan is credited with the introduction of the term "facet" in the field of knowledge organization towards the middle of the twentieth century. Facets have traditionally been used to organize document collections and to express complex subjects. In the digital world, they act as filters to facilitate navigation and improve retrieval. But the popularity of the term does not mean that a definitive characterization of the concept has been established. Indeed, several conceptualizations of the facet co-exist. This article provides an overview of formal and informal definitions found in the literature of knowledge organization, followed by a discussion of four common conceptualizations of the facet: process vs product, nature vs function, object vs subject and organization vs navigation.
    Series
    Reviews of Concepts in Knowledge Organization
  19. Biagetti, M.T.: Ontologies as knowledge organization systems (2021) 0.00
    
    Abstract
    This contribution presents the principal features of ontologies, drawing special attention to the comparison between ontologies and the different kinds of knowledge organization systems (KOS). The focus is on the semantic richness exhibited by ontologies, which allows the creation of a great number of relationships between terms. That establishes ontologies as the most evolved type of KOS. The concepts of "conceptualization" and "formalization" and the key components of ontologies are described and discussed, along with upper and domain ontologies and special typologies, such as bibliographical ontologies and biomedical ontologies. The use of ontologies in the digital libraries environment, where they have replaced thesauri for query expansion in searching, and the role they are playing in the Semantic Web, especially for semantic interoperability, are sketched.
    Series
    Reviews of Concepts in Knowledge Organization
  20. Collard, J.; Paiva, V. de; Fong, B.; Subrahmanian, E.: Extracting mathematical concepts from text (2022) 0.00
    
    Abstract
    We investigate different systems for extracting mathematical entities from English texts in the mathematical field of category theory as a first step for constructing a mathematical knowledge graph. We consider four different term extractors and compare their results. This small experiment showcases some of the issues with the construction and evaluation of terms extracted from noisy domain text. We also make available two open corpora in research mathematics, in particular in category theory: a small corpus of 755 abstracts from the journal TAC (3188 sentences), and a larger corpus from the nLab community wiki (15,000 sentences).
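    A deliberately naive extractor in the spirit of this comparison (counting stopword-free bigrams as term candidates); the systems actually compared in the paper are far more sophisticated:

      import re
      from collections import Counter

      STOP = {"the", "a", "an", "of", "and", "is", "in", "we", "for", "every"}

      def candidate_terms(text, n=2):
          """Count stopword-free n-grams as candidate technical terms."""
          words = re.findall(r"[a-z]+", text.lower())
          grams = (" ".join(words[i:i + n])
                   for i in range(len(words) - n + 1)
                   if not set(words[i:i + n]) & STOP)
          return Counter(grams)

      text = ("A natural transformation between functors is a morphism of "
              "functors. Every adjoint functor gives a natural transformation "
              "called the unit.")
      print(candidate_terms(text).most_common(3))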

Languages

  • e 43
  • pt 3
  • d 2

Types

  • a 44
  • el 13
  • p 4
  • A 1
  • EL 1