Search (25 results, page 1 of 2)

  • theme_ss:"Wissensrepräsentation"
  • year_i:[2020 TO 2030}
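
The two active filters above are Solr field queries. The mixed brackets in year_i:[2020 TO 2030} are deliberate Solr range syntax: a square bracket includes its bound, a curly brace excludes it, so the filter means 2020 <= year < 2030. A minimal sketch of how such a filtered search could be issued; the endpoint and core name are assumptions, while q, fq, rows, start and wt are standard Solr parameters:

```python
import requests

SOLR_URL = "http://localhost:8983/solr/literature/select"  # hypothetical endpoint

params = {
    "q": "*:*",
    # One fq entry per filter chip; '[' includes a bound, '}' excludes it,
    # so year_i:[2020 TO 2030} matches 2020 <= year < 2030.
    "fq": ['theme_ss:"Wissensrepräsentation"', "year_i:[2020 TO 2030}"],
    "rows": 20,   # hits per page (this listing shows 20 of 25 on page 1)
    "start": 0,   # zero-based offset; page 2 would use start=20
    "wt": "json",
}
hits = requests.get(SOLR_URL, params=params).json()["response"]
print(hits["numFound"])  # expected: 25
```
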
  1. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.00
    0.004466408 = product of:
      0.033498056 = sum of:
        0.017793551 = weight(_text_:und in 318) [ClassicSimilarity], result of:
          0.017793551 = score(doc=318,freq=4.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.27704588 = fieldWeight in 318, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=318)
        0.015704507 = product of:
          0.031409014 = sum of:
            0.031409014 = weight(_text_:22 in 318) [ClassicSimilarity], result of:
              0.031409014 = score(doc=318,freq=2.0), product of:
                0.101476215 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028978055 = queryNorm
                0.30952093 = fieldWeight in 318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=318)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Date
    22. 5.2021 12:43:05
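
Each hit carries the raw explain() tree produced by Lucene's ClassicSimilarity. Every leaf follows the same formula: tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and a term contributes queryWeight x fieldWeight, scaled by the coord factors. A minimal sketch that re-derives entry 1's numbers from the constants shown in its tree (queryNorm is taken as given rather than recomputed):

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity (TF-IDF)."""
    tf = math.sqrt(freq)                          # e.g. tf(freq=4.0) -> 2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm               # "queryWeight" in the tree
    field_weight = tf * idf * field_norm          # "fieldWeight" in the tree
    return query_weight * field_weight

QUERY_NORM = 0.028978055   # copied from the explain output above

# Entry 1 (doc 318): _text_:und (freq=4) plus _text_:22 (freq=2, coord 1/2)
w_und = term_score(4.0, 13101, 44218, QUERY_NORM, 0.0625)
w_22 = term_score(2.0, 3622, 44218, QUERY_NORM, 0.0625) * 0.5   # coord(1/2)
total = (w_und + w_22) * 2 / 15                                 # coord(2/15)
print(round(w_und, 9), round(w_22, 9), round(total, 9))
# ~0.017793551, ~0.015704507, ~0.004466408: matches the tree to display precision
```

The same recipe reads every tree below; only freq, docFreq, fieldNorm and the coord factors change per entry.
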
  2. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.00
    0.0021190518 = product of:
      0.015892887 = sum of:
        0.011009198 = weight(_text_:und in 572) [ClassicSimilarity], result of:
          0.011009198 = score(doc=572,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.17141339 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
        0.0048836893 = product of:
          0.009767379 = sum of:
            0.009767379 = weight(_text_:information in 572) [ClassicSimilarity], result of:
              0.009767379 = score(doc=572,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.1920054 = fieldWeight in 572, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=572)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
     We consider the use of ontological background knowledge in intelligent information systems and analyze directions for reducing it in line with the specifics of a particular user task. Such reduction aims to simplify knowledge processing without loss of significant information. We propose methods for generating task thesauri from a domain ontology; a task thesaurus contains the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates determine the significance of each concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
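
The approach in entry 2 lends itself to a compact illustration. The following is a toy sketch, not the authors' algorithm: it keeps only the ontology concepts whose semantic similarity to some task term clears a threshold, which is the "reduction without loss of significant information" idea in its simplest greedy form. All names and the similarity interface are invented for illustration.

```python
def build_task_thesaurus(ontology, task_terms, similarity, threshold=0.5):
    """Reduce a domain ontology to a task thesaurus.

    ontology: dict mapping each concept to the set of concepts it relates to.
    similarity(concept, term): estimated semantic similarity in [0, 1].
    """
    # A concept is significant if it is close enough to any task term.
    significant = {
        concept
        for concept in ontology
        if max(similarity(concept, term) for term in task_terms) >= threshold
    }
    # Keep only relations whose both endpoints survived the reduction.
    return {c: ontology[c] & significant for c in significant}

# Example with a trivial substring-match similarity (purely illustrative):
toy = {"thesaurus": {"ontology"}, "ontology": {"thesaurus", "concept"},
       "concept": {"ontology"}, "opera": set()}
sim = lambda concept, term: 1.0 if term in concept else 0.0
print(build_task_thesaurus(toy, ["ontology"], sim))  # drops 'opera' etc.
```
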
  3. Auer, S.; Sens, I.; Stocker, M.: Erschließung wissenschaftlicher Literatur mit dem Open Research Knowledge Graph (2020) 0.00
    0.0018872913 = product of:
      0.028309368 = sum of:
        0.028309368 = weight(_text_:und in 551) [ClassicSimilarity], result of:
          0.028309368 = score(doc=551,freq=18.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.4407773 = fieldWeight in 551, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=551)
      0.06666667 = coord(1/15)
    
    Abstract
     The way knowledge is passed on has not changed fundamentally for many hundreds of years: it is usually document-based - formerly printed on paper as a classic article, today as a PDF. With around 2.5 million new research contributions per year, researchers are drowning in a flood of pseudo-digitized PDF publications. The consequence: research is seriously weakened. Many research results cannot be reproduced by others, there are more and more redundancies, and the sea of publications has become impossible to survey. The TIB - Leibniz-Informationszentrum Technik und Naturwissenschaften is therefore rethinking knowledge communication: instead of static PDF articles, the TIB relies on knowledge graphs. It is working to link knowledge in its most varied forms - texts, images, graphics, audio and video files, 3D models and much more - intuitively by means of dynamic knowledge graphs. The knowledge graph is intended to represent different research ideas, approaches, methods and results in machine-readable form, so that entirely new connections between pieces of knowledge come to light and could contribute to solving global problems. The great societal challenges demand interdisciplinarity and the joining together of individual pieces of insight. With the knowledge graph this can succeed, and the flow of scientific findings can be revolutionized.
  4. Fagundes, P.B.; Freund, G.P.; Vital, L.P.; Monteiro de Barros, C.; Macedo, D.D.J.de: Taxonomias, ontologias e tesauros : possibilidades de contribuição para o processo de Engenharia de Requisitos (2020) 0.00
    0.0017062648 = product of:
      0.012796985 = sum of:
        0.007863713 = weight(_text_:und in 5828) [ClassicSimilarity], result of:
          0.007863713 = score(doc=5828,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.12243814 = fieldWeight in 5828, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5828)
        0.0049332716 = product of:
          0.009866543 = sum of:
            0.009866543 = weight(_text_:information in 5828) [ClassicSimilarity], result of:
              0.009866543 = score(doc=5828,freq=8.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.19395474 = fieldWeight in 5828, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5828)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
     Some of the fundamental activities of the software development process are related to the discipline of Requirements Engineering, whose objective is the discovery, analysis, documentation and verification of the requirements that will be part of the system. Requirements are the conditions or capabilities that software must have or perform to meet the users' needs. The present study is being developed to propose a model of cooperation between Information Science and Requirements Engineering. It aims to present the results of an analysis of the possibilities of using knowledge organization systems - taxonomies, thesauri and ontologies - during the Requirements Engineering activities: design, survey, elaboration, negotiation, specification, validation and requirements management. From the results obtained it was possible to identify at which stage of the Requirements Engineering process each type of knowledge organization system could be used. We expect this study to put in evidence the need for new research and proposals to strengthen the exchange between Information Science, as a science that has information as its object of study, and Requirements Engineering, which finds in information the raw material to identify the informational needs of software users.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  5. Auer, S.; Oelen, A.; Haris, A.M.; Stocker, M.; D'Souza, J.; Farfar, K.E.; Vogt, L.; Prinz, M.; Wiens, V.; Jaradeh, M.Y.: Improving access to scientific literature with knowledge graphs : an experiment using library guidelines to judge information integrity (2020) 0.00
    0.0015136085 = product of:
      0.011352063 = sum of:
        0.007863713 = weight(_text_:und in 316) [ClassicSimilarity], result of:
          0.007863713 = score(doc=316,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.12243814 = fieldWeight in 316, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=316)
        0.0034883497 = product of:
          0.0069766995 = sum of:
            0.0069766995 = weight(_text_:information in 316) [ClassicSimilarity], result of:
              0.0069766995 = score(doc=316,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.13714671 = fieldWeight in 316, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=316)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
     The transfer of knowledge has not changed fundamentally for many hundreds of years: it is usually document-based - formerly printed on paper as a classic essay and nowadays as PDF. With around 2.5 million new research contributions every year, researchers drown in a flood of pseudo-digitized PDF publications. As a result, research is seriously weakened. In this article, we argue for representing scholarly contributions in a structured and semantic way as a knowledge graph. The advantage is that information represented in a knowledge graph is readable by both machines and humans. As an example, we give an overview of the Open Research Knowledge Graph (ORKG), a service implementing this approach. For creating the knowledge graph representation, we rely on a mixture of manual (crowd/expert sourcing) and (semi-)automated techniques. Only with such a combination of human and machine intelligence can we achieve the quality of representation required to allow for novel exploration and assistance services for researchers. As a result, a scholarly knowledge graph such as the ORKG can be used to give a condensed overview of the state of the art on a particular research question, for example as a tabular comparison of contributions according to various characteristics of the approaches. Further possible intuitive access interfaces to such scholarly knowledge graphs include domain-specific (chart) visualizations or the answering of natural language questions.
    Source
    Bibliothek: Forschung und Praxis. 44(2020) H.3, S.516-529
  6. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.00
    0.0014190577 = product of:
      0.021285865 = sum of:
        0.021285865 = sum of:
          0.0055813594 = weight(_text_:information in 179) [ClassicSimilarity], result of:
            0.0055813594 = score(doc=179,freq=4.0), product of:
              0.050870337 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.028978055 = queryNorm
              0.10971737 = fieldWeight in 179, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
          0.015704507 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
            0.015704507 = score(doc=179,freq=2.0), product of:
              0.101476215 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.028978055 = queryNorm
              0.15476047 = fieldWeight in 179, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
      0.06666667 = coord(1/15)
    
    Date
    20. 1.2015 18:30:22
    Footnote
    Beitrag in einem Special Issue: Showcasing Doctoral Research in Information Science.
    Source
    Aslib journal of information management. 72(2020) no.4, S.671-685
  7. Amirhosseini, M.; Avidan, G.: ¬A dialectic perspective on the evolution of thesauri and ontologies (2021) 0.00
    0.00137738 = product of:
      0.010330349 = sum of:
        0.007863713 = weight(_text_:und in 592) [ClassicSimilarity], result of:
          0.007863713 = score(doc=592,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.12243814 = fieldWeight in 592, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=592)
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 592) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=592,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 592, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=592)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
     The purpose of this article is to identify the most important factors and features in the evolution of thesauri and ontologies through a dialectic model. This model relies on a dialectic process or idea which could be discovered via a dialectic method. This method has focused on identifying the logical relationship between a beginning proposition, or an idea called a thesis, a negation of that idea called the antithesis, and the result of the conflict between the two ideas, called a synthesis. During the creation of knowledge organization systems (KOSs), the identification of logical relations between different ideas has been made possible through the consideration and use of the most influential methods and tools such as dictionaries, Roget's Thesaurus, thesauri, micro-, macro- and metathesauri, ontologies, and lower, middle and upper level ontologies. The analysis process has adapted a historical methodology, more specifically a dialectic method and documentary method as the reasoning process. This supports our arguments and synthesizes a method for the analysis of research results. Confirmed by the research results, the principle of unity has shown to be the most important factor in the development and evolution of the structure of knowledge organization systems and their types. There are various types of unity when considering the analysis of logical relations. These include the principle of unity of alphabetical order, unity of science, semantic unity, structural unity and conceptual unity. The results have clearly demonstrated a movement from plurality to unity in the assembling of the complex structure of knowledge organization systems to increase information and knowledge storage and retrieval performance.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.00
    6.5435446E-4 = product of:
      0.009815317 = sum of:
        0.009815317 = product of:
          0.019630633 = sum of:
            0.019630633 = weight(_text_:22 in 106) [ClassicSimilarity], result of:
              0.019630633 = score(doc=106,freq=2.0), product of:
                0.101476215 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028978055 = queryNorm
                0.19345059 = fieldWeight in 106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=106)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Date
    22. 1.2021 14:24:32
  9. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.00
    4.604387E-4 = product of:
      0.00690658 = sum of:
        0.00690658 = product of:
          0.01381316 = sum of:
            0.01381316 = weight(_text_:information in 667) [ClassicSimilarity], result of:
              0.01381316 = score(doc=667,freq=8.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.27153665 = fieldWeight in 667, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=667)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
  10. Soshnikov, D.: ROMEO: an ontology-based multi-agent architecture for online information retrieval (2021) 0.00
    4.5571616E-4 = product of:
      0.006835742 = sum of:
        0.006835742 = product of:
          0.013671484 = sum of:
            0.013671484 = weight(_text_:information in 249) [ClassicSimilarity], result of:
              0.013671484 = score(doc=249,freq=6.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.2687516 = fieldWeight in 249, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=249)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     This paper describes an approach to path-finding in intelligent graphs, with vertices being intelligent agents. A possible implementation of this approach is described, based on logical inference in a distributed frame hierarchy. The presented approach can be used to implement distributed intelligent information systems that include automatic navigation and path generation in hypertext (useful, for example, in distance education), as well as to organize intelligent web catalogues with flexible ontology-based information retrieval.
  11. Almeida, M.B.; Felipe, E.R.; Barcelos, R.: Toward a document-centered ontological theory for information architecture in corporations (2020) 0.00
    4.3507366E-4 = product of:
      0.0065261046 = sum of:
        0.0065261046 = product of:
          0.013052209 = sum of:
            0.013052209 = weight(_text_:information in 8) [ClassicSimilarity], result of:
              0.013052209 = score(doc=8,freq=14.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.256578 = fieldWeight in 8, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=8)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     The beginning of the 21st century witnessed the first movements toward information architecture (IA), originating from the field of library and information science (LIS). IA is acknowledged as an important meta-discipline concerned with the design, implementation, and maintenance of digital information spaces. Despite the relevance of IA, there is little research about the subject within LIS, and still less if one considers initiatives for creating a theory for IA. In this article, we provide a theory for IA and describe the resources needed to create it through ontological models. We also choose the "document" as the key entity for such a theory, contemplating kinds of documents that not only serve to register information, but also create claims and obligations in society. To achieve our goals, we provide a background of subtheories from LIS and from Applied Ontology. As a result, we present some basic theory for IA in the form of a formal framework to represent corporations in which IA activities take place, acknowledging that our approach is de facto a subset of IA we call the enterprise information architecture (EIA) approach. By doing this, we highlight the effects that documents cause within corporations in the scope of EIA.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.11, S.1308-1326
  12. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.00
    3.9466174E-4 = product of:
      0.005919926 = sum of:
        0.005919926 = product of:
          0.011839852 = sum of:
            0.011839852 = weight(_text_:information in 5365) [ClassicSimilarity], result of:
              0.011839852 = score(doc=5365,freq=8.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.23274569 = fieldWeight in 5365, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5365)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, ranging from intrinsic study of the category structure to its use as a tool for processing and analyzing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the available work shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  13. Baroncini, S.; Sartini, B.; Erp, M. Van; Tomasi, F.; Gangemi, A.: Is dc:subject enough? : A landscape on iconography and iconology statements of knowledge graphs in the semantic web (2023) 0.00
    3.7209064E-4 = product of:
      0.0055813594 = sum of:
        0.0055813594 = product of:
          0.011162719 = sum of:
            0.011162719 = weight(_text_:information in 1030) [ClassicSimilarity], result of:
              0.011162719 = score(doc=1030,freq=16.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.21943474 = fieldWeight in 1030, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1030)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     In the last few years, the size of Linked Open Data (LOD) describing artworks, in general or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art-)historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs with a focus on the icon aspects.
     Design/methodology/approach: This study's analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians' theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures' suitability to describe icon information through quantitative and qualitative assessment and (2) their content, qualitatively assessed in terms of correctness and completeness.
     Findings: This study's results reveal several issues in the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.
     Originality/value: The main contribution of this work is an overview of the actual landscape of the icon information expressed in LOD. It is therefore valuable to cultural institutions by providing them with a first domain-specific data quality evaluation. Since this study's results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need for the creation and fostering of such information to provide a more thorough art-historical dimension to LOD.
  14. Ghosh, S.S.; Das, S.; Chatterjee, S.K.: Human-centric faceted approach for ontology construction (2020) 0.00
    3.2888478E-4 = product of:
      0.0049332716 = sum of:
        0.0049332716 = product of:
          0.009866543 = sum of:
            0.009866543 = weight(_text_:information in 5731) [ClassicSimilarity], result of:
              0.009866543 = score(doc=5731,freq=8.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.19395474 = fieldWeight in 5731, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5731)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     In this paper, we propose an ontology building method, called human-centric faceted approach for ontology construction (HCFOC). HCFOC uses the human-centric approach, improvised with the idea of selective dissemination of information (SDI), to deal with context. Further, this ontology construction process makes use of facet analysis and an analytico-synthetic classification approach. This novel fusion contributes to the originality of HCFOC and distinguishes it from other existing ontology construction methodologies. Based on HCFOC, an ontology of the tourism domain has been designed using the Protégé-5.5.0 ontology editor. The HCFOC methodology has provided the necessary flexibility, extensibility and robustness, and has facilitated the capturing of background knowledge. It models the tourism ontology in such a way that it is able to deal with the context of a tourist's information need with precision. This is evident from the result that more than 90% of the users' queries were successfully met. The use of domain knowledge and techniques from both library and information science and computer science has helped in the realization of the desired purpose of this ontology construction process. It is envisaged that HCFOC will have implications for ontology developers. The demonstrated tourism ontology can support any tourism information retrieval system.
  15. MacFarlane, A.; Missaoui, S.; Frankowska-Takhari, S.: On machine learning and knowledge organization in multimedia information retrieval (2020) 0.00
    3.2888478E-4 = product of:
      0.0049332716 = sum of:
        0.0049332716 = product of:
          0.009866543 = sum of:
            0.009866543 = weight(_text_:information in 5732) [ClassicSimilarity], result of:
              0.009866543 = score(doc=5732,freq=8.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.19395474 = fieldWeight in 5732, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5732)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     Recent technological developments have increased the use of machine learning to solve many problems, including many in information retrieval. Multimedia information retrieval as a problem represents a significant challenge to machine learning as a technological solution, but some problems can still be addressed by using appropriate AI techniques. We review the technological developments and provide a perspective on the use of machine learning in conjunction with knowledge organization to address multimedia IR needs. The semantic gap in multimedia IR remains a significant problem in the field, and solutions to it are many years off. However, new technological developments allow the use of knowledge organization and machine learning in multimedia search systems and services. Specifically, we argue that improved detection of some classes of low-level features in images, music and video can be used in conjunction with knowledge organization to tag or label multimedia content for better retrieval performance. We provide an overview of the use of knowledge organization schemes in machine learning and make recommendations to information professionals on the use of this technology with knowledge organization techniques to solve multimedia IR problems. We introduce a five-step process model that extracts features from multimedia objects (Step 1) from both knowledge organization (Step 1a) and machine learning (Step 1b), merging them together (Step 2) to create an index of those multimedia objects (Step 3). We also cover further steps in creating an application to utilize the multimedia objects (Step 4) and maintaining and updating the database of features on those objects (Step 5).
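
The five-step model enumerated at the end of this abstract reads naturally as a small indexing pipeline. A schematic sketch follows; the function bodies and field names are placeholders of my own, and only the step structure comes from the abstract.

```python
def ko_features(obj):          # Step 1a: features from knowledge organization
    return {"labels": obj.get("tags", [])}   # e.g. controlled-vocabulary terms

def ml_features(obj):          # Step 1b: machine-learned features
    return {"detections": []}  # e.g. detected low-level image/music/video classes

def merge(ko, ml):             # Step 2: merge both feature sets
    return {**ko, **ml}

def build_index(objects):      # Step 3: index the multimedia objects
    return {obj["id"]: merge(ko_features(obj), ml_features(obj))
            for obj in objects}

print(build_index([{"id": "v1", "tags": ["jazz"]}]))
# Steps 4 (an application that searches the index) and 5 (maintaining and
# updating the feature database) would sit on top of build_index's output.
```
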
  16. Campos, L.M.: Princípios teóricos usados na elaboração de ontologias e sua influência na recuperação da informação com uso de inferências [Theoretical principles used in ontology building and their influence on information retrieval using inferences] (2021) 0.00
    2.848226E-4 = product of:
      0.004272339 = sum of:
        0.004272339 = product of:
          0.008544678 = sum of:
            0.008544678 = weight(_text_:information in 826) [ClassicSimilarity], result of:
              0.008544678 = score(doc=826,freq=6.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.16796975 = fieldWeight in 826, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=826)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     Several instruments of knowledge organization will reflect different possibilities for information retrieval. In this context, ontologies have a different potential because they allow knowledge discovery, which can be used to retrieve information in a more flexible way. However, this potential can be affected by the theoretical principles adopted in ontology building. The aim of this paper is to discuss, in an introductory way, how a (not exhaustive) set of theoretical principles can influence an aspect of ontologies: their use to obtain inferences. In this context, the role of Ingetraut Dahlberg's Theory of Concept is discussed. The methodology is exploratory and qualitative, and from the technical point of view it uses bibliographic research supported by the content analysis method. It also presents a small example of application as a proof of concept. As results, the paper presents a discussion of the influence of conceptual definition on subsumption inferences, suggests theoretical contributions that should guide the formation of the hierarchical structures on which such inferences are supported, and provides examples of how the absence of such contributions can lead to erroneous inferences.
  17. Zhou, H.; Guns, R.; Engels, T.C.E.: Towards indicating interdisciplinarity : characterizing interdisciplinary knowledge flow (2023) 0.00
    2.79068E-4 = product of:
      0.0041860198 = sum of:
        0.0041860198 = product of:
          0.0083720395 = sum of:
            0.0083720395 = weight(_text_:information in 1072) [ClassicSimilarity], result of:
              0.0083720395 = score(doc=1072,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.16457605 = fieldWeight in 1072, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1072)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    This study contributes to the recent discussions on indicating interdisciplinarity, that is, going beyond catch-all metrics of interdisciplinarity. We propose a contextual framework to improve the granularity and usability of the existing methodology for interdisciplinary knowledge flow (IKF) in which scientific disciplines import and export knowledge from/to other disciplines. To characterize the knowledge exchange between disciplines, we recognize three aspects of IKF under this framework, namely broadness, intensity, and homogeneity. We show how to utilize them to uncover different forms of interdisciplinarity, especially between disciplines with the largest volume of IKF. We apply this framework in two use cases, one at the level of disciplines and one at the level of journals, to show how it can offer a more holistic and detailed viewpoint on the interdisciplinarity of scientific entities than aggregated and context-unaware indicators. We further compare our proposed framework, an indicating process, with established indicators and discuss how such information tools on interdisciplinarity can assist science policy practices such as performance-based research funding systems and panel-based peer review processes.
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.11, S.1325-1340
  18. Jansen, B.; Browne, G.M.: Navigating information spaces : index / mind map / topic map? (2021) 0.00
    2.6310782E-4 = product of:
      0.0039466172 = sum of:
        0.0039466172 = product of:
          0.0078932345 = sum of:
            0.0078932345 = weight(_text_:information in 436) [ClassicSimilarity], result of:
              0.0078932345 = score(doc=436,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.1551638 = fieldWeight in 436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=436)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
  19. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.00
    2.6310782E-4 = product of:
      0.0039466172 = sum of:
        0.0039466172 = product of:
          0.0078932345 = sum of:
            0.0078932345 = weight(_text_:information in 1004) [ClassicSimilarity], result of:
              0.0078932345 = score(doc=1004,freq=8.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.1551638 = fieldWeight in 1004, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1004)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
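
The entry template items (i)-(vi) listed in this abstract map naturally onto a small record type. A hypothetical rendering follows; the class and field names are mine, not the schema of HypoLexicon itself, and the sample values are invented (Geology is one of the four domains the abstract names).

```python
from dataclasses import dataclass, field

@dataclass
class HyponymyEntry:
    hypernym: str                                          # (i) parent concept
    hyponyms: dict[int, list[str]] = field(default_factory=dict)   # (ii) hyponyms by level
    definition: str = ""                                   # (iii) terminological definition
    categories: list[str] = field(default_factory=list)   # (iv) conceptual categories
    subtypes: list[str] = field(default_factory=list)      # (v) hyponymy subtypes
    contexts: list[str] = field(default_factory=list)      # (vi) hyponymic contexts

entry = HyponymyEntry(
    hypernym="rock",
    hyponyms={1: ["igneous rock", "sedimentary rock"], 2: ["basalt", "sandstone"]},
    categories=["Geology"],
)
print(entry.hypernym, entry.hyponyms[1])
```
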
  20. Oliveira Machado, L.M.; Almeida, M.B.; Souza, R.R.: What researchers are currently saying about ontologies : a review of recent Web of Science articles (2020) 0.00
    2.3255666E-4 = product of:
      0.0034883497 = sum of:
        0.0034883497 = product of:
          0.0069766995 = sum of:
            0.0069766995 = weight(_text_:information in 5881) [ClassicSimilarity], result of:
              0.0069766995 = score(doc=5881,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.13714671 = fieldWeight in 5881, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5881)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     Traditionally connected to philosophy, the term ontology is increasingly related to information systems areas. Some researchers consider the approaches of the two disciplinary contexts to be completely different. Others consider that, although different, they should talk to each other, as both seek to answer similar questions. Given the extensive literature on this topic, we intend to contribute to the understanding of the use of the term ontology in current research and of the references that support this use. An exploratory study was developed with a mixed methodology and a sample, collected from the Web of Science, of articles published in 2018. The results show the current prevalence of computer science in studies related to ontology, and also of Gruber's view suggesting ontology as a kind of conceptualization, a dominant view in that field. Some researchers, particularly in the field of biomedicine, do not adhere to this dominant view but to another one that seems closer to ontological study in the philosophical context. The term ontology, in the context of information systems, appears to be consolidating with a meaning different from the original one, presenting traces of a process of "metaphorization" in the transfer of the term between the two fields of study.