Search (328 results, page 1 of 17)

  • type_ss:"a"
  • theme_ss:"Wissensrepräsentation"
  1. Eito-Brun, R.: Ontologies and the exchange of technical information : building a knowledge repository based on ECSS standards (2014) 0.16
    Abstract
    The development of complex projects in the aerospace industry relies on the collaboration of geographically distributed teams and companies. In this context, the ability to share different types of data and information is a key factor in ensuring the successful execution of projects. For European projects, the ECSS standards provide a normative framework that specifies, among other requirements, the document types, information items and artifacts that need to be generated. The characteristics of these information items are usually specified in annexes to the individual ECSS standards, which describe the intended purpose, scope and structure of the documents and information items. In these standards, documents and deliverables should not be considered independent items, but rather the result of packaging different information artifacts for delivery between the involved parties. Successful information integration and knowledge exchange cannot rest exclusively on the conceptual definition of information types; it also requires methods and techniques for serializing and exchanging these documents and artifacts. This area is not covered by the ECSS standards, and the definition of such data schemas would create opportunities to improve collaboration processes among companies. This paper describes the development of an OWL-based ontology to manage the artifacts and information items required by the European Space Agency (ESA) ECSS standards for software development. The ECSS set of standards is the main reference for aerospace projects in Europe; in addition to engineering and managerial requirements, it provides a set of DRDs (Document Requirements Documents) specifying the structure of the documents and records needed to manage projects and to describe intermediate information products and final deliverables.
    Information integration is a must-have in aerospace projects, where different players need to collaborate and share data about requirements, design elements, problems, etc. throughout the product life cycle. The proposed ontology provides the basis for building advanced information systems in which information coming from different companies and institutions can be integrated into a coherent set of related data. It also provides a conceptual framework for developing interfaces and gateways between the different tools and information systems used by the various players in aerospace projects.
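The serialization gap the abstract points to (documents as packages of information items, exchanged between parties) can be sketched as follows. This is a minimal illustration in Python; the class names and the DRD identifier are invented for this sketch and are not taken from the ECSS standards or the paper's ontology.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class InformationItem:
    """One information artifact packaged into a deliverable (names illustrative)."""
    item_id: str
    drd: str            # which Document Requirements Document it answers
    title: str
    sections: list = field(default_factory=list)

@dataclass
class Deliverable:
    """A document viewed as a package of information items, not a monolith."""
    doc_id: str
    items: list = field(default_factory=list)

    def serialize(self) -> str:
        # JSON is one possible wire format for exchange between parties;
        # the paper itself argues for OWL-based schemas instead.
        return json.dumps(asdict(self), indent=2)

srp = Deliverable("SRP-001", [
    InformationItem("II-1", "DRD-SRS", "Software requirements",
                    ["Functional requirements", "Interfaces"]),
])
print(srp.serialize())
```

Any receiving party that shares the schema can deserialize the package back into the same item structure, which is the integration property the ontology aims to guarantee at a conceptual level.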
    Series
    Advances in knowledge organization; vol. 14
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  2. Rocha Souza, R.; Lemos, D.: Knowledge organization systems for the representation of multimedia resources on the Web : a comparative analysis (2020) 0.11
    Abstract
    The lack of standardization in the production, organization and dissemination of information in documentation centers and similar institutions, a result of the digitization of collections and their availability on the internet, has called for integration efforts. The sheer availability of multimedia content has fostered the development of many distinct and, most of the time, independent metadata standards for its description. This study presents and compares the existing metadata standards, vocabularies and ontologies for multimedia annotation, and offers a synthetic overview of their main strengths and weaknesses, aiding efforts toward semantic integration and enhancing the findability of available multimedia resources on the web. We also aim to unveil the characteristics that could, should, and are perhaps not being highlighted in the characterization of multimedia resources.
    Source
    Knowledge organization. 47(2020) no.4, S.300-319
  3. Melgar Estrada, L.M.: Topic maps from a knowledge organization perspective (2011) 0.09
    Abstract
    This article comprises a literature review and conceptual analysis of Topic Maps, the ISO standard for representing information about the structure of information resources, according to the principles of Knowledge Organization (KO). Using the main principles of this discipline, the study shows how Topic Maps is proposed as an ontology model independent of technology. Topic Maps constitutes a 'bibliographic' meta-language able to represent, extend, and integrate almost all existing Knowledge Organization Systems (KOS) in a standards-based generic model applicable to digital content and to the Web. The article also presents an inventory of current applications of Topic Maps in Libraries, Archives, and Museums (LAM), as well as in the Digital Humanities. Finally, some directions for further research are suggested, relating Topic Maps to the main research trends in KO.
    Source
    Knowledge organization. 38(2011) no.1, S.43-61
  4. Kless, D.: Erstellung eines allgemeinen Standards zur Wissensorganisation : Nutzen, Möglichkeiten, Herausforderungen, Wege (2010) 0.06
    Abstract
    To organize knowledge and make it easier to find, various types of vocabularies are commonly used. Because they originate in different communities, these vocabularies are described with differing terminology and with their own methods and tools, and they are standardized, if at all, to varying degrees and with varying focus. To counter this development, the standards for the various vocabulary types must be developed (further), drawing on common, widely accepted modelling languages (e.g. UML) and XML-based markup languages. In addition, a meta-standard is needed that maps the terminology of the different communities onto one another and makes the vocabularies comparable. This would not only enable a well-founded choice of vocabulary type, but would also benefit the mutual mapping of vocabularies and their reuse in general. This strategy was pursued in part in the recently published British standard BS 8723, whose focus remains on thesauri but which also makes explicit reference to other vocabularies. The revision of the standard as international ISO norm 25964, begun in April 2007, allows further, if perhaps small, steps toward the long-term vision of universally applicable standards for knowledge organization.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P. Ohly
  5. Fischer, W.; Bauer, B.: Combining ontologies and natural language (2010) 0.05
    Abstract
    Ontologies are a popular concept for capturing semantic knowledge about the world in a computer-understandable way. Today's ontological standards were designed primarily with logical formalisms in mind, leaving linguistic information aside. However, knowledge is rarely just about the semantic information itself: in order to create and modify existing ontologies, users have to be able to understand the information represented by them. Other problem domains (e.g. Natural Language Processing, NLP) can build on ontological information, but a bridge to syntactic information is missing. In this paper we therefore argue that the possibilities of today's standards such as OWL, RDF, etc. are not enough to provide a sound combination of syntax and semantics, and we present an approach to the linguistic enrichment of ontologies inspired by cognitive linguistics. The goal is a generic, language-independent approach to modelling semantics which can be annotated with arbitrary linguistic information. This knowledge can then be used for better documentation of ontologies as well as for NLP and other Information Extraction (IE) tasks.
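The "linguistic enrichment" idea can be sketched as an annotation layer that attaches lexical entries to ontology concepts and supports the reverse lookup NLP tasks need. A minimal Python sketch; the concept IRIs, lemmas and the attribute set are invented here, and the paper's actual model is richer than a lemma/language/POS triple.

```python
from collections import defaultdict

# Hypothetical enrichment layer: each ontology concept (an IRI) carries
# lexical entries with arbitrary linguistic information.
lexicon = defaultdict(list)

def annotate(concept_iri, lemma, lang, pos):
    lexicon[concept_iri].append({"lemma": lemma, "lang": lang, "pos": pos})

annotate("ex:Vehicle", "vehicle", "en", "NOUN")
annotate("ex:Vehicle", "Fahrzeug", "de", "NOUN")

def concepts_for(token, lang):
    # Bridge from a surface form back to semantics, e.g. for IE tasks.
    return [c for c, entries in lexicon.items()
            if any(e["lemma"] == token and e["lang"] == lang for e in entries)]

print(concepts_for("Fahrzeug", "de"))  # ['ex:Vehicle']
```

Because the semantics side is language-independent, adding a new language is just another `annotate` call against the same concept IRI.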
    Footnote
    Preprint. To be published as Vol 122 in the Conferences in Research and Practice in Information Technology Series by the Australian Computer Society Inc. http://crpit.com/.
  6. Putkey, T.: Using SKOS to express faceted classification on the Semantic Web (2011) 0.05
    Abstract
    This paper looks at the Simple Knowledge Organization System (SKOS) to investigate how a faceted classification can be expressed in RDF and shared on the Semantic Web.
    Statement of the problem: A faceted classification outlines facets as well as subfacets and facet values, and establishes hierarchical and associative relationships. RDF is used to describe how a specific URI relates to a facet value. Not only does RDF decompose "information into pieces," but by incorporating facet values RDF also gives the URI the hierarchical and associative relationships expressed in the faceted classification. Combining faceted classification and RDF creates more knowledge than either would alone: an application understands the subject-predicate-object relationship in RDF and can display hierarchical and associative relationships based on the object (facet) value. This paper investigates whether this idea is indeed useful, used, and applicable, and if so, how a faceted classification can be expressed in RDF and what this expression would look like.
    Literature review: This paper used the same articles as A Survey of Faceted Classification: History, Uses, Drawbacks and the Semantic Web (Putkey, 2010). In that paper, appropriate resources were discovered by searching various databases for "faceted classification" and "faceted search" in the descriptor or title fields; citations were also followed, and the same terms were searched on the Internet. To retrieve documents about RDF, searches combined "faceted classification" and "RDF," again in the descriptor or title fields.
    Methodology: Based on information from research papers, further research was done on SKOS, on examples of SKOS and shared faceted classifications on the Semantic Web, and on how to express SKOS in RDF/XML. The author then took a faceted taxonomy created in a Vocabulary Design class and encoded it using SKOS. Instead of writing RDF in a program such as Notepad, a thesaurus tool was used to create the taxonomy according to SKOS standards and to export it in RDF/XML format. These processes and tools are then analyzed.
    Results: The initial statement of the problem was an extension of the earlier survey paper. To continue the research, more investigation was done into SKOS, a standard for expressing thesauri, taxonomies and faceted classifications so they can be shared on the Semantic Web.
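The core encoding step, a facet value becoming a `skos:Concept` linked upward with `skos:broader`, can be sketched by emitting Turtle rather than RDF/XML. The SKOS namespace is the real W3C one; the example scheme and the two concepts are invented, and a real workflow would use an RDF library or, as in the paper, a thesaurus tool.

```python
# Emit a tiny SKOS fragment in Turtle: one facet ("material") with one value.
PREFIXES = ("@prefix skos: <http://www.w3.org/2004/02/skos/core#> .\n"
            "@prefix ex: <http://example.org/facets/> .\n")

def concept(local, label, broader=None):
    """Render one skos:Concept; broader links the value to its facet."""
    lines = [f"ex:{local} a skos:Concept ;",
             f'    skos:prefLabel "{label}"@en']
    if broader:
        lines[-1] += " ;"
        lines.append(f"    skos:broader ex:{broader}")
    lines[-1] += " ."
    return "\n".join(lines)

ttl = (PREFIXES + "\n"
       + concept("material", "Material") + "\n"
       + concept("wood", "Wood", broader="material"))
print(ttl)
```

An application that understands `skos:broader` can reconstruct the hierarchical relationships of the original faceted classification directly from such triples.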
  7. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.05
    Abstract
    Semantic Web knowledge representation standards, in particular RDF and OWL, often come endowed with a formal semantics that is considered of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs with high precision and recall compared to the deductive gold standard.
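The "deductive gold standard" the paper measures against can be illustrated with a toy forward-chaining reasoner over RDF-like triples, materializing `rdfs:subClassOf` transitivity and `rdf:type` inheritance to a fixpoint. This Python sketch covers only two RDFS rules (real entailment regimes have many more) and the example triples are invented.

```python
# Toy deductive reasoner: repeat rule application until no new triple appears.
def materialize(triples):
    kb = set(triples)
    changed = True
    while changed:
        changed = False
        derived = set()
        for (s, p, o) in kb:
            if p == "rdfs:subClassOf":
                for (s2, p2, o2) in kb:
                    # Rule 1: subClassOf is transitive.
                    if p2 == "rdfs:subClassOf" and s2 == o:
                        derived.add((s, "rdfs:subClassOf", o2))
                    # Rule 2: instances belong to every superclass.
                    if p2 == "rdf:type" and o2 == s:
                        derived.add((s2, "rdf:type", o))
        new = derived - kb
        if new:
            kb |= new
            changed = True
    return kb

kb = materialize({
    ("ex:Dog", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
    ("ex:rex", "rdf:type", "ex:Dog"),
})
assert ("ex:rex", "rdf:type", "ex:Animal") in kb
```

Such a reasoner is provably correct but scales poorly and tolerates no noise, which is exactly the trade-off that motivates the learned alternative evaluated in the paper.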
    Date
    16.11.2018 14:22:01
  8. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.05
    Abstract
    The purpose of this work is to develop an ontology-based framework for an information retrieval system that caters to specific user queries. To create the ontology, information was obtained from a wide range of sources involved in brain tumour study and research, then compiled and analysed to provide a standard, reliable and relevant information base for the proposed system. Facet-based methodology has been used for ontology formalization for quite some time; formalization involves steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility, which becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for developing a robust and flexible model for the domain of brain tumours. Our attempt has been to bridge library and information science with computer science, which itself involved an experimental approach. The faceted approach proved enduring, as it supports properties such as navigation, exploration and faceted browsing. The computer-based brain tumour ontology supports researchers in gathering information on brain tumour research and allows users across the world to access new scientific information quickly, efficiently and intelligently.
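The faceted-browsing property the abstract credits to the model can be sketched as conjunctive filtering over facet/value tags. The facet names and values below are invented for illustration and do not reproduce the paper's ontology.

```python
# Each record is tagged with facet/value pairs; a query is a conjunction
# of facet constraints, which is the essence of faceted navigation.
records = [
    {"id": 1, "tumour_type": "glioma", "modality": "MRI", "region": "frontal"},
    {"id": 2, "tumour_type": "glioma", "modality": "CT", "region": "parietal"},
    {"id": 3, "tumour_type": "meningioma", "modality": "MRI", "region": "frontal"},
]

def facet_search(records, **constraints):
    """Return records matching every requested facet value."""
    return [r for r in records
            if all(r.get(f) == v for f, v in constraints.items())]

hits = facet_search(records, tumour_type="glioma", modality="MRI")
print([r["id"] for r in hits])  # [1]
```

Dropping or adding a constraint narrows or widens the result set facet by facet, which is what makes exploration tractable for a user who cannot phrase a precise query up front.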
    Date
    12. 3.2016 13:21:22
    Source
    Knowledge organization. 43(2016) no.1, S.3-12
  9. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.05
    Abstract
    In a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim to build faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e. ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
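One step of turning multi-hop ancestor-descendant links into direct parent-child links can be sketched as a transitive reduction: keep a link only if no intermediate node sits between its endpoints. This is a simplified stand-in for the paper's hierarchy growth algorithm, not a reproduction of it; the example concepts are invented.

```python
# Keep (a, d) as a parent-child edge only if no node m satisfies
# a -> m -> d; this prunes multi-hop ancestor links to direct ones.
def infer_parents(anc_desc):
    pairs = set(anc_desc)
    nodes = {x for p in pairs for x in p}
    parents = set()
    for (a, d) in pairs:
        if not any((a, m) in pairs and (m, d) in pairs for m in nodes):
            parents.add((a, d))
    return parents

edges = infer_parents([
    ("classification", "svm"),     # multi-hop: goes through "model"
    ("classification", "model"),
    ("model", "svm"),
])
assert edges == {("classification", "model"), ("model", "svm")}
```

On acyclic input the result is the transitive reduction of the ancestor relation; the paper's algorithm additionally resolves conflicts so that the output stays acyclic even when the extracted relations disagree.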
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  10. Kruk, S.R.; McDaniel, B.: Goals of semantic digital libraries (2009) 0.04
    Abstract
    Digital libraries have become a commodity in today's Internet world. More and more information is produced, and more and more non-digital information is being made available digitally. The new, more user-friendly, community-oriented technologies used throughout the Internet are raising the bar of expectations. Digital libraries cannot stand still technologically: if not for the sake of handling the rapidly growing amount and diversity of information, then to provide a user experience matching the ever-rising standards set by the industry. The next generation of digital libraries combines technological solutions, such as P2P, SOA, or Grid, with recent research on semantics and social networks. These solutions are put into practice to answer a variety of requirements imposed on digital libraries.
    Theme
    Information Gateway
  11. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.04
    0.043361153 = product of:
      0.08672231 = sum of:
        0.017165681 = weight(_text_:information in 633) [ClassicSimilarity], result of:
          0.017165681 = score(doc=633,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 633, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=633)
        0.06955662 = sum of:
          0.035403505 = weight(_text_:organization in 633) [ClassicSimilarity], result of:
            0.035403505 = score(doc=633,freq=2.0), product of:
              0.17974974 = queryWeight, product of:
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.050415643 = queryNorm
              0.19695997 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
          0.03415312 = weight(_text_:22 in 633) [ClassicSimilarity], result of:
            0.03415312 = score(doc=633,freq=2.0), product of:
              0.17654699 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050415643 = queryNorm
              0.19345059 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
      0.5 = coord(2/4)
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
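The merge step described above relies on the Topic Maps rule that topics sharing a subject identifier denote the same subject and are collapsed into one, which is what interlinks the two faceted structures. A minimal, hypothetical sketch of such an identity-based merge (the data and function are illustrative stand-ins, not the MHTX prototype):

```python
def merge_topic_maps(tm_a, tm_b):
    """Merge two topic maps: topics with the same subject identifier
    are unified, combining their names and occurrences (here: links
    to hypertext resources)."""
    merged = {}
    for tm in (tm_a, tm_b):
        for subject_id, topic in tm.items():
            entry = merged.setdefault(
                subject_id, {"names": set(), "resources": set()})
            entry["names"] |= topic["names"]
            entry["resources"] |= topic["resources"]
    return merged

# One faceted structure per dissertation (illustrative data)
tm1 = {"facet:indexing": {"names": {"Indexing"},
                          "resources": {"dissertationA.html#ch2"}}}
tm2 = {"facet:indexing": {"names": {"Indexação"},
                          "resources": {"dissertationB.html#ch4"}},
       "facet:retrieval": {"names": {"Retrieval"},
                           "resources": {"dissertationB.html#ch5"}}}

merged = merge_topic_maps(tm1, tm2)
# The shared subject now interlinks both hypertext resources
print(len(merged["facet:indexing"]["resources"]))  # 2
```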
    Date
    22. 2.2013 11:39:23
    Source
    Knowledge organization. 39(2012) no.6, S.432-445
  12. Waard, A. de; Fluit, C.; Harmelen, F. van: Drug Ontology Project for Elsevier (DOPE) (2007) 0.04
    0.042682633 = product of:
      0.085365266 = sum of:
        0.02277285 = weight(_text_:information in 758) [ClassicSimilarity], result of:
          0.02277285 = score(doc=758,freq=22.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.25731003 = fieldWeight in 758, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=758)
        0.06259242 = weight(_text_:standards in 758) [ClassicSimilarity], result of:
          0.06259242 = score(doc=758,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.27855965 = fieldWeight in 758, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=758)
      0.5 = coord(2/4)
    
    Abstract
    Innovative research institutes rely on the availability of complete and accurate information about new research and development, and it is the business of information providers such as Elsevier to provide the required information in a cost-effective way. It is very likely that the semantic web will make an important contribution to this effort, since it facilitates access to an unprecedented quantity of data. However, with the unremitting growth of scientific information, integrating access to all this information remains a significant problem, not least because of the heterogeneity of the information sources involved - sources which may use different syntactic standards (syntactic heterogeneity), organize information in very different ways (structural heterogeneity) and even use different terminologies to refer to the same information (semantic heterogeneity). The ability to address these different kinds of heterogeneity is the key to integrated access. Thesauri have already proven to be a core technology to effective information access as they provide controlled vocabularies for indexing information, and thereby help to overcome some of the problems of free-text search by relating and grouping relevant terms in a specific domain. However, currently there is no open architecture which supports the use of these thesauri for querying other data sources. For example, when we move from the centralized and controlled use of EMTREE within EMBASE.com to a distributed setting, it becomes crucial to improve access to the thesaurus by means of a standardized representation using open data standards that allow for semantic qualifications. In general, mental models and keywords for accessing data diverge between subject areas and communities, and so many different ontologies have been developed. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. 
The aim of the DOPE project (Drug Ontology Project for Elsevier) is to investigate the possibility of providing access to multiple information sources in the area of life science through a single interface.
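Thesaurus-based access of the kind described above typically maps a free-text query term to its preferred term plus related or narrower terms before matching documents. A minimal, hypothetical sketch of that expansion step (the toy thesaurus and documents below are illustrative, not EMTREE or EMBASE data):

```python
# Toy thesaurus: preferred term -> narrower terms, plus an entry map
# that routes synonyms to their preferred form
narrower = {"analgesic": ["aspirin", "ibuprofen"]}
preferred = {"painkiller": "analgesic", "analgesic": "analgesic"}

documents = {
    "doc1": "aspirin reduces fever and pain",
    "doc2": "ibuprofen is an anti-inflammatory drug",
    "doc3": "antibiotics treat bacterial infections",
}

def expand(term):
    """Map a query term to its preferred form plus narrower terms."""
    pref = preferred.get(term.lower())
    if pref is None:
        return {term.lower()}
    return {pref, *narrower.get(pref, [])}

def search(term):
    """Return documents containing any term from the expanded set."""
    terms = expand(term)
    return sorted(doc for doc, text in documents.items()
                  if terms & set(text.split()))

print(search("painkiller"))  # ['doc1', 'doc2']
```

The grouping of related terms is what lets the query "painkiller" retrieve documents that never mention the word, which is the advantage over free-text search that the abstract describes.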
  13. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.04
    0.040477425 = product of:
      0.08095485 = sum of:
        0.014565565 = weight(_text_:information in 2186) [ClassicSimilarity], result of:
          0.014565565 = score(doc=2186,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 2186, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2186)
        0.066389285 = weight(_text_:standards in 2186) [ClassicSimilarity], result of:
          0.066389285 = score(doc=2186,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29545712 = fieldWeight in 2186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=2186)
      0.5 = coord(2/4)
    
    Abstract
    Topic Maps are a standardized modelling approach for the semantic annotation and description of WWW resources. They enable improved search and navigational access to information objects stored in semi-structured information spaces like the WWW. However, the corresponding standards ISO 13250 and XTM (XML Topic Maps) lack formal semantics; several questions concerning e.g. subclassing, inheritance or merging of topics are left open. The proposed TMUML meta model, directly derived from the well-known UML meta model, is a meta model for Topic Maps which enables semantic constraints to be formulated in OCL (Object Constraint Language) in order to answer such open questions and overcome possible inconsistencies in Topic Map repositories. We will examine the XTM merging conditions and show, in several examples, how the TMUML meta model enables semantic constraints for Topic Map merging to be formulated in OCL. Finally, we will show how the TM validation process, i.e., checking whether a Topic Map is well formed, includes our merging conditions.
  14. Almeida Campos, M.L. de; Machado Campos, M.L.; Dávila, A.M.R.; Espanha Gomes, H.; Campos, L.M.; Lira e Oliveira, L. de: Information sciences methodological aspects applied to ontology reuse tools : a study based on genomic annotations in the domain of trypanosomatides (2013) 0.04
    0.03906973 = product of:
      0.07813946 = sum of:
        0.008582841 = weight(_text_:information in 635) [ClassicSimilarity], result of:
          0.008582841 = score(doc=635,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 635, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=635)
        0.06955662 = sum of:
          0.035403505 = weight(_text_:organization in 635) [ClassicSimilarity], result of:
            0.035403505 = score(doc=635,freq=2.0), product of:
              0.17974974 = queryWeight, product of:
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.050415643 = queryNorm
              0.19695997 = fieldWeight in 635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.0390625 = fieldNorm(doc=635)
          0.03415312 = weight(_text_:22 in 635) [ClassicSimilarity], result of:
            0.03415312 = score(doc=635,freq=2.0), product of:
              0.17654699 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050415643 = queryNorm
              0.19345059 = fieldWeight in 635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=635)
      0.5 = coord(2/4)
    
    Date
    22. 2.2013 12:03:53
    Source
    Knowledge organization. 40(2013) no.1, S.50-61
  15. Qin, J.: ¬A relation typology in knowledge organization systems : case studies in the research data management domain (2018) 0.04
    0.035189077 = product of:
      0.070378155 = sum of:
        0.013732546 = weight(_text_:information in 4773) [ClassicSimilarity], result of:
          0.013732546 = score(doc=4773,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1551638 = fieldWeight in 4773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4773)
        0.056645606 = product of:
          0.11329121 = sum of:
            0.11329121 = weight(_text_:organization in 4773) [ClassicSimilarity], result of:
              0.11329121 = score(doc=4773,freq=8.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.6302719 = fieldWeight in 4773, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4773)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Series
    Advances in knowledge organization; vol.16
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro u. M.E. Cerveira
  16. MacFarlane, A.; Missaoui, S.; Frankowska-Takhari, S.: On machine learning and knowledge organization in multimedia information retrieval (2020) 0.03
    0.0336169 = product of:
      0.0672338 = sum of:
        0.017165681 = weight(_text_:information in 5732) [ClassicSimilarity], result of:
          0.017165681 = score(doc=5732,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 5732, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5732)
        0.050068118 = product of:
          0.100136235 = sum of:
            0.100136235 = weight(_text_:organization in 5732) [ClassicSimilarity], result of:
              0.100136235 = score(doc=5732,freq=16.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.55708694 = fieldWeight in 5732, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5732)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Recent technological developments have increased the use of machine learning to solve many problems, including many in information retrieval. Multimedia information retrieval as a problem represents a significant challenge to machine learning as a technological solution, but some problems can still be addressed by using appropriate AI techniques. We review the technological developments and provide a perspective on the use of machine learning in conjunction with knowledge organization to address multimedia IR needs. The semantic gap in multimedia IR remains a significant problem in the field, and solutions to it are many years off. However, new technological developments allow the use of knowledge organization and machine learning in multimedia search systems and services. Specifically, we argue that the improved detection of some classes of low-level features in images, music and video can be used in conjunction with knowledge organization to tag or label multimedia content for better retrieval performance. We provide an overview of the use of knowledge organization schemes in machine learning and make recommendations to information professionals on the use of this technology with knowledge organization techniques to solve multimedia IR problems. We introduce a five-step process model that extracts features from multimedia objects (Step 1) from both knowledge organization (Step 1a) and machine learning (Step 1b), merging them together (Step 2) to create an index of those multimedia objects (Step 3). We also overview further steps in creating an application to utilize the multimedia objects (Step 4) and maintaining and updating the database of features on those objects (Step 5).
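The five-step process model above can be sketched as a small pipeline that merges machine-learning labels with controlled terms from a knowledge organization scheme into a single index. A hypothetical sketch (the stub detector and the tiny scheme are stand-ins, not a real classifier or published vocabulary):

```python
def ml_features(obj):
    """Step 1b: low-level labels from a (stubbed) ML detector."""
    return set(obj["detected"])

def ko_terms(obj, scheme):
    """Step 1a: map detected labels to controlled terms of a KO scheme."""
    return {scheme[label] for label in obj["detected"] if label in scheme}

def build_index(objects, scheme):
    """Steps 2-3: merge both feature sets and index the objects by tag."""
    index = {}
    for obj in objects:
        for tag in ml_features(obj) | ko_terms(obj, scheme):
            index.setdefault(tag, set()).add(obj["id"])
    return index

# Illustrative KO scheme and multimedia objects
scheme = {"dog": "Mammals", "guitar": "Musical instruments"}
objects = [
    {"id": "img1", "detected": ["dog"]},
    {"id": "vid1", "detected": ["guitar", "dog"]},
]

index = build_index(objects, scheme)
# Step 4 (application): retrieve by controlled term
print(sorted(index["Mammals"]))  # ['img1', 'vid1']
```

Step 5 (maintenance) would amount to re-running the feature extraction and rebuilding or incrementally updating this index as objects change.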
    Source
    Knowledge organization. 47(2020) no.1, S.45-55
  17. Stuckenschmidt, H.; Harmelen, F van; Waard, A. de; Scerri, T.; Bhogal, R.; Buel, J. van; Crowlesmith, I.; Fluit, C.; Kampman, A.; Broekstra, J.; Mulligen, E. van: Exploring large document repositories with RDF technology : the DOPE project (2004) 0.03
    0.03298629 = product of:
      0.06597258 = sum of:
        0.021713063 = weight(_text_:information in 762) [ClassicSimilarity], result of:
          0.021713063 = score(doc=762,freq=20.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.2453355 = fieldWeight in 762, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=762)
        0.044259522 = weight(_text_:standards in 762) [ClassicSimilarity], result of:
          0.044259522 = score(doc=762,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.19697142 = fieldWeight in 762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=762)
      0.5 = coord(2/4)
    
    Abstract
    This thesaurus-based search system uses automatic indexing, RDF-based querying, and concept-based visualization of results to support exploration of large online document repositories. Innovative research institutes rely on the availability of complete and accurate information about new research and development. Information providers such as Elsevier make it their business to provide the required information in a cost-effective way. The Semantic Web will likely contribute significantly to this effort because it facilitates access to an unprecedented quantity of data. The DOPE project (Drug Ontology Project for Elsevier) explores ways to provide access to multiple life-science information sources through a single interface. With the unremitting growth of scientific information, integrating access to all this information remains an important problem, primarily because the information sources involved are so heterogeneous. Sources might use different syntactic standards (syntactic heterogeneity), organize information in different ways (structural heterogeneity), and even use different terminologies to refer to the same information (semantic heterogeneity). Integrated access hinges on the ability to address these different kinds of heterogeneity. Also, mental models and keywords for accessing data generally diverge between subject areas and communities; hence, many different ontologies have emerged. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. To serve this need, we've developed a thesaurus-based search system that uses automatic indexing, RDF-based querying, and concept-based visualization. We describe here the conversion of an existing proprietary thesaurus to an open standard format, a generic architecture for thesaurus-based information access, an innovative user interface, and results of initial user studies with the resulting DOPE system.
  18. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.03
    0.032677837 = product of:
      0.06535567 = sum of:
        0.009710376 = weight(_text_:information in 1634) [ClassicSimilarity], result of:
          0.009710376 = score(doc=1634,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.10971737 = fieldWeight in 1634, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1634)
        0.055645294 = sum of:
          0.028322803 = weight(_text_:organization in 1634) [ClassicSimilarity], result of:
            0.028322803 = score(doc=1634,freq=2.0), product of:
              0.17974974 = queryWeight, product of:
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.050415643 = queryNorm
              0.15756798 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
          0.027322493 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.027322493 = score(doc=1634,freq=2.0), product of:
              0.17654699 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050415643 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other-related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can be also facilitated to reveal the contradictory relations in different ontologies. Findings - To assess the feasibility of the approach two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. 
The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of our knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.5, S.494-518
  19. Guns, R.: Tracing the origins of the semantic web (2013) 0.03
    0.03195362 = product of:
      0.06390724 = sum of:
        0.008582841 = weight(_text_:information in 1093) [ClassicSimilarity], result of:
          0.008582841 = score(doc=1093,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 1093, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1093)
        0.0553244 = weight(_text_:standards in 1093) [ClassicSimilarity], result of:
          0.0553244 = score(doc=1093,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.24621427 = fieldWeight in 1093, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1093)
      0.5 = coord(2/4)
    
    Abstract
    The Semantic Web has been criticized for not being semantic. This article examines the questions of why and how the Web of Data, expressed in the Resource Description Framework (RDF), has come to be known as the Semantic Web. Contrary to previous papers, we deliberately take a descriptive stance and do not start from preconceived ideas about the nature of semantics. Instead, we mainly base our analysis on early design documents of the (Semantic) Web. The main determining factor is shown to be link typing, coupled with the influence of online metadata. Both factors already were present in early web standards and drafts. Our findings indicate that the Semantic Web is directly linked to older artificial intelligence work, despite occasional claims to the contrary. Because of link typing, the Semantic Web can be considered an example of a semantic network. Originally network representations of the meaning of natural language utterances, semantic networks have eventually come to refer to any networks with typed (usually directed) links. We discuss possible causes for this shift and suggest that it may be due to confounding paradigmatic and syntagmatic semantic relations.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.10, S.2173-2181
  20. Khoo, C.S.G.; Zhang, D.; Wang, M.; Yun, X.J.: Subject organization in three types of information resources : an exploratory study (2012) 0.03
    0.03192913 = product of:
      0.06385826 = sum of:
        0.02427594 = weight(_text_:information in 831) [ClassicSimilarity], result of:
          0.02427594 = score(doc=831,freq=16.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27429342 = fieldWeight in 831, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=831)
        0.039582323 = product of:
          0.07916465 = sum of:
            0.07916465 = weight(_text_:organization in 831) [ClassicSimilarity], result of:
              0.07916465 = score(doc=831,freq=10.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.44041592 = fieldWeight in 831, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=831)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Knowledge tends to be structured differently in different types of information resources and information genres due to the different purposes of the resource/genre, and the characteristics of the media or format of the resource. This study investigates subject organization in three types of information resources: books (i.e. monographs), Web directories and information websites that provide information on particular subjects. Twelve subjects (topics) were selected in the areas of science, arts/humanities and social science, and two books, two Web directories and two information websites were sampled for each subject. The top two levels of the hierarchical subject organization in each resource were harvested and analyzed. Books have the highest proportion of general subject categories (e.g. history, theory and definition) and process categories (indicating step-by-step instructions). Information websites have the highest proportion of target user categories and genre-specific categories (e.g. about us and contact us), whereas Web directories have the highest proportion of specialty categories (i.e. sub-disciplines), industry-role categories (e.g. stores, schools and associations) and format categories (e.g. books, blogs and videos). Some disciplinary differences were also identified.
    Series
    Advances in knowledge organization; vol.13
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. u. K.S. Raghavan

Languages

  • e 282
  • d 39
  • pt 3
  • sp 1

Types

  • el 41
  • x 1