Search (448 results, page 1 of 23)

  • theme_ss:"Wissensrepräsentation"
  1. Eito-Brun, R.: Ontologies and the exchange of technical information : building a knowledge repository based on ECSS standards (2014) 0.16
    0.15762594 = product of:
      0.21016793 = sum of:
        0.025691241 = weight(_text_:information in 1436) [ClassicSimilarity], result of:
          0.025691241 = score(doc=1436,freq=28.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.29028487 = fieldWeight in 1436, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1436)
        0.117099695 = weight(_text_:standards in 1436) [ClassicSimilarity], result of:
          0.117099695 = score(doc=1436,freq=14.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.5211374 = fieldWeight in 1436, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=1436)
        0.067376986 = sum of:
          0.040054493 = weight(_text_:organization in 1436) [ClassicSimilarity], result of:
            0.040054493 = score(doc=1436,freq=4.0), product of:
              0.17974974 = queryWeight, product of:
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.050415643 = queryNorm
              0.22283478 = fieldWeight in 1436, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.03125 = fieldNorm(doc=1436)
          0.027322493 = weight(_text_:22 in 1436) [ClassicSimilarity], result of:
            0.027322493 = score(doc=1436,freq=2.0), product of:
              0.17654699 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050415643 = queryNorm
              0.15476047 = fieldWeight in 1436, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1436)
      0.75 = coord(3/4)
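    A worked recomputation of the explain tree above (a minimal Python sketch, assuming Lucene's ClassicSimilarity definitions shown in the tree: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), per-clause score = queryWeight * fieldWeight, with matched clauses summed and scaled by coord; queryNorm is taken as given from the output):

      import math

      def clause_weight(freq, doc_freq, max_docs, field_norm, query_norm):
          # One clause of ClassicSimilarity: queryWeight * fieldWeight
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # e.g. 1.7554779 for "information"
          tf = math.sqrt(freq)                               # e.g. 5.2915025 for freq=28
          query_weight = idf * query_norm                    # e.g. 0.08850355
          field_weight = tf * idf * field_norm               # e.g. 0.29028487
          return query_weight * field_weight

      QN = 0.050415643                                       # queryNorm from the output above
      w_info = clause_weight(28, 20772, 44218, 0.03125, QN)  # ~0.02569, matches 0.025691241
      w_std  = clause_weight(14, 1393, 44218, 0.03125, QN)   # ~0.11710, matches 0.117099695
      # document score = (sum of matched clause weights) * coord(3/4)

    The remaining explain trees in this result list follow the same pattern; only freq, docFreq, fieldNorm and the set of matched clauses differ.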
    
    Abstract
    The development of complex projects in the aerospace industry is based on the collaboration of geographically distributed teams and companies. In this context, the need to share different types of data and information is a key factor in assuring the successful execution of projects. For European projects, the ECSS standards provide a normative framework that specifies, among other requirements, the different document types, information items and artifacts that need to be generated. The specification of the characteristics of these information items is usually incorporated as an annex to the different ECSS standards, and these annexes state the intended purpose, scope and structure of the documents and information items. In these standards, documents or deliverables should not be considered independent items, but rather the result of packaging different information artifacts for delivery between the involved parties. Successful information integration and knowledge exchange cannot be based exclusively on the conceptual definition of information types; it also requires the definition of methods and techniques for serializing and exchanging these documents and artifacts. This area is not covered by the ECSS standards, and the definition of such data schemas would open opportunities for improving collaboration processes among companies. This paper describes the development of an OWL-based ontology to manage the different artifacts and information items requested in the European Space Agency (ESA) ECSS standards for SW development. The ECSS set of standards is the main reference in aerospace projects in Europe; in addition to engineering and managerial requirements, it provides a set of DRD (Document Requirements Documents) with the structure of the different documents and records necessary to manage projects and describe intermediate information products and final deliverables. Information integration is a must-have in aerospace projects, where different players need to collaborate and share data about requirements, design elements, problems, etc. during the life cycle of the products. The proposed ontology provides the basis for building advanced information systems in which the information coming from different companies and institutions can be integrated into a coherent set of related data. It also provides a conceptual framework to enable the development of interfaces and gateways between the different tools and information systems used by the different players in aerospace projects.
    Series
    Advances in knowledge organization; vol. 14
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  2. Definition of the CIDOC Conceptual Reference Model (2003) 0.12
    0.122677386 = product of:
      0.16356985 = sum of:
        0.01029941 = weight(_text_:information in 1652) [ClassicSimilarity], result of:
          0.01029941 = score(doc=1652,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 1652, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1652)
        0.13277857 = weight(_text_:standards in 1652) [ClassicSimilarity], result of:
          0.13277857 = score(doc=1652,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.59091425 = fieldWeight in 1652, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=1652)
        0.02049187 = product of:
          0.04098374 = sum of:
            0.04098374 = weight(_text_:22 in 1652) [ClassicSimilarity], result of:
              0.04098374 = score(doc=1652,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23214069 = fieldWeight in 1652, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1652)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This document is the formal definition of the CIDOC Conceptual Reference Model ("CRM"), a formal ontology intended to facilitate the integration, mediation and interchange of heterogeneous cultural heritage information. The CRM is the culmination of more than a decade of standards development work by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM). Work on the CRM itself began in 1996 under the auspices of the ICOM-CIDOC Documentation Standards Working Group. Since 2000, development of the CRM has been officially delegated by ICOM-CIDOC to the CIDOC CRM Special Interest Group, which collaborates with the ISO working group ISO/TC46/SC4/WG9 to bring the CRM to the form and status of an International Standard.
    Date
    6. 8.2010 14:22:28
    Editor
    ICOM/CIDOC Documentation Standards Group
    Issue
    Version 3.4.9 - 30.11.2003. Produced by the ICOM/CIDOC Documentation Standards Group, continued by the CIDOC CRM Special Interest Group.
  2. Rocha Souza, R.; Lemos, D.: Knowledge organization systems for the representation of multimedia resources on the Web : a comparative analysis (2020) 0.11
    0.10573533 = product of:
      0.14098044 = sum of:
        0.01029941 = weight(_text_:information in 5993) [ClassicSimilarity], result of:
          0.01029941 = score(doc=5993,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 5993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5993)
        0.093888626 = weight(_text_:standards in 5993) [ClassicSimilarity], result of:
          0.093888626 = score(doc=5993,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.41783947 = fieldWeight in 5993, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=5993)
        0.036792405 = product of:
          0.07358481 = sum of:
            0.07358481 = weight(_text_:organization in 5993) [ClassicSimilarity], result of:
              0.07358481 = score(doc=5993,freq=6.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.40937364 = fieldWeight in 5993, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5993)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The lack of standardization in the production, organization and dissemination of information in documentation centers and similar institutions, resulting from the digitization of collections and their availability on the internet, has called for integration efforts. The sheer availability of multimedia content has fostered the development of many distinct and, most of the time, independent metadata standards for its description. This study aims at presenting and comparing the existing metadata standards, vocabularies and ontologies for multimedia annotation and also tries to offer a synthetic overview of their main strengths and weaknesses, aiding efforts toward semantic integration and enhancing the findability of available multimedia resources on the web. We also aim at unveiling the characteristics that could, should and perhaps are not being highlighted in the characterization of multimedia resources.
    Source
    Knowledge organization. 47(2020) no.4, S.300-319
  4. Melgar Estrada, L.M.: Topic maps from a knowledge organization perspective (2011) 0.09
    0.09257929 = product of:
      0.12343906 = sum of:
        0.014565565 = weight(_text_:information in 4298) [ClassicSimilarity], result of:
          0.014565565 = score(doc=4298,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 4298, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4298)
        0.066389285 = weight(_text_:standards in 4298) [ClassicSimilarity], result of:
          0.066389285 = score(doc=4298,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29545712 = fieldWeight in 4298, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=4298)
        0.042484205 = product of:
          0.08496841 = sum of:
            0.08496841 = weight(_text_:organization in 4298) [ClassicSimilarity], result of:
              0.08496841 = score(doc=4298,freq=8.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.47270393 = fieldWeight in 4298, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4298)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This article comprises a literature review and conceptual analysis of Topic Maps, the ISO standard for representing information about the structure of information resources, according to the principles of Knowledge Organization (KO). Using the main principles from this discipline, the study shows how Topic Maps is proposed as an ontology model independent of technology. Topic Maps constitutes a 'bibliographic' meta-language able to represent, extend, and integrate almost all existing Knowledge Organization Systems (KOS) in a standards-based generic model applicable to digital content and to the Web. This report also presents an inventory of the current applications of Topic Maps in Libraries, Archives, and Museums (LAM), as well as in the Digital Humanities. Finally, some directions for further research are suggested, which relate Topic Maps to the main research trends in KO.
    Source
    Knowledge organization. 38(2011) no.1, S.43-61
  5. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.08
    0.07512338 = product of:
      0.15024675 = sum of:
        0.035678204 = weight(_text_:information in 987) [ClassicSimilarity], result of:
          0.035678204 = score(doc=987,freq=24.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.40312737 = fieldWeight in 987, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=987)
        0.11456855 = sum of:
          0.07358481 = weight(_text_:organization in 987) [ClassicSimilarity], result of:
            0.07358481 = score(doc=987,freq=6.0), product of:
              0.17974974 = queryWeight, product of:
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.050415643 = queryNorm
              0.40937364 = fieldWeight in 987, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
          0.04098374 = weight(_text_:22 in 987) [ClassicSimilarity], result of:
            0.04098374 = score(doc=987,freq=2.0), product of:
              0.17654699 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050415643 = queryNorm
              0.23214069 = fieldWeight in 987, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
      0.5 = coord(2/4)
    
    Abstract
    This book covers the basics of semantic web technologies and indexing languages, and describes how they contribute to improving such languages as tools for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers and indexers. It provides a suitable combination of theoretical foundations and practical applications.
    Content
    Introduction: envisioning semantic information spaces -- Indexing and knowledge organization -- Semantic technologies for knowledge representation -- Information retrieval and knowledge exploration -- Approaches to handle heterogeneity -- Problems with establishing semantic interoperability -- Formalization in indexing languages -- Typification of semantic relations -- Inferences in retrieval processes -- Semantic interoperability and inferences -- Remaining research questions.
    Date
    23. 7.2017 13:49:22
    LCSH
    Information retrieval
    Knowledge representation (Information theory)
    Information organization
    RSWK
    Information Retrieval
    Subject
    Information retrieval
    Knowledge representation (Information theory)
    Information organization
    Information Retrieval
  6. ISO/DIS 5127: Information and documentation - foundation and vocabulary (2013) 0.07
    0.068198666 = product of:
      0.13639733 = sum of:
        0.025748521 = weight(_text_:information in 6070) [ClassicSimilarity], result of:
          0.025748521 = score(doc=6070,freq=18.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.2909321 = fieldWeight in 6070, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6070)
        0.1106488 = weight(_text_:standards in 6070) [ClassicSimilarity], result of:
          0.1106488 = score(doc=6070,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.49242854 = fieldWeight in 6070, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6070)
      0.5 = coord(2/4)
    
    Abstract
    This standard provides the basic terms and their definitions in the field of information and documentation for the purpose of promoting and facilitating knowledge sharing and information exchange. This International Standard presents terms and definitions of selected concepts relevant to the field of information and documentation. If a definition is taken from other standards, the priority of selection is TC46 technical standards, then technical standards in the relevant field, and then terminology-related standards. The scope of this International Standard corresponds to that of ISO/TC46, Standardization of practices relating to libraries, documentation and information centres, publishing, archives, records management, museum documentation, indexing and abstracting services, and information science. ISO 5127 was prepared by Technical Committee ISO/TC 46, Information and Documentation, WG4, Terminology of information and documentation. This second edition cancels and replaces the first edition (ISO 5127:2001), which has been technically revised to overcome problems in the practical application of ISO 5127:2001 and to take account of new developments in the field of information and documentation.
  7. Kless, D.: Erstellung eines allgemeinen Standards zur Wissensorganisation : Nutzen, Möglichkeiten, Herausforderungen, Wege (2010) 0.06
    0.06386268 = product of:
      0.12772536 = sum of:
        0.1106488 = weight(_text_:standards in 4422) [ClassicSimilarity], result of:
          0.1106488 = score(doc=4422,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.49242854 = fieldWeight in 4422, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4422)
        0.01707656 = product of:
          0.03415312 = sum of:
            0.03415312 = weight(_text_:22 in 4422) [ClassicSimilarity], result of:
              0.03415312 = score(doc=4422,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19345059 = fieldWeight in 4422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4422)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Various types of vocabularies are frequently used to organize knowledge and make it easier to find. Because they originate in different communities, these vocabularies are described with differing terminology and with their own methods and tools, and they are standardized, if at all, to varying degrees and with a varying focus. To counter this development, the standards for the different vocabulary types must, on the one hand, be (further) developed, drawing on common, now generally accepted modelling languages (e.g. UML) and XML-based markup languages. On the other hand, a meta-standard is needed that maps the terminology of the different communities onto one another and makes the vocabularies comparable. This would not only enable an informed choice of vocabulary type, but would also benefit the mapping of vocabularies onto each other and the reuse of vocabularies in general. This strategy has been pursued in part in the recently published British standard BS 8723, whose focus remains on thesauri but which also makes explicit reference to other vocabularies. The revision of this standard as international ISO standard 25964, begun in April 2007, allows further, if perhaps small, steps towards a long-term vision of universally accepted standards for knowledge organization.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
  8. Semantic digital libraries (2009) 0.05
    0.05273524 = product of:
      0.070313655 = sum of:
        0.011892734 = weight(_text_:information in 3371) [ClassicSimilarity], result of:
          0.011892734 = score(doc=3371,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1343758 = fieldWeight in 3371, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3371)
        0.044259522 = weight(_text_:standards in 3371) [ClassicSimilarity], result of:
          0.044259522 = score(doc=3371,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.19697142 = fieldWeight in 3371, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=3371)
        0.014161401 = product of:
          0.028322803 = sum of:
            0.028322803 = weight(_text_:organization in 3371) [ClassicSimilarity], result of:
              0.028322803 = score(doc=3371,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.15756798 = fieldWeight in 3371, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3371)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Libraries have always been an inspiration for the standards and technologies developed by semantic web activities. However, except for the Dublin Core specification, semantic web and social networking technologies have not been widely adopted and further developed by major digital library initiatives and projects. Yet semantic technologies offer a new level of flexibility, interoperability, and relationships for digital repositories. Kruk and McDaniel present semantic web-related aspects of current digital library activities, and introduce their functionality; they show examples ranging from general architectural descriptions to detailed usages of specific ontologies, and thus stimulate the awareness of researchers, engineers, and potential users of those technologies. Their presentation is completed by chapters on existing prototype systems such as JeromeDL, BRICKS, and Greenstone, as well as a look into the possible future of semantic digital libraries. This book is aimed at researchers and graduate students in areas like digital libraries, the semantic web, social networks, and information retrieval. This audience will benefit from detailed descriptions of both today's possibilities and also the shortcomings of applying semantic web technologies to large digital repositories of often unstructured data.
    Content
    Contents: Introduction to Digital Libraries and Semantic Web: Introduction / Bill McDaniel and Sebastian Ryszard Kruk - Digital Libraries and Knowledge Organization / Dagobert Soergel - Semantic Web and Ontologies / Marcin Synak, Maciej Dabrowski and Sebastian Ryszard Kruk - Social Semantic Information Spaces / John G. Breslin - A Vision of Semantic Digital Libraries: Goals of Semantic Digital Libraries / Sebastian Ryszard Kruk and Bill McDaniel - Architecture of Semantic Digital Libraries / Sebastian Ryszard Kruk, Adam Westerki and Ewelina Kruk - Long-time Preservation / Markus Reis - Ontologies for Semantic Digital Libraries: Bibliographic Ontology / Maciej Dabrowski, Marcin Synak and Sebastian Ryszard Kruk - Community-aware Ontologies / Slawomir Grzonkowski, Sebastian Ryszard Kruk, Adam Gzella, Jakub Demczuk and Bill McDaniel - Prototypes of Semantic Digital Libraries: JeromeDL: The Social Semantic Digital Library / Sebastian Ryszard Kruk, Mariusz Cygan, Adam Gzella, Tomasz Woroniecki and Maciej Dabrowski - The BRICKS Digital Library Infrastructure / Bernhard Haslhofer and Predrag Knezević - Semantics in Greenstone / Annika Hinze, George Buchanan, David Bainbridge and Ian Witten - Building the Future - Semantic Digital Libraries in Use: Hyperbooks / Gilles Falquet, Luka Nerima and Jean-Claude Ziswiler - Semantic Digital Libraries for Archiving / Bill McDaniel - Evaluation of Semantic and Social Technologies for Digital Libraries / Sebastian Ryszard Kruk, Ewelina Kruk and Katarzyna Stankiewicz - Conclusions: The Future of Semantic Digital Libraries / Sebastian Ryszard Kruk and Bill McDaniel
    Theme
    Information Gateway
  9. Fischer, W.; Bauer, B.: Combining ontologies and natural language (2010) 0.05
    0.05125823 = product of:
      0.10251646 = sum of:
        0.02427594 = weight(_text_:information in 3740) [ClassicSimilarity], result of:
          0.02427594 = score(doc=3740,freq=16.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27429342 = fieldWeight in 3740, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3740)
        0.07824052 = weight(_text_:standards in 3740) [ClassicSimilarity], result of:
          0.07824052 = score(doc=3740,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.34819958 = fieldWeight in 3740, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3740)
      0.5 = coord(2/4)
    
    Abstract
    Ontologies are a popular concept for capturing semantic knowledge of the world in a computer-understandable way. Today's ontological standards have been designed primarily with the logical formalisms in mind, thereby leaving the linguistic information aside. However, knowledge is rarely just about the semantic information itself. In order to create and modify existing ontologies, users have to be able to understand the information represented by them. Other problem domains (e.g. Natural Language Processing, NLP) can build on ontological information; however, a bridge to syntactic information is missing. In this paper we therefore argue that the possibilities of today's standards like OWL, RDF, etc. are not enough to provide a sound combination of syntax and semantics, and we present an approach for the linguistic enrichment of ontologies inspired by cognitive linguistics. The goal is to provide a generic, language-independent approach to modelling semantics which can be annotated with arbitrary linguistic information. This knowledge can then be used for better documentation of ontologies as well as for NLP and other Information Extraction (IE) related tasks.
    Footnote
    Preprint. To be published as Vol 122 in the Conferences in Research and Practice in Information Technology Series by the Australian Computer Society Inc. http://crpit.com/.
  10. Putkey, T.: Using SKOS to express faceted classification on the Semantic Web (2011) 0.05
    0.051098473 = product of:
      0.0681313 = sum of:
        0.009710376 = weight(_text_:information in 311) [ClassicSimilarity], result of:
          0.009710376 = score(doc=311,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.10971737 = fieldWeight in 311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
        0.044259522 = weight(_text_:standards in 311) [ClassicSimilarity], result of:
          0.044259522 = score(doc=311,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.19697142 = fieldWeight in 311, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
        0.014161401 = product of:
          0.028322803 = sum of:
            0.028322803 = weight(_text_:organization in 311) [ClassicSimilarity], result of:
              0.028322803 = score(doc=311,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.15756798 = fieldWeight in 311, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.03125 = fieldNorm(doc=311)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This paper looks at the Simple Knowledge Organization System (SKOS) to investigate how a faceted classification can be expressed in RDF and shared on the Semantic Web. Statement of the Problem: Faceted classification outlines facets as well as subfacets and facet values. Hierarchical and associative relationships are established in a faceted classification. RDF is used to describe how a specific URI has a relationship to a facet value. Not only does RDF decompose "information into pieces," but by incorporating facet values RDF also gives the URI the hierarchical and associative relationships expressed in the faceted classification. Combining faceted classification and RDF creates more knowledge than if the two stood alone. An application understands the subject-predicate-object relationship in RDF and can display hierarchical and associative relationships based on the object (facet) value. This paper continues to investigate whether the above idea is indeed useful, used, and applicable. If so, how can a faceted classification be expressed in RDF? What would this expression look like? Literature Review: This paper used the same articles as the paper A Survey of Faceted Classification: History, Uses, Drawbacks and the Semantic Web (Putkey, 2010). In that paper, appropriate resources were discovered by searching various databases for "faceted classification" and "faceted search," either in the descriptor or title fields. Citations were also followed to find more articles, and the Internet was searched for the same terms. To retrieve the documents about RDF, searches combined "faceted classification" and "RDF," looking for these words in either the descriptor or title.
    Methodology: Based on information from the research papers, further research was done on SKOS, on examples of SKOS and shared faceted classifications on the Semantic Web, and on how to express SKOS in RDF/XML. Once confident with these ideas, the author took a faceted taxonomy created in a Vocabulary Design class and encoded it using SKOS. Instead of writing the RDF by hand in a program such as Notepad, a thesaurus tool was used to create the taxonomy according to SKOS standards and then export the thesaurus in RDF/XML format. These processes and tools are then analyzed. Results: The initial statement of the problem was simply an extension of the survey paper done earlier in this class. To continue the research, further work was done on SKOS, a standard for expressing thesauri, taxonomies and faceted classifications so they can be shared on the semantic web.
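    To make the encoding concrete, here is a minimal sketch of the kind of SKOS/RDF output described above, written with Python's rdflib; the namespace and the facet values "Genre"/"Jazz" are hypothetical examples, not taken from the paper's taxonomy:

      from rdflib import Graph, Namespace, Literal
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/facets/")       # hypothetical namespace
      g = Graph()
      g.bind("skos", SKOS)

      scheme = EX["demoFacets"]
      g.add((scheme, RDF.type, SKOS.ConceptScheme))

      genre, jazz = EX["genre"], EX["jazz"]              # a facet and one of its values
      for c in (genre, jazz):
          g.add((c, RDF.type, SKOS.Concept))
          g.add((c, SKOS.inScheme, scheme))

      g.add((genre, SKOS.prefLabel, Literal("Genre", lang="en")))
      g.add((jazz, SKOS.prefLabel, Literal("Jazz", lang="en")))
      g.add((jazz, SKOS.broader, genre))                 # hierarchical relationship
      g.add((genre, SKOS.narrower, jazz))
      g.add((jazz, SKOS.related, EX["improvisation"]))   # associative relationship

      print(g.serialize(format="xml"))                   # RDF/XML serialization

    A dedicated thesaurus tool, as used in the paper, produces the same kind of RDF/XML without hand-writing the triples.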
  11. Garshol, L.M.: Living with topic maps and RDF : Topic maps, RDF, DAML, OIL, OWL, TMCL (2003) 0.05
    0.049133226 = product of:
      0.09826645 = sum of:
        0.020812286 = weight(_text_:information in 3886) [ClassicSimilarity], result of:
          0.020812286 = score(doc=3886,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23515764 = fieldWeight in 3886, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3886)
        0.077454165 = weight(_text_:standards in 3886) [ClassicSimilarity], result of:
          0.077454165 = score(doc=3886,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.34469998 = fieldWeight in 3886, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3886)
      0.5 = coord(2/4)
    
    Abstract
    This paper is about the relationship between the topic map and RDF standards families. It compares the two technologies and looks at ways to make it easier for users to live in a world where both technologies are used. This is done by looking at how to convert information back and forth between the two technologies, how to convert schema information, and how to do queries across both information representations. Ways to achieve all of these goals are presented. This paper extends and improves on earlier work on the same subject, described in [Garshol01b]. This paper was first published in the proceedings of XML Europe 2003, 5-8 May 2003, organized by IDEAlliance, London, UK.
  12. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.05
    0.04769266 = product of:
      0.09538532 = sum of:
        0.006866273 = weight(_text_:information in 4796) [ClassicSimilarity], result of:
          0.006866273 = score(doc=4796,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 4796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4796)
        0.088519044 = weight(_text_:standards in 4796) [ClassicSimilarity], result of:
          0.088519044 = score(doc=4796,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.39394283 = fieldWeight in 4796, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=4796)
      0.5 = coord(2/4)
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as the Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
    Key recommendations of the report are: - That library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights; - That library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data; - That data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies; - That librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
  13. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.05
    0.04765854 = product of:
      0.09531708 = sum of:
        0.07824052 = weight(_text_:standards in 4553) [ClassicSimilarity], result of:
          0.07824052 = score(doc=4553,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.34819958 = fieldWeight in 4553, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.01707656 = product of:
          0.03415312 = sum of:
            0.03415312 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.03415312 = score(doc=4553,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete and terminating, i.e. correct in a very strong sense. For various reasons, though - in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data - it has been argued that alternative means of reasoning should be investigated which hold high promise for scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  14. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.05
    0.046916284 = product of:
      0.09383257 = sum of:
        0.02427594 = weight(_text_:information in 2831) [ClassicSimilarity], result of:
          0.02427594 = score(doc=2831,freq=16.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27429342 = fieldWeight in 2831, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2831)
        0.06955662 = sum of:
          0.035403505 = weight(_text_:organization in 2831) [ClassicSimilarity], result of:
            0.035403505 = score(doc=2831,freq=2.0), product of:
              0.17974974 = queryWeight, product of:
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.050415643 = queryNorm
              0.19695997 = fieldWeight in 2831, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2831)
          0.03415312 = weight(_text_:22 in 2831) [ClassicSimilarity], result of:
            0.03415312 = score(doc=2831,freq=2.0), product of:
              0.17654699 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050415643 = queryNorm
              0.19345059 = fieldWeight in 2831, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2831)
      0.5 = coord(2/4)
    
    Abstract
    The purpose of this work is to develop an ontology-based framework for building an information retrieval system that caters to users' specific queries. For creating such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility, which becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for developing a robust and flexible model for the domain of brain tumours. Our attempt has been to bridge library and information science and computer science, which itself involved an experimental approach. It was found that a faceted approach is truly enduring, as it helps achieve properties such as navigation, exploration and faceted browsing. Computer-based brain tumour ontology supports the work of researchers towards gathering information on brain tumour research and allows users across the world to intelligently access new scientific information quickly and efficiently.
    Date
    12. 3.2016 13:21:22
    Source
    Knowledge organization. 43(2016) no.1, S.3-12
  15. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.05
    0.045186404 = product of:
      0.09037281 = sum of:
        0.0800734 = product of:
          0.2402202 = sum of:
            0.2402202 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.2402202 = score(doc=400,freq=2.0), product of:
                0.42742437 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050415643 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.01029941 = weight(_text_:information in 400) [ClassicSimilarity], result of:
          0.01029941 = score(doc=400,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.5 = coord(2/4)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e. ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
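    A loose illustration of the acyclicity constraint mentioned in the abstract (a minimal Python sketch, not the authors' actual algorithm: a candidate parent-to-child link is accepted only if it does not close a cycle in the hierarchy built so far; the example concepts are hypothetical):

      from collections import defaultdict

      def grow_hierarchy(candidate_links):
          children = defaultdict(set)

          def reaches(src, dst):
              # depth-first search: is dst already a descendant of src?
              stack, seen = [src], set()
              while stack:
                  node = stack.pop()
                  if node == dst:
                      return True
                  if node in seen:
                      continue
                  seen.add(node)
                  stack.extend(children[node])
              return False

          accepted = []
          for parent, child in candidate_links:        # e.g. ordered by extraction confidence
              if not reaches(child, parent):           # would the new link close a cycle?
                  children[parent].add(child)
                  accepted.append((parent, child))
          return accepted

      links = [("classification", "svm"),
               ("classification", "face recognition"),
               ("svm", "classification")]              # conflicting candidate, forms a cycle
      print(grow_hierarchy(links))                     # the cyclic candidate is rejected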
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  16. Kruk, S.R.; McDaniel, B.: Goals of semantic digital libraries (2009) 0.04
    0.043494053 = product of:
      0.08698811 = sum of:
        0.02059882 = weight(_text_:information in 3378) [ClassicSimilarity], result of:
          0.02059882 = score(doc=3378,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23274569 = fieldWeight in 3378, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3378)
        0.066389285 = weight(_text_:standards in 3378) [ClassicSimilarity], result of:
          0.066389285 = score(doc=3378,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29545712 = fieldWeight in 3378, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=3378)
      0.5 = coord(2/4)
    
    Abstract
    Digital libraries have become a commodity in the current world of the Internet. More and more information is produced, and more and more non-digital information is being made available. The new, more user-friendly, community-oriented technologies used throughout the Internet are raising the bar of expectations. Digital libraries cannot stand still with their technologies: if not for the sake of handling the rapidly growing amount and diversity of information, they must provide a better user experience, matching and exceeding the standards set by the industry. The next generation of digital libraries combines technological solutions, such as P2P, SOA, or Grid, with recent research on semantics and social networks. These solutions are put into practice to answer a variety of requirements imposed on digital libraries.
    Theme
    Information Gateway
  17. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.04
    0.043361153 = product of:
      0.08672231 = sum of:
        0.017165681 = weight(_text_:information in 633) [ClassicSimilarity], result of:
          0.017165681 = score(doc=633,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 633, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=633)
        0.06955662 = sum of:
          0.035403505 = weight(_text_:organization in 633) [ClassicSimilarity], result of:
            0.035403505 = score(doc=633,freq=2.0), product of:
              0.17974974 = queryWeight, product of:
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.050415643 = queryNorm
              0.19695997 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
          0.03415312 = weight(_text_:22 in 633) [ClassicSimilarity], result of:
            0.03415312 = score(doc=633,freq=2.0), product of:
              0.17654699 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050415643 = queryNorm
              0.19345059 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
      0.5 = coord(2/4)
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
    Date
    22. 2.2013 11:39:23
    Source
    Knowledge organization. 39(2012) no.6, S.432-445
  18. Waard, A. de; Fluit, C.; Harmelen, F. van: Drug Ontology Project for Elsevier (DOPE) (2007) 0.04
    0.042682633 = product of:
      0.085365266 = sum of:
        0.02277285 = weight(_text_:information in 758) [ClassicSimilarity], result of:
          0.02277285 = score(doc=758,freq=22.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.25731003 = fieldWeight in 758, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=758)
        0.06259242 = weight(_text_:standards in 758) [ClassicSimilarity], result of:
          0.06259242 = score(doc=758,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.27855965 = fieldWeight in 758, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=758)
      0.5 = coord(2/4)
    
    Abstract
    Innovative research institutes rely on the availability of complete and accurate information about new research and development, and it is the business of information providers such as Elsevier to provide the required information in a cost-effective way. It is very likely that the semantic web will make an important contribution to this effort, since it facilitates access to an unprecedented quantity of data. However, with the unremitting growth of scientific information, integrating access to all this information remains a significant problem, not least because of the heterogeneity of the information sources involved - sources which may use different syntactic standards (syntactic heterogeneity), organize information in very different ways (structural heterogeneity) and even use different terminologies to refer to the same information (semantic heterogeneity). The ability to address these different kinds of heterogeneity is the key to integrated access. Thesauri have already proven to be a core technology for effective information access, as they provide controlled vocabularies for indexing information and thereby help to overcome some of the problems of free-text search by relating and grouping relevant terms in a specific domain. However, there is currently no open architecture which supports the use of these thesauri for querying other data sources. For example, when we move from the centralized and controlled use of EMTREE within EMBASE.com to a distributed setting, it becomes crucial to improve access to the thesaurus by means of a standardized representation using open data standards that allow for semantic qualifications. In general, mental models and keywords for accessing data diverge between subject areas and communities, and so many different ontologies have been developed. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. The aim of the DOPE project (Drug Ontology Project for Elsevier) is to investigate the possibility of providing access to multiple information sources in the area of life science through a single interface.
  19. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.04
    0.04211062 = product of:
      0.16844247 = sum of:
        0.16844247 = sum of:
          0.100136235 = weight(_text_:organization in 539) [ClassicSimilarity], result of:
            0.100136235 = score(doc=539,freq=4.0), product of:
              0.17974974 = queryWeight, product of:
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.050415643 = queryNorm
              0.55708694 = fieldWeight in 539, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.078125 = fieldNorm(doc=539)
          0.06830624 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
            0.06830624 = score(doc=539,freq=2.0), product of:
              0.17654699 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050415643 = queryNorm
              0.38690117 = fieldWeight in 539, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=539)
      0.25 = coord(1/4)
    
    Content
    Presentation given at the event "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop, Workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007".
    Date
    26.12.2011 13:22:07
  20. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.04
    0.040477425 = product of:
      0.08095485 = sum of:
        0.014565565 = weight(_text_:information in 2186) [ClassicSimilarity], result of:
          0.014565565 = score(doc=2186,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 2186, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2186)
        0.066389285 = weight(_text_:standards in 2186) [ClassicSimilarity], result of:
          0.066389285 = score(doc=2186,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29545712 = fieldWeight in 2186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=2186)
      0.5 = coord(2/4)
    
    Abstract
    Topic Maps are a standardized modelling approach for the semantic annotation and description of WWW resources. They enable improved search and navigational access to information objects stored in semi-structured information spaces like the WWW. However, the corresponding standards ISO 13250 and XTM (XML Topic Maps) lack formal semantics; several questions concerning, e.g., subclassing, inheritance or merging of topics are left open. The proposed TMUML meta model, directly derived from the well-known UML meta model, is a meta model for Topic Maps which enables semantic constraints to be formulated in OCL (Object Constraint Language) in order to answer such open questions and overcome possible inconsistencies in Topic Map repositories. We will examine the XTM merging conditions and show, in several examples, how the TMUML meta model enables semantic constraints for Topic Map merging to be formulated in OCL. Finally, we will show how the TM validation process, i.e. checking whether a Topic Map is well formed, includes our merging conditions.
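    As a rough illustration of the subject-identity condition involved in merging (a minimal Python sketch under the simplifying assumption that two topics merge when they share a subject identifier; the example topics are hypothetical, and the paper's OCL constraints cover considerably more cases):

      def merge_topics(topics):
          merged = []
          for topic in topics:
              target = next((m for m in merged
                             if m["identifiers"] & topic["identifiers"]), None)
              if target is None:
                  merged.append({"identifiers": set(topic["identifiers"]),
                                 "names": set(topic["names"])})
              else:                                    # same subject: unite characteristics
                  target["identifiers"] |= topic["identifiers"]
                  target["names"] |= topic["names"]
          return merged

      t1 = {"identifiers": {"http://example.org/psi/puccini"}, "names": {"Puccini"}}
      t2 = {"identifiers": {"http://example.org/psi/puccini",
                            "http://example.org/psi/giacomo-puccini"},
            "names": {"Giacomo Puccini"}}
      print(merge_topics([t1, t2]))                    # one topic, both identifiers, both names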

Languages

  • e 372
  • d 65
  • pt 4
  • f 1
  • sp 1

Types

  • a 328
  • el 100
  • m 32
  • x 25
  • s 13
  • n 9
  • r 6
  • p 3
  • A 1
  • EL 1
