Search (549 results, page 1 of 28)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Almeida, M.B.: Ontologia em Ciência da Informação: Teoria e Método (1ª ed., Vol. 1). CRV. http://dx.doi.org/10.24824/978655578679.8; Tecnologia e Aplicações (1ª ed., Vol. 2). CRV. http://dx.doi.org/10.24824/978652511477.4; Curso completo com teoria e exercícios (1ª ed., volume suplementar para professores). CRV. [Review] (2022) 0.08
    0.0816775 = product of:
      0.163355 = sum of:
        0.15959246 = weight(_text_:da in 631) [ClassicSimilarity], result of:
          0.15959246 = score(doc=631,freq=12.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.7791261 = fieldWeight in 631, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=631)
        0.0037625222 = product of:
          0.0075250445 = sum of:
            0.0075250445 = weight(_text_:a in 631) [ClassicSimilarity], result of:
              0.0075250445 = score(doc=631,freq=8.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.15287387 = fieldWeight in 631, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=631)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
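    The relevance figure of each hit is a Lucene ClassicSimilarity "explain" tree like the one above: every term contributes (idf × queryNorm) × (sqrt(termFreq) × idf × fieldNorm), and the partial scores are combined with coordination factors. As a minimal sketch (plain Python using only the numbers printed in the tree, not the search engine's own code), the score of result no. 1 can be reproduced as follows:

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # ClassicSimilarity contribution of one term in one field:
          # (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
          query_weight = idf * query_norm                    # e.g. 4.7981725 * 0.04269026 ~ 0.2048352
          field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight

      # Values taken from the explain tree of result no. 1 (doc 631).
      w_da = term_score(freq=12.0, idf=4.7981725, query_norm=0.04269026, field_norm=0.046875)
      w_a  = term_score(freq=8.0,  idf=1.153047,  query_norm=0.04269026, field_norm=0.046875)

      # The "a" clause carries its own coord(1/2); the outer coord(2/4) halves the sum.
      total = 0.5 * (w_da + 0.5 * w_a)
      print(round(w_da, 8), round(total, 7))   # ~0.15959246 and ~0.0816775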
    
    Abstract
    Over the past 30 years, ontologies have been one of the most fertile grounds of research in the field of Knowledge Organization. It is a complex and controversial topic, owing to the difficulty of defining the concept itself and to the appropriations that different scientific fields have made of it. Originating in philosophy, ontology is today a territory shared by Computer Science, most notably Data Science, and by Information Science, particularly Knowledge Organization. Few authors in this field have not written on the subject, whether addressing its conceptual boundaries or discussing the relationship of ontologies to other knowledge organization systems such as taxonomies, thesauri and classifications.
    Source
    Boletim do Arquivo da Universidade de Coimbra 35(2022) no.1, S.191-198
    Type
    a
  2. Simões, M. da Graça; Machado, L.M.; Souza, R.R.; Almeida, M.B.; Tavares Lopes, A.: Automatic indexing and ontologies : the consistency of research chronology and authoring in the context of Information Science (2018) 0.07
    0.0673805 = product of:
      0.134761 = sum of:
        0.13165708 = weight(_text_:da in 5909) [ClassicSimilarity], result of:
          0.13165708 = score(doc=5909,freq=6.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.64274627 = fieldWeight in 5909, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5909)
        0.0031039226 = product of:
          0.006207845 = sum of:
            0.006207845 = weight(_text_:a in 5909) [ClassicSimilarity], result of:
              0.006207845 = score(doc=5909,freq=4.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.12611452 = fieldWeight in 5909, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5909)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
    U.d.T. 'Indexação automática e ontologias: identificação dos contributos convergentes na ciência da informação' in: Ciência da Informação 46(2017) no.1, S.152-162.
    Type
    a
  3. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.05
    0.04916604 = product of:
      0.09833208 = sum of:
        0.063981645 = product of:
          0.12796329 = sum of:
            0.12796329 = weight(_text_:silva in 633) [ClassicSimilarity], result of:
              0.12796329 = score(doc=633,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.40692633 = fieldWeight in 633, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=633)
          0.5 = coord(1/2)
        0.034350436 = sum of:
          0.0054307333 = weight(_text_:a in 633) [ClassicSimilarity], result of:
            0.0054307333 = score(doc=633,freq=6.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.11032722 = fieldWeight in 633, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
          0.028919702 = weight(_text_:22 in 633) [ClassicSimilarity], result of:
            0.028919702 = score(doc=633,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.19345059 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
      0.5 = coord(2/4)
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
    Date
    22. 2.2013 11:39:23
    Type
    a
  4. Oliveira Lima, G.A.B. de: Hypertext model - HTXM : a model for hypertext organization of documents (2008) 0.05
    0.04894044 = product of:
      0.09788088 = sum of:
        0.09404077 = weight(_text_:da in 2504) [ClassicSimilarity], result of:
          0.09404077 = score(doc=2504,freq=6.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.45910448 = fieldWeight in 2504, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2504)
        0.0038401082 = product of:
          0.0076802163 = sum of:
            0.0076802163 = weight(_text_:a in 2504) [ClassicSimilarity], result of:
              0.0076802163 = score(doc=2504,freq=12.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.15602624 = fieldWeight in 2504, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2504)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    This article reports applied research on the construction and implementation of a semantically structured conceptual prototype to help in the organization and representation of human knowledge in hypertextual systems, based on four references: the Facet Analysis Theory (FAT), the Conceptual Map Theory, the semantic structure of hypertext links, and the technical guidelines of the Associação Brasileira de Normas Técnicas (ABNT). This prototype, called Modelo Hipertextual para Organização de Documentos (MHTX) - Model for Hypertext Organization of Documents (HTXM) - is formed by a semantic structure called Conceptual Map (CM) and Expanded Summary (ES), the latter based on the summary of a selected doctoral thesis for which access points were designed. In the future, this prototype may be used to implement a digital library called BTDECI - UFMG (Biblioteca de Teses e Dissertações do Programa de Pós-Graduação da Escola de Ciência da Informação da UFMG - Library of Theses and Dissertations of the Graduate Program of the School of Information Science of the Universidade Federal de Minas Gerais).
    Type
    a
  5. Marcondes, C.H.; Costa, L.C da.: ¬A model to represent and process scientific knowledge in biomedical articles with semantic Web technologies (2016) 0.04
    0.044742513 = product of:
      0.08948503 = sum of:
        0.054294456 = weight(_text_:da in 2829) [ClassicSimilarity], result of:
          0.054294456 = score(doc=2829,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.26506406 = fieldWeight in 2829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2829)
        0.03519057 = sum of:
          0.0062708696 = weight(_text_:a in 2829) [ClassicSimilarity], result of:
            0.0062708696 = score(doc=2829,freq=8.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.12739488 = fieldWeight in 2829, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2829)
          0.028919702 = weight(_text_:22 in 2829) [ClassicSimilarity], result of:
            0.028919702 = score(doc=2829,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.19345059 = fieldWeight in 2829, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2829)
      0.5 = coord(2/4)
    
    Abstract
    Knowledge organization faces the challenge of managing the amount of knowledge available on the Web. The published literature in the biomedical sciences is a huge source of knowledge, which can only be managed efficiently through automatic methods. The conventional channel for reporting scientific results is Web electronic publishing. Despite its advances, scientific articles are still published in print-oriented formats such as the portable document format (PDF). Semantic Web and Linked Data technologies provide new opportunities for communicating, sharing, and integrating scientific knowledge that can overcome the limitations of the current print format. A semantic model of scholarly electronic articles in the biomedical sciences is proposed here that can overcome the limitations of traditional flat record formats. Scientific knowledge consists of claims made throughout article texts, especially when semantic elements such as questions, hypotheses and conclusions are stated. These elements, although having different roles, express relationships between phenomena. Once such knowledge units are extracted and represented with technologies such as RDF (Resource Description Framework) and linked data, they may be integrated into reasoning chains. Thereby, the results of scientific research can be published and shared in structured formats, enabling crawling by software agents, semantic retrieval, knowledge reuse, validation of scientific results, and identification of traces of scientific discoveries.
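    As a hedged illustration of the claim-as-triples idea described in this abstract (not the authors' actual schema; the namespace and property names below are invented for the example, and the rdflib package is assumed to be installed), a single conclusion-type knowledge unit could be encoded and serialized like this:

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/biomed#")        # hypothetical vocabulary
      article = URIRef("http://example.org/articles/42")  # hypothetical article identifier

      g = Graph()
      claim = EX.conclusion_1
      # One knowledge unit: a conclusion asserting a relation between two phenomena.
      g.add((claim, RDF.type, EX.Conclusion))
      g.add((claim, EX.assertedIn, article))
      g.add((claim, EX.subjectPhenomenon, EX.GeneX_overexpression))
      g.add((claim, EX.relation, EX.increasesRiskOf))
      g.add((claim, EX.objectPhenomenon, EX.DiseaseY))
      g.add((EX.GeneX_overexpression, RDFS.label, Literal("overexpression of gene X")))

      print(g.serialize(format="turtle"))   # structured, crawlable, linkable output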
    Date
    12. 3.2016 13:17:22
    Type
    a
  6. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.04
    0.038870484 = product of:
      0.07774097 = sum of:
        0.07523262 = weight(_text_:da in 117) [ClassicSimilarity], result of:
          0.07523262 = score(doc=117,freq=6.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.36728358 = fieldWeight in 117, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.03125 = fieldNorm(doc=117)
        0.0025083479 = product of:
          0.0050166957 = sum of:
            0.0050166957 = weight(_text_:a in 117) [ClassicSimilarity], result of:
              0.0050166957 = score(doc=117,freq=8.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.10191591 = fieldWeight in 117, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=117)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Information management in corporate environments is a growing problem as companies' information assets grow, along with the need to use them in their operations. Several management models have been put into practice on the most diverse fronts, practices that together constitute so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, increasingly used in corporate environments, adding semantics and expanding their retrieval potential in the composition and reuse of dynamic documents on demand. The proposed model considers stable elements in the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors and business processes. The methodology considered the state of the art of Information Organization, Representation and Retrieval, Organizational Content Management and Semantic Web technologies in the scientific literature as the basis for an integrative conceptual model; the research is therefore qualitative and exploratory. The steps foreseen in the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. As a result, in theoretical terms the extended model makes it possible to process heterogeneous and unstructured data according to the established cut-outs and through the processes listed above, enabling value creation in the composition of dynamic information objects with semantic aggregations to metadata.
    Content
    Doctoral thesis presented to the Graduate Program of the Universidade Federal Fluminense in partial fulfillment of the requirements for the degree of Doctor in Information Science.
    Imprint
    Niterói : Universidade Federal Fluminense / Instituto de Arte e Comunicação Social / Departamento de Ciência da Informação
  7. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.04
    0.037293218 = product of:
      0.074586436 = sum of:
        0.06780345 = product of:
          0.20341034 = sum of:
            0.20341034 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.20341034 = score(doc=400,freq=2.0), product of:
                0.3619285 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04269026 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.0067829825 = product of:
          0.013565965 = sum of:
            0.013565965 = weight(_text_:a in 400) [ClassicSimilarity], result of:
              0.013565965 = score(doc=400,freq=26.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.27559727 = fieldWeight in 400, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with a specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
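    The conflict-resolution idea at the end of this abstract (add inferred parent-child links only while the hierarchy stays acyclic) can be sketched as follows; this is a toy version for illustration, not the authors' algorithm:

      from collections import defaultdict

      def reachable(children, start, target):
          # Depth-first search: can `target` be reached from `start` via child links?
          stack, seen = [start], set()
          while stack:
              node = stack.pop()
              if node == target:
                  return True
              if node in seen:
                  continue
              seen.add(node)
              stack.extend(children.get(node, ()))
          return False

      def grow_hierarchy(candidate_links):
          # Add parent->child links one by one, skipping any link that would
          # close a cycle (i.e. the parent is already a descendant of the child).
          children = defaultdict(list)
          for parent, child in candidate_links:
              if reachable(children, child, parent):
                  continue                 # conflict: dropping the link keeps the hierarchy acyclic
              children[parent].append(child)
          return dict(children)

      links = [("classification", "svm"), ("classification", "knn"),
               ("svm", "classification")]  # last link would create a cycle and is dropped
      print(grow_hierarchy(links))         # {'classification': ['svm', 'knn']}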
    Content
    Vgl.: https://aclanthology.org/D19-5317.pdf.
    Type
    a
  8. Sigel, A.: Was leisten Topic Maps? (2001) 0.03
    0.033906925 = product of:
      0.06781385 = sum of:
        0.065153345 = weight(_text_:da in 5855) [ClassicSimilarity], result of:
          0.065153345 = score(doc=5855,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.31807688 = fieldWeight in 5855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=5855)
        0.0026605048 = product of:
          0.0053210096 = sum of:
            0.0053210096 = weight(_text_:a in 5855) [ClassicSimilarity], result of:
              0.0053210096 = score(doc=5855,freq=4.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.10809815 = fieldWeight in 5855, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5855)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This short contribution sketches the potential of topic map technology (ISO/IEC 13250 and XTM 1.0) for knowledge organization and illustrates it with a list of productive use cases. It also reports briefly on first experiences from experimental application. Using information resources on the topic of social-science migration as an example, the possibilities and limits of topic maps for subject indexing and semantic search are shown. Since this is a terminologically "soft" domain, it is of particular interest how complex relations and multiple indexing views can be implemented and how these affect the retrieval result.
    Type
    a
  9. Haas, M.: Methoden der künstlichen Intelligenz in betriebswirtschaftlichen Anwendungen (2006) 0.03
    0.030713584 = product of:
      0.12285434 = sum of:
        0.12285434 = weight(_text_:da in 4499) [ClassicSimilarity], result of:
          0.12285434 = score(doc=4499,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.59977156 = fieldWeight in 4499, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0625 = fieldNorm(doc=4499)
      0.25 = coord(1/4)
    
    Content
    Diploma thesis for the degree of Diplom-Wirtschaftsinformatiker (FH) at Hochschule Wismar. Vgl.: http://www.wi.hs-wismar.de/~cleve/vorl/projects/da/DA-FS-Haas.pdf.
  10. Campos, L.M.: Princípios teóricos usados na elaboração de ontologias e sua influência na recuperação da informação com uso de inferências [Theoretical principles used in ontology building and their influence on information retrieval using inferences] (2021) 0.03
    0.029221123 = product of:
      0.058442246 = sum of:
        0.054294456 = weight(_text_:da in 826) [ClassicSimilarity], result of:
          0.054294456 = score(doc=826,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.26506406 = fieldWeight in 826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=826)
        0.004147791 = product of:
          0.008295582 = sum of:
            0.008295582 = weight(_text_:a in 826) [ClassicSimilarity], result of:
              0.008295582 = score(doc=826,freq=14.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.1685276 = fieldWeight in 826, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=826)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Several instruments of knowledge organization will reflect different possibilities for information retrieval. In this context, ontologies have a different potential because they allow knowledge discovery, which can be used to retrieve information in a more flexible way. However, this potential can be affected by the theoretical principles adopted in ontology building. The aim of this paper is to discuss, in an introductory way, how a (not exhaustive) set of theoretical principles can influence an aspect of ontologies: their use to obtain inferences. In this context, the role of Ingetraut Dahlberg's Theory of Concept is discussed. The methodology is exploratory and qualitative, and from the technical point of view it uses bibliographic research supported by the content analysis method. It also presents a small example of application as a proof of concept. As results, a discussion of the influence of conceptual definition on subsumption inferences is presented, theoretical contributions are suggested that should guide the formation of the hierarchical structures on which such inferences are supported, and examples are provided of how the absence of such contributions can lead to erroneous inferences.
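    A toy example (not from the paper) of the point about subsumption inferences: if is-a links are stated only between adjacent levels, subsumption queries need their transitive closure, and a single wrongly typed link (for instance a part-of relation modeled as is-a) immediately yields erroneous inferences.

      def subsumers(is_a, concept):
          # All concepts that subsume `concept` via the transitive closure of is-a.
          result, frontier = set(), {concept}
          while frontier:
              parents = {p for c in frontier for p in is_a.get(c, ())}
              frontier = parents - result
              result |= parents
          return result

      # Toy hierarchy with direct is-a statements only.
      is_a = {
          "thesaurus": {"knowledge organization system"},
          "knowledge organization system": {"information artifact"},
          "wheel": {"car"},   # modeling error: a part-of relation stated as is-a
      }

      print(subsumers(is_a, "thesaurus"))  # {'knowledge organization system', 'information artifact'}
      print(subsumers(is_a, "wheel"))      # erroneously infers that every wheel is a car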
    Type
    a
  11. Ehrig, M.; Studer, R.: Wissensvernetzung durch Ontologien (2006) 0.03
    0.028255772 = product of:
      0.056511544 = sum of:
        0.054294456 = weight(_text_:da in 5901) [ClassicSimilarity], result of:
          0.054294456 = score(doc=5901,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.26506406 = fieldWeight in 5901, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5901)
        0.0022170874 = product of:
          0.004434175 = sum of:
            0.004434175 = weight(_text_:a in 5901) [ClassicSimilarity], result of:
              0.004434175 = score(doc=5901,freq=4.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.090081796 = fieldWeight in 5901, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5901)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In computer science, ontologies are formal models of an application domain that support communication between human and/or machine actors and thereby facilitate the exchange and sharing of knowledge in organizations. Using ontologies for the structured representation of knowledge has therefore become increasingly widespread in recent years; thousands of ontologies already exist worldwide. To enable interoperability between software agents or web services built on top of them, the semantic integration of these ontologies is an indispensable prerequisite. As is easy to see, purely manual creation of the mappings is no longer feasible beyond a certain size, complexity, and rate of change of the ontologies; automatic or semi-automatic technologies must support the user. The integration problem has occupied research and industry for many years, for example in the area of database integration. What is new, however, is the possibility of drawing on the complex semantic information contained in ontologies. For ontology integration, this chapter introduces a general six-step process based on the semantic structures. Extensions address efficiency and optimal user involvement in the process, and two applications in which the process has been implemented successfully are presented. A concluding summary points to current trends. Since the approaches can in principle be transferred to any schema that contains a semantic basis, the scope of this research extends far beyond pure ontology applications.
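    The simplest building block of such (semi-)automatic mapping, element-level matching of class labels, can be sketched as follows; this is only an illustration of the idea, not the six-step process described in the chapter:

      from difflib import SequenceMatcher

      def label_similarity(a, b):
          # Crude element-level matcher: normalized string similarity of two class labels.
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      def align(classes_a, classes_b, threshold=0.8):
          # Propose candidate mappings between the class labels of two ontologies.
          return [(a, b, round(label_similarity(a, b), 2))
                  for a in classes_a for b in classes_b
                  if label_similarity(a, b) >= threshold]

      onto_a = ["Person", "Organisation", "Projekt"]
      onto_b = ["person", "organization", "project", "publication"]
      print(align(onto_a, onto_b))   # e.g. [('Person', 'person', 1.0), ('Organisation', 'organization', 0.92), ...]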
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini, u. A. Blumauer
    Type
    a
  12. Silva, S.E.; Reis, L.P.; Fernandes, J.M.; Sester Pereira, A.D.: ¬A multi-layer framework for semantic modeling (2020) 0.03
    0.02767247 = product of:
      0.05534494 = sum of:
        0.051185314 = product of:
          0.10237063 = sum of:
            0.10237063 = weight(_text_:silva in 5712) [ClassicSimilarity], result of:
              0.10237063 = score(doc=5712,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.32554108 = fieldWeight in 5712, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5712)
          0.5 = coord(1/2)
        0.0041596247 = product of:
          0.008319249 = sum of:
            0.008319249 = weight(_text_:a in 5712) [ClassicSimilarity], result of:
              0.008319249 = score(doc=5712,freq=22.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.16900843 = fieldWeight in 5712, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5712)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose The purpose of this paper is to introduce a multi-level framework for semantic modeling (MFSM) based on four signification levels: objects, classes of entities, instances and domains. In addition, four fundamental propositions of the signification process underpin these levels, namely classification, decomposition, instantiation and contextualization. Design/methodology/approach The deductive approach guided the design of this modeling framework. The authors empirically validated the MFSM in two ways. First, the authors identified the signification processes used in articles that deal with semantic modeling. The authors then applied the MFSM to model the semantic context of the literature on lean manufacturing, a field of management science. Findings The MFSM presents a highly consistent approach to the signification process, integrates the semantic modeling literature into a new and comprehensive view, and permits the modeling of any semantic context, thus facilitating the development of knowledge organization systems based on semantic search. Research limitations/implications The use of the MFSM is manual and thus requires considerable effort from the team that decides to model a semantic context. In this paper, the modeling was carried out by specialists; in the future it should be applied with lay users. Practical implications The MFSM opens up avenues to a new form of document classification, to the development of tools based on semantic search, and to investigating how users conduct their searches. Social implications The MFSM can be used to model archives semantically in public or private settings. In the future, it can be incorporated into search engines for more efficient user searches. Originality/value The MFSM provides a new and comprehensive approach to the elementary levels and activities in the process of signification. In addition, this new framework presents a new way to semantically model any context by classifying its objects.
    Type
    a
  13. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    0.025261655 = product of:
      0.05052331 = sum of:
        0.0452023 = product of:
          0.1356069 = sum of:
            0.1356069 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.1356069 = score(doc=701,freq=2.0), product of:
                0.3619285 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04269026 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.0053210096 = product of:
          0.010642019 = sum of:
            0.010642019 = weight(_text_:a in 701) [ClassicSimilarity], result of:
              0.010642019 = score(doc=701,freq=36.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.2161963 = fieldWeight in 701, product of:
                  6.0 = tf(freq=36.0), with freq of:
                    36.0 = termFreq=36.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem reaches a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, and not its representation). This leads to a very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query, implies a necessity to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of a user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue for realizing much more meaningful information retrieval systems.
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  14. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.02
    0.024584174 = product of:
      0.04916835 = sum of:
        0.0452023 = product of:
          0.1356069 = sum of:
            0.1356069 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.1356069 = score(doc=5820,freq=2.0), product of:
                0.3619285 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04269026 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.0039660465 = product of:
          0.007932093 = sum of:
            0.007932093 = weight(_text_:a in 5820) [ClassicSimilarity], result of:
              0.007932093 = score(doc=5820,freq=20.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.16114321 = fieldWeight in 5820, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities, proposing several ways to infer entity graph representations for texts and to rank documents using these structure representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
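    A small sketch of the bag-of-entities idea mentioned above (a toy illustration under the assumption of pre-computed entity annotations, not the dissertation's ranking system):

      from collections import Counter

      # Toy entity annotations; in practice these come from an entity linker.
      DOC_ENTITIES = {
          "d1": ["Ontology_(information_science)", "Information_retrieval"],
          "d2": ["Support_vector_machine", "Information_retrieval"],
      }

      def score(query_entities, doc_entities):
          # Frequency-weighted overlap between query and document in the entity space.
          q, d = Counter(query_entities), Counter(doc_entities)
          return sum(freq * d[e] for e, freq in q.items())

      query = ["Information_retrieval", "Ontology_(information_science)"]
      ranking = sorted(DOC_ENTITIES, key=lambda doc: score(query, DOC_ENTITIES[doc]), reverse=True)
      print(ranking)   # ['d1', 'd2'] -- d1 matches both query entities, d2 only one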
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  15. Stuckenschmidt, H.: Ontologien : Konzepte, Technologien und Anwendungen (2009) 0.02
    0.01900306 = product of:
      0.07601224 = sum of:
        0.07601224 = weight(_text_:da in 37) [ClassicSimilarity], result of:
          0.07601224 = score(doc=37,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.3710897 = fieldWeight in 37, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0546875 = fieldNorm(doc=37)
      0.25 = coord(1/4)
    
    Abstract
    Ontologies have received a great deal of attention through the current developments of the Semantic Web, since technologies are now available that make it possible to use ontologies in information systems. Starting from the basic concepts and ideas of ontologies, which stem from philosophy and linguistics, the book presents the current state of the art in supporting technologies from Semantic Web research and points out promising fields of application.
  16. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.02
    0.016027568 = product of:
      0.06411027 = sum of:
        0.06411027 = sum of:
          0.0062708696 = weight(_text_:a in 6089) [ClassicSimilarity], result of:
            0.0062708696 = score(doc=6089,freq=2.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.12739488 = fieldWeight in 6089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.078125 = fieldNorm(doc=6089)
          0.057839405 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
            0.057839405 = score(doc=6089,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.38690117 = fieldWeight in 6089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=6089)
      0.25 = coord(1/4)
    
    Pages
    S.11-22
    Type
    a
  17. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    0.016027568 = product of:
      0.06411027 = sum of:
        0.06411027 = sum of:
          0.0062708696 = weight(_text_:a in 539) [ClassicSimilarity], result of:
            0.0062708696 = score(doc=539,freq=2.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.12739488 = fieldWeight in 539, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.078125 = fieldNorm(doc=539)
          0.057839405 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
            0.057839405 = score(doc=539,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.38690117 = fieldWeight in 539, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=539)
      0.25 = coord(1/4)
    
    Abstract
    A discussion on current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  18. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.02
    0.016027568 = product of:
      0.06411027 = sum of:
        0.06411027 = sum of:
          0.0062708696 = weight(_text_:a in 4523) [ClassicSimilarity], result of:
            0.0062708696 = score(doc=4523,freq=2.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.12739488 = fieldWeight in 4523, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.078125 = fieldNorm(doc=4523)
          0.057839405 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
            0.057839405 = score(doc=4523,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.38690117 = fieldWeight in 4523, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=4523)
      0.25 = coord(1/4)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
    Type
    a
  19. Deokattey, S.; Neelameghan, A.; Kumar, V.: ¬A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.01
    0.013592187 = product of:
      0.05436875 = sum of:
        0.05436875 = sum of:
          0.013881164 = weight(_text_:a in 3694) [ClassicSimilarity], result of:
            0.013881164 = score(doc=3694,freq=20.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.28200063 = fieldWeight in 3694, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3694)
          0.040487584 = weight(_text_:22 in 3694) [ClassicSimilarity], result of:
            0.040487584 = score(doc=3694,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.2708308 = fieldWeight in 3694, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3694)
      0.25 = coord(1/4)
    
    Abstract
    A method to develop a prototype domain ontology has been described. The domain selected for the study is Accelerator Driven Systems. This is a multidisciplinary and interdisciplinary subject comprising Nuclear Physics, Nuclear and Reactor Engineering, Reactor Fuels and Radioactive Waste Management. Since Accelerator Driven Systems is a vast topic, select areas in it were singled out for the study. Both qualitative and quantitative methods such as Content analysis, Facet analysis and Clustering were used, to develop the web-based model.
    Date
    22. 7.2010 19:41:16
    Type
    a
  20. Pfeiffer, S.: Entwicklung einer Ontologie für die wissensbasierte Erschließung des ISDC-Repository und die Visualisierung kontextrelevanter semantischer Zusammenhänge (2010) 0.01
    0.013437194 = product of:
      0.053748775 = sum of:
        0.053748775 = weight(_text_:da in 4658) [ClassicSimilarity], result of:
          0.053748775 = score(doc=4658,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.26240006 = fieldWeight in 4658, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4658)
      0.25 = coord(1/4)
    
    Abstract
    Today, information of every kind is accessible to a broad public via the World Wide Web (WWW). It is difficult, however, to prepare the existing documents in such a way that their content can be interpreted by machines. The Semantic Web, a further development of the WWW, aims to change this by offering web content in machine-understandable formats. This allows automated processes to be used for query optimization and for interlinking knowledge bases. The Web Ontology Language (OWL) is one possible language in which knowledge can be described and stored (see chapter 4, OWL). The software product Protégé supports the OWL standard, which is why most of the modeling work was carried out in Protégé. At present, when searching for information on the Internet, users are in most cases supported only by keyword indexing of document content carried out by search engine operators, i.e. documents can only be searched for a particular word or phrase. The resulting list of search results must then be reviewed and ranked by relevance by the users themselves, which can be a very time-consuming and labor-intensive process. This is exactly where the Semantic Web can make a substantial contribution to preparing information for the user, since the output of search results has already undergone semantic checking and linking. Irrelevant information sources are thus excluded from the output from the outset, which speeds up finding the documents and information sought in a given field of knowledge.
    Various approaches are being pursued to improve the interlinking of data, information, and knowledge on the WWW. Besides the Semantic Web in its various forms, there are other ideas and concepts that support the linking of knowledge. Forums, social networks, and wikis are one way of exchanging knowledge. In wikis, knowledge is bundled in the form of articles in order to make it available to a broad audience; information offered there should, however, be questioned critically, since in most cases the authors of the articles do not have to take responsibility for the content they publish. The Web of Linked Data offers another way of interlinking knowledge: structured data on the WWW are connected with one another through references to other data sources, so that in the course of a search the user is directed to thematically related and linked data sources. In this thesis, the geoscientific metadata, with their contents and mutual relationships, that are stored at the GFZ in, among other places, the Information System and Data Center (ISDC), are to be modeled as an ontology using the language constructs of OWL. This ontology is intended to decisively improve the representation and retrieval of ISDC-specific domain knowledge through the semantic interlinking of persistent ISDC metadata. The modeling options shown in this work, first with the Extensible Markup Language (XML) and later with OWL, map the existing metadata holdings onto a semantic level (see figure 2). Through the defined use of the semantics available in OWL, machines can derive added value from the metadata and make it available to the user. Geoscientific information, data, and knowledge can be placed in semantic contexts and represented comprehensibly. Supporting information can also be integrated into the ontology without difficulty, for example images of the instruments, platforms, or persons recorded in the ISDC. Queries about geoscientific phenomena can be posed and answered even without expert knowledge of the relevant relationships and terms. Information research and preparation gain in quality and make full use of the existing resources.

Languages

  • e 439
  • d 94
  • pt 5
  • el 1
  • f 1
  • sp 1

Types

  • a 419
  • el 145
  • m 24
  • x 24
  • n 13
  • s 11
  • p 5
  • r 5
  • A 1
  • EL 1
