Search (116 results, page 1 of 6)

  • × theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.19
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.18
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.13
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Si, L.; Zhou, J.: Ontology and linked data of Chinese great sites information resources from users' perspective (2022) 0.03
    Abstract
    Great Sites are closely related to residents' lives and to urban and rural development. In the process of rapid urbanization in China, the protection and utilization of Great Sites face unprecedented pressure. Effective knowledge organization of Great Sites with ontology and linked data is a prerequisite for their protection and utilization. In this paper, an interview study is conducted to understand users' awareness of Great Sites and to build a user-centered ontology. The Great Site ontology is designed in four steps: first, the scope of Great Sites is determined; second, CIDOC CRM and the OWL-Time ontology are reused, combining the results of literature research and user interviews; third, the top-level structure and specific instances are determined to extract knowledge concepts of Great Sites; fourth, these are transformed into classes, data properties and object properties of the Great Site ontology. Then, based on linked data technology and taking the Great Sites in the Xi'an area as an example, the paper uses D2RQ to publish a linked data set of Great Site knowledge and make it openly available for sharing. Semantic services such as semantic annotation, semantic retrieval and reasoning are provided based on the ontology.
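The ontology-and-linked-data workflow in entry 4 can be sketched in a few triples. A minimal illustration in Python with rdflib; the namespace, class and property names are hypothetical stand-ins, not the paper's actual model (which reuses CIDOC CRM and the OWL-Time ontology), and D2RQ publication is replaced here by plain Turtle serialization:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/greatsite#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Schema layer: a class and a data property (illustrative names)
g.add((EX.GreatSite, RDF.type, OWL.Class))
g.add((EX.siteName, RDF.type, OWL.DatatypeProperty))
g.add((EX.siteName, RDFS.domain, EX.GreatSite))

# Data layer: one instance, e.g. a site in the Xi'an area
g.add((EX.DamingPalace, RDF.type, EX.GreatSite))
g.add((EX.DamingPalace, EX.siteName, Literal("Daming Palace")))

print(g.serialize(format="turtle"))  # ready for publication as linked data
```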
  5. Crystal, D.: Semantic targeting : past, present, and future (2010) 0.02
    Abstract
    Purpose - This paper seeks to explicate the notion of "semantics", especially as it is being used in the context of the internet in general and advertising in particular. Design/methodology/approach - The conception of semantics as it evolved within linguistics is placed in its historical context. In the field of online advertising, the paper shows the limitations of keyword-based approaches and of those where a limited amount of context is taken into account (contextual advertising). A more sophisticated notion of semantic targeting is explained, in which the whole page is taken into account in arriving at a semantic categorization. This is achieved through a combination of lexicological analysis and a purpose-built semantic taxonomy. Findings - The combination of a lexical analysis (derived from a dictionary) and a taxonomy (derived from a general encyclopedia, and subsequently refined) resulted in the construction of a "sense engine", which was then applied to online advertising. Examples of the application illustrate how relevance and sensitivity (brand protection) of ad placement can be improved. Several areas of potential further application are outlined. Originality/value - This is the first systematic application of linguistics to provide a solution to the problem of inappropriate ad placement online.
    Footnote
    Contribution in a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO).
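The "sense engine" in entry 5 pairs a lexicon with a semantic taxonomy so that the whole page, not a single keyword, determines the ad category. A toy sketch of that idea; the categories and lexicon entries are invented for illustration:

```python
# Toy "sense engine": score a page against a small semantic taxonomy by
# counting lexicon hits per category (illustrative data, not Crystal's).
LEXICON = {
    "travel": {"flight", "hotel", "airport", "itinerary"},
    "finance": {"mortgage", "loan", "interest", "credit"},
}

def categorize(page_text: str) -> str:
    words = set(page_text.lower().split())
    hits = {category: len(words & terms) for category, terms in LEXICON.items()}
    return max(hits, key=hits.get)  # best-scoring category wins

# Whole-page context, not a single keyword, decides the placement:
print(categorize("compare mortgage and loan interest rates at your local bank"))
```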
  6. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.02
    Abstract
    Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language-processing (NLP) technique to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture of Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
    Footnote
    Contribution in a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO).
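The rule-based extraction in entry 6 is written in JAPE, GATE's own pattern grammar. As a rough Python analogue (not GATE itself; the patterns and finds vocabulary are invented), a rule annotating the two CRM concepts named in the abstract might look like:

```python
import re

# Rough analogue of a JAPE-style annotation rule; patterns and the
# archaeological vocabulary are illustrative only.
TIME_APPELLATION = re.compile(r"\b\d{1,2}(?:st|nd|rd|th)\s+century\b", re.I)
PHYSICAL_OBJECT = re.compile(r"\b(?:pottery|coin|brooch|sherd)s?\b", re.I)

text = "A pit containing pottery sherds dated to the 4th century AD."
annotations = (
    [("E49.Time Appellation", m.group(0)) for m in TIME_APPELLATION.finditer(text)]
    + [("P19.Physical Object", m.group(0)) for m in PHYSICAL_OBJECT.finditer(text)]
)
print(annotations)
# [('E49.Time Appellation', '4th century'), ('P19.Physical Object', 'pottery'),
#  ('P19.Physical Object', 'sherds')]
```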
  7. Ibekwe-SanJuan, F.: Semantic metadata annotation : tagging Medline abstracts for enhanced information access (2010) 0.02
    Abstract
    Purpose - The object of this study is to develop methods for automatically annotating the argumentative role of sentences in scientific abstracts. Working from Medline abstracts, sentences were classified into four major argumentative roles: objective, method, result, and conclusion. The idea is that, if the role of each sentence can be marked up, then these metadata can be used during information retrieval to seek particular types of information such as novelty, conclusions, methodologies, or the aims/goals of a scientific piece of work. Design/methodology/approach - Two approaches were tested: linguistic cues and positional heuristics. Linguistic cues are lexico-syntactic patterns modelled as regular expressions implemented in a linguistic parser. Positional heuristics make use of the relative position of a sentence in the abstract to deduce its argumentative class. Findings - The experiments showed that positional heuristics attained a much higher degree of accuracy on Medline abstracts, with an F-score of 64 per cent, whereas the linguistic cues only attained an F-score of 12 per cent. This is mostly because sentences from different argumentative roles are not always announced by surface linguistic cues. Research limitations/implications - A limitation of the study was the inability to test other methods for this task, such as machine learning techniques, which have been reported to perform better on Medline abstracts. Also, to compare the results of the study with earlier studies using Medline abstracts, the different argumentative roles present in Medline had to be mapped onto four major argumentative roles. This may have favourably biased the performance of the sentence classification by positional heuristics. Originality/value - To the best of one's knowledge, this study presents the first instance of evaluating linguistic cues and positional heuristics on the same corpus.
    Footnote
    Contribution in a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO).
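The positional heuristic in entry 7 fits in a few lines. A minimal sketch; the position thresholds are assumptions for illustration, since the abstract does not state them:

```python
def positional_role(index: int, total: int) -> str:
    """Assign an argumentative role to a sentence from its relative
    position in the abstract (thresholds assumed for illustration)."""
    pos = index / max(total - 1, 1)  # 0.0 = first sentence, 1.0 = last
    if pos < 0.25:
        return "objective"
    if pos < 0.5:
        return "method"
    if pos < 0.8:
        return "result"
    return "conclusion"

abstract = [
    "We aim to classify sentences by argumentative role.",
    "Sentences were labelled using their relative position.",
    "The heuristic reached an F-score of 64 per cent.",
    "Position alone is a strong cue in Medline abstracts.",
]
for i, sentence in enumerate(abstract):
    print(f"{positional_role(i, len(abstract)):<10} {sentence}")
```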
  8. Boteram, F.: "Content architecture" : semantic interoperability in an international comprehensive knowledge organisation system (2010) 0.02
    Abstract
    Purpose - This paper seeks to develop a specified typology of various levels of semantic interoperability, designed to provide semantically expressive and functional means to interconnect typologically different sub-systems in an international comprehensive knowledge organization system, supporting advanced information retrieval and exploration strategies. Design/methodology/approach - Taking the analysis of rudimentary forms of functional interoperability based on simple pattern matching as a starting-point, more refined strategies are developed for providing semantic interoperability that actually reaches the conceptual and even thematic level. The paper also examines the potential benefits and perspectives of the selective transfer of modelling strategies from the field of semantic technologies for the refinement of relational structures of inter-system and inter-concept relations, as a requirement for expressive and functional indexing languages supporting advanced types of semantic interoperability. Findings - As the principles and strategies of advanced information retrieval systems largely depend on semantic information, new concepts and strategies to achieve semantic interoperability have to be developed. Research limitations/implications - The approach has been developed in the functional and structural context of an international comprehensive system integrating several heterogeneous knowledge organization systems and indexing languages by interconnecting them to a central conceptual structure that operates as a spine in an overall system designed to support retrieval and exploration of bibliographic records representing complex conceptual entities. Originality/value - Research and development aimed at providing technical and structural interoperability has to be complemented by a thorough and precise reflection on, and definition of, the various degrees and types of interoperability on the semantic level as well. The approach specifies these levels and reflects on the implications and their potential for advanced strategies of retrieval and exploration.
    Footnote
    Contribution in a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO).
  9. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.02
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science, making qualitative research more transparent and enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers by the interviewees, which need to be balanced; second, this approach takes more time due to interview planning and analysis. Practical implications The implication of the paper is, in the long run, to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value In ontology design several methods exist, using user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design using mainly interviews in creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  10. Garshol, L.M.: Metadata? Thesauri? Taxonomies? Topic Maps! : making sense of it all (2005) 0.01
    Abstract
    The task of an information architect is to create web sites where users can actually find the information they are looking for. As the ocean of information rises and leaves what we seek ever more deeply buried in what we don't seek, this discipline becomes ever more relevant. Information architecture involves many different aspects of web site creation and organization, but its principal tools are information organization techniques developed in other disciplines. Most of these techniques come from library science, such as thesauri, taxonomies, and faceted classification. Topic maps are a relative newcomer to this area and bring with them the promise of better-organized web sites, compared to what is possible with existing techniques. However, it is not generally understood how topic maps relate to the traditional techniques, and what advantages and disadvantages they have, compared to these techniques. The aim of this paper is to help build a better understanding of these issues.
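The core data model behind entry 10's topic maps can be pictured with plain data structures: topics, typed associations between them, and occurrences pointing to resources. A toy sketch (the real Topic Maps standard, ISO/IEC 13250, defines far more, including the XTM syntax):

```python
# Core topic map idea in plain Python: topics, typed associations
# between them, and occurrences linking out to resources (toy data).
topics = {
    "tosca":   {"name": "Tosca", "type": "opera"},
    "puccini": {"name": "Giacomo Puccini", "type": "composer"},
}
associations = [
    ("composed-by", "tosca", "puccini"),  # typed, not merely "related"
]
occurrences = {
    "puccini": [("article", "https://example.org/puccini")],
}

# Navigation follows associations, so users find subjects, not pages:
for assoc_type, a, b in associations:
    print(f"{topics[a]['name']} --{assoc_type}--> {topics[b]['name']}")
```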
  11. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    LCSH
    Graph design
    Subject
    Graph design
  12. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.01
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such a specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  13. Priss, U.: Description logic and faceted knowledge representation (1999) 0.01
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  14. Seidlmayer, E.: ¬An ontology of digital objects in philosophy : an approach for practical use in research (2018) 0.01
    Abstract
    The digitalization of research enables new scientific insights and methods, especially in the humanities. Nonetheless, electronic book editions, encyclopedias, mobile applications and web sites presenting research projects are not in broad use in academic philosophy. This stands in contrast to the large number of helpful tools facilitating research, which also open up new scientific subjects and approaches. A possible solution to this dilemma is the systematization and promotion of these tools in order to improve their accessibility and fully exploit the potential of digitalization for philosophy.
  15. Giunchiglia, F.; Dutta, B.; Maltese, V.: From knowledge organization to knowledge representation (2014) 0.01
    Content
    Papers from the ISKO-UK Biennial Conference, "Knowledge Organization: Pushing the Boundaries," London, United Kingdom, 8-9 July 2013.
  16. Amirhosseini, M.: Theoretical base of quantitative evaluation of unity in a thesaurus term network based on Kant's epistemology (2010) 0.01
    Abstract
    The quantitative evaluation of thesauri has developed considerably since 1976. This type of evaluation is based on counting particular factors in the thesaurus structure, such as preferred terms, non-preferred terms and cross-reference terms. Various statistical tests have accordingly been proposed and applied for the evaluation of thesauri. In this article, we try to explain some ratios in the field of quantitative evaluation of unity in a thesaurus term network. The theoretical basis of the construction of the ratios' indicators and indices, and the epistemological thought behind this type of quantitative evaluation, are discussed. That theoretical basis is the epistemological thought of Immanuel Kant's Critique of Pure Reason, in which the cognition states of transcendental understanding are divided into three steps: the first is perception, the second combination and the third relation making. Term relation domains and conceptual relation domains can be analyzed with ratios. The use of quantitative evaluations in current research in the field of thesaurus construction prepares the basis for a period of restoration. In modern thesaurus construction, traditional term relations are analyzed in detail in the form of new conceptual relations. Hence, new domains of hierarchical and associative relations are constructed in the form of relations between concepts. The newly formed conceptual domains can be a suitable basis for quantitative evaluation analysis of conceptual relations.
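As an illustration of the kind of structural counting entry 16 builds on, a minimal sketch; the ratio names, formulas and sample counts are invented, and the paper's actual indicators differ:

```python
def structural_ratios(preferred: int, non_preferred: int, cross_refs: int) -> dict:
    """Two illustrative structural ratios of the kind counted in
    quantitative thesaurus evaluation (names and formulas assumed)."""
    return {
        "entry_vocabulary_ratio": non_preferred / preferred,  # lead-in richness
        "cross_reference_density": cross_refs / (preferred + non_preferred),
    }

# Sample counts for a small thesaurus (invented numbers):
print(structural_ratios(preferred=1200, non_preferred=300, cross_refs=450))
# {'entry_vocabulary_ratio': 0.25, 'cross_reference_density': 0.3}
```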
  17. Branch, F.; Arias, T.; Kennah, J.; Phillips, R.; Windleharth, T.; Lee, J.H.: Representing transmedia fictional worlds through ontology (2017) 0.01
    Abstract
    Currently, there is no structured data standard for representing elements commonly found in transmedia fictional worlds. Although there are websites dedicated to individual universes, the information found on these sites separates out the various formats, concentrates only on the bibliographic aspects of the material, and is only searchable with full text. We have created an ontological model that will allow various user groups interested in transmedia to search for and retrieve the information contained in these worlds based upon their structure. We conducted a domain analysis and user studies based on the contents of Harry Potter, Lord of the Rings, the Marvel Universe, and Star Wars in order to build a new model using the Web Ontology Language (OWL) and an artificial intelligence reasoning engine. This model can infer connections between transmedia properties such as characters, elements of power, items, places, events, and so on. This model will facilitate better search and retrieval of the information contained within these vast story universes for all users interested in them. The result of this project is an OWL ontology reflecting real user needs based upon user research, which is intuitive for users and can be used by artificial intelligence systems.
  18. Kiren, T.; Shoaib, M.: ¬A novel ontology matching approach using key concepts (2016) 0.01
    Abstract
    Purpose Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching of heterogeneous ontologies is often essential for many applications like semantic annotation, query answering or ontology integration. Some ontologies may include a large number of entities, which makes the ontology matching process very complex in terms of the search space and execution time requirements. The purpose of this paper is to present a technique for finding the degree of similarity between ontologies that trims down the search space by eliminating the ontology concepts that have little likelihood of being matched. Design/methodology/approach Algorithms are written for finding key concepts, concept matching and relationship matching. WordNet is used for solving synonym problems during the matching process. The technique is evaluated using the reference alignments between ontologies from the Ontology Alignment Evaluation Initiative benchmark, in terms of degree of similarity, Pearson's correlation coefficient and the IR measures precision, recall and F-measure. Findings A positive correlation between the computed degree of similarity and the degree of similarity of the reference alignment, together with the computed values of precision, recall and F-measure, showed that if only the key concepts of ontologies are compared, a time- and space-efficient ontology matching system can be developed. Originality/value On the basis of the present novel approach to ontology matching, it is concluded that using key concepts for ontology matching gives comparable results in reduced time and space.
    Date
    20. 1.2015 18:30:22
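The WordNet-based synonym resolution in entry 18 can be sketched directly with NLTK. A rough illustration of the idea, not the authors' algorithm; function names and sample concepts are invented:

```python
from nltk.corpus import wordnet as wn  # requires nltk with the "wordnet" corpus

def lemma_set(label: str) -> set:
    """All WordNet lemma names for a concept label, lowercased."""
    return {l.name().lower() for s in wn.synsets(label.lower()) for l in s.lemmas()}

def match_key_concepts(keys_a, keys_b):
    """Align only the key concepts of two ontologies (equal labels or
    shared WordNet lemmas), trimming the all-pairs search space."""
    return [(a, b) for a in keys_a for b in keys_b
            if a.lower() == b.lower() or lemma_set(a) & lemma_set(b)]

# "Car" and "Automobile" share a WordNet synset, so they align:
print(match_key_concepts(["Car", "Person"], ["Automobile", "Organization"]))
# [('Car', 'Automobile')]
```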
  19. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.01
    Abstract
    Purpose The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions. Design/methodology/approach This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
  20. Lange, C.: Ontologies and languages for representing mathematical knowledge on the Semantic Web (2011) 0.01
    Content
    Cf.: http://www.semantic-web-journal.net/content/ontologies-and-languages-representing-mathematical-knowledge-semantic-web ; http://www.semantic-web-journal.net/sites/default/files/swj122_2.pdf.

Languages

  • e 102
  • d 12
  • pt 1

Types

  • a 86
  • el 26
  • x 9
  • m 5
  • n 2
  • r 1
  • s 1