Search (114 results, page 1 of 6)

  • × theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.15
    0.14634888 = product of:
      0.29269776 = sum of:
        0.07317444 = product of:
          0.21952331 = sum of:
            0.21952331 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.21952331 = score(doc=400,freq=2.0), product of:
                0.39059833 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046071928 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.21952331 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.21952331 = score(doc=400,freq=2.0), product of:
            0.39059833 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046071928 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.5 = coord(2/4)
    
    Content
Cf.: https://aclanthology.org/D19-5317.pdf.
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.13
    0.12787576 = product of:
      0.25575152 = sum of:
        0.04878296 = product of:
          0.14634888 = sum of:
            0.14634888 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.14634888 = score(doc=5820,freq=2.0), product of:
                0.39059833 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046071928 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.20696856 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.20696856 = score(doc=5820,freq=4.0), product of:
            0.39059833 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046071928 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.5 = coord(2/4)
    
    Content
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    0.09756592 = product of:
      0.19513184 = sum of:
        0.04878296 = product of:
          0.14634888 = sum of:
            0.14634888 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.14634888 = score(doc=701,freq=2.0), product of:
                0.39059833 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046071928 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.14634888 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.14634888 = score(doc=701,freq=2.0), product of:
            0.39059833 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046071928 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.5 = coord(2/4)
    
    Content
Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Si, L.; Zhou, J.: Ontology and linked data of Chinese great sites information resources from users' perspective (2022) 0.05
    0.052165512 = product of:
      0.20866205 = sum of:
        0.20866205 = weight(_text_:sites in 1115) [ClassicSimilarity], result of:
          0.20866205 = score(doc=1115,freq=18.0), product of:
            0.2408473 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046071928 = queryNorm
            0.86636657 = fieldWeight in 1115, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1115)
      0.25 = coord(1/4)
    
    Abstract
Great Sites are closely related to residents' lives and to urban and rural development. In the process of rapid urbanization in China, the protection and utilization of Great Sites are facing unprecedented pressure. Effective knowledge organization of Great Sites with ontologies and linked data is a prerequisite for their protection and utilization. In this paper, an interview study is conducted to understand users' awareness of Great Sites in order to build a user-centered ontology. In designing the Great Site ontology, first, the scope of Great Sites is determined. Second, CIDOC-CRM and the OWL-Time Ontology are reused, combining the results of literature research and user interviews. Third, the top-level structure and the specific instances are determined to extract knowledge concepts of Great Sites. Fourth, these are transformed into classes, data properties and object properties of the Great Site ontology. Finally, based on linked data technology and taking the Great Sites in the Xi'an area as an example, this paper uses D2RQ to publish the linked data set of Great Site knowledge and to realize its opening and sharing. Semantic services such as semantic annotation, semantic retrieval and reasoning are provided on the basis of the ontology.
  5. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.03
    0.033166822 = product of:
      0.13266729 = sum of:
        0.13266729 = sum of:
          0.10769884 = weight(_text_:design in 179) [ClassicSimilarity], result of:
            0.10769884 = score(doc=179,freq=28.0), product of:
              0.17322445 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046071928 = queryNorm
              0.62173 = fieldWeight in 179, product of:
                5.2915025 = tf(freq=28.0), with freq of:
                  28.0 = termFreq=28.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
          0.024968442 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
            0.024968442 = score(doc=179,freq=2.0), product of:
              0.16133605 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046071928 = queryNorm
              0.15476047 = fieldWeight in 179, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
      0.25 = coord(1/4)
    
    Abstract
Purpose: The open science movement calls for transparent and retraceable research processes. Infrastructures to support these practices in qualitative research are lacking, and their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations.
Design/methodology/approach: This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science by making qualitative research more transparent and by enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews.
Findings: The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles to this approach: first, contradictory answers from interviewees, which need to be balanced; second, the approach takes more time owing to interview planning and analysis.
Practical implications: In the long run, the implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties on several levels.
Originality/value: In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design, relying mainly on interviews, for creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  6. Garshol, L.M.: Metadata? Thesauri? Taxonomies? Topic Maps! : making sense of it all (2005) 0.03
    0.029509272 = product of:
      0.11803709 = sum of:
        0.11803709 = weight(_text_:sites in 4729) [ClassicSimilarity], result of:
          0.11803709 = score(doc=4729,freq=4.0), product of:
            0.2408473 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046071928 = queryNorm
            0.49009097 = fieldWeight in 4729, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046875 = fieldNorm(doc=4729)
      0.25 = coord(1/4)
    
    Abstract
    The task of an information architect is to create web sites where users can actually find the information they are looking for. As the ocean of information rises and leaves what we seek ever more deeply buried in what we don't seek, this discipline becomes ever more relevant. Information architecture involves many different aspects of web site creation and organization, but its principal tools are information organization techniques developed in other disciplines. Most of these techniques come from library science, such as thesauri, taxonomies, and faceted classification. Topic maps are a relative newcomer to this area and bring with them the promise of better-organized web sites, compared to what is possible with existing techniques. However, it is not generally understood how topic maps relate to the traditional techniques, and what advantages and disadvantages they have, compared to these techniques. The aim of this paper is to help build a better understanding of these issues.
  7. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.03
    0.028506393 = product of:
      0.11402557 = sum of:
        0.11402557 = sum of:
          0.061059505 = weight(_text_:design in 3355) [ClassicSimilarity], result of:
            0.061059505 = score(doc=3355,freq=4.0), product of:
              0.17322445 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046071928 = queryNorm
              0.3524878 = fieldWeight in 3355, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
          0.052966066 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
            0.052966066 = score(doc=3355,freq=4.0), product of:
              0.16133605 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046071928 = queryNorm
              0.32829654 = fieldWeight in 3355, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
      0.25 = coord(1/4)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    LCSH
    Graph design
    Subject
    Graph design
  8. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.02
    0.024628041 = product of:
      0.098512165 = sum of:
        0.098512165 = sum of:
          0.061059505 = weight(_text_:design in 4820) [ClassicSimilarity], result of:
            0.061059505 = score(doc=4820,freq=4.0), product of:
              0.17322445 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046071928 = queryNorm
              0.3524878 = fieldWeight in 4820, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046875 = fieldNorm(doc=4820)
          0.03745266 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
            0.03745266 = score(doc=4820,freq=2.0), product of:
              0.16133605 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046071928 = queryNorm
              0.23214069 = fieldWeight in 4820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4820)
      0.25 = coord(1/4)
    
    Abstract
One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such a specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  9. Priss, U.: Description logic and faceted knowledge representation (1999) 0.02
    0.024628041 = product of:
      0.098512165 = sum of:
        0.098512165 = sum of:
          0.061059505 = weight(_text_:design in 2655) [ClassicSimilarity], result of:
            0.061059505 = score(doc=2655,freq=4.0), product of:
              0.17322445 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046071928 = queryNorm
              0.3524878 = fieldWeight in 2655, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
          0.03745266 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
            0.03745266 = score(doc=2655,freq=2.0), product of:
              0.16133605 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046071928 = queryNorm
              0.23214069 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
      0.25 = coord(1/4)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  10. Seidlmayer, E.: ¬An ontology of digital objects in philosophy : an approach for practical use in research (2018) 0.02
    0.024343908 = product of:
      0.09737563 = sum of:
        0.09737563 = weight(_text_:sites in 5496) [ClassicSimilarity], result of:
          0.09737563 = score(doc=5496,freq=2.0), product of:
            0.2408473 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046071928 = queryNorm
            0.40430441 = fieldWeight in 5496, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5496)
      0.25 = coord(1/4)
    
    Abstract
The digitalization of research enables new scientific insights and methods, especially in the humanities. Nonetheless, electronic book editions, encyclopedias, mobile applications or web sites presenting research projects are not in broad use in academic philosophy. This stands in contrast to the large number of helpful tools that facilitate research and also open up new scientific subjects and approaches. A possible solution to this dilemma is the systematization and promotion of these tools in order to improve their accessibility and to fully exploit the potential of digitalization for philosophy.
  11. Branch, F.; Arias, T.; Kennah, J.; Phillips, R.; Windleharth, T.; Lee, J.H.: Representing transmedia fictional worlds through ontology (2017) 0.02
    0.017388504 = product of:
      0.069554016 = sum of:
        0.069554016 = weight(_text_:sites in 3958) [ClassicSimilarity], result of:
          0.069554016 = score(doc=3958,freq=2.0), product of:
            0.2408473 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046071928 = queryNorm
            0.28878886 = fieldWeight in 3958, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3958)
      0.25 = coord(1/4)
    
    Abstract
Currently, there is no structured data standard for representing elements commonly found in transmedia fictional worlds. Although there are websites dedicated to individual universes, the information found on these sites separates out the various formats, concentrates only on the bibliographic aspects of the material, and is only searchable via full text. We have created an ontological model that allows various user groups interested in transmedia to search for and retrieve the information contained in these worlds based upon their structure. We conducted a domain analysis and user studies based on the contents of Harry Potter, Lord of the Rings, the Marvel Universe, and Star Wars in order to build a new model using the Web Ontology Language (OWL) and an artificial intelligence reasoning engine. This model can infer connections between transmedia properties such as characters, elements of power, items, places, events, and so on. It will facilitate better search and retrieval of the information contained within these vast story universes for all users interested in them. The result of this project is an OWL ontology reflecting real user needs based upon user research, which is intuitive for users and can be used by artificial intelligence systems.
  12. Kiren, T.; Shoaib, M.: ¬A novel ontology matching approach using key concepts (2016) 0.02
    0.016797554 = product of:
      0.067190215 = sum of:
        0.067190215 = sum of:
          0.03597966 = weight(_text_:design in 2589) [ClassicSimilarity], result of:
            0.03597966 = score(doc=2589,freq=2.0), product of:
              0.17322445 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046071928 = queryNorm
              0.20770542 = fieldWeight in 2589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2589)
          0.031210553 = weight(_text_:22 in 2589) [ClassicSimilarity], result of:
            0.031210553 = score(doc=2589,freq=2.0), product of:
              0.16133605 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046071928 = queryNorm
              0.19345059 = fieldWeight in 2589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2589)
      0.25 = coord(1/4)
    
    Abstract
Purpose: Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching of heterogeneous ontologies is often essential for many applications like semantic annotation, query answering or ontology integration. Some ontologies may include a large number of entities, which makes the ontology matching process very complex in terms of search space and execution time requirements. The purpose of this paper is to present a technique for finding the degree of similarity between ontologies that trims down the search space by eliminating the ontology concepts that have less likelihood of being matched.
Design/methodology/approach: Algorithms are written for finding key concepts, concept matching and relationship matching. WordNet is used for solving synonym problems during the matching process. The technique is evaluated using the reference alignments between ontologies from the Ontology Alignment Evaluation Initiative benchmark in terms of degree of similarity, Pearson's correlation coefficient and the IR measures precision, recall and F-measure.
Findings: The positive correlation between the computed degree of similarity and the degree of similarity of the reference alignment, together with the computed values of precision, recall and F-measure, showed that if only key concepts of ontologies are compared, a time- and search-space-efficient ontology matching system can be developed.
Originality/value: On the basis of the present novel approach for ontology matching, it is concluded that using key concepts for ontology matching gives comparable results in reduced time and space.
    Date
    20. 1.2015 18:30:22
  13. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.02
    0.016797554 = product of:
      0.067190215 = sum of:
        0.067190215 = sum of:
          0.03597966 = weight(_text_:design in 106) [ClassicSimilarity], result of:
            0.03597966 = score(doc=106,freq=2.0), product of:
              0.17322445 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046071928 = queryNorm
              0.20770542 = fieldWeight in 106, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.0390625 = fieldNorm(doc=106)
          0.031210553 = weight(_text_:22 in 106) [ClassicSimilarity], result of:
            0.031210553 = score(doc=106,freq=2.0), product of:
              0.16133605 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046071928 = queryNorm
              0.19345059 = fieldWeight in 106, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=106)
      0.25 = coord(1/4)
    
    Abstract
Purpose: The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions.
Design/methodology/approach: This paper uses conceptual analysis methods. The study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions.
Findings: Vocabularies are the cornerstone for accurately building an understanding of the meaning of data. Vocabularies provide a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage in KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data.
Originality/value: This paper first describes the composition of vocabularies, linked data and KGs. More importantly, it innovatively analyzes and summarizes the interrelatedness of these factors, which arises from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
  14. Lange, C.: Ontologies and languages for representing mathematical knowledge on the Semantic Web (2011) 0.01
    0.013910804 = product of:
      0.055643216 = sum of:
        0.055643216 = weight(_text_:sites in 135) [ClassicSimilarity], result of:
          0.055643216 = score(doc=135,freq=2.0), product of:
            0.2408473 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046071928 = queryNorm
            0.23103109 = fieldWeight in 135, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.03125 = fieldNorm(doc=135)
      0.25 = coord(1/4)
    
    Content
Cf.: http://www.semantic-web-journal.net/content/ontologies-and-languages-representing-mathematical-knowledge-semantic-web and http://www.semantic-web-journal.net/sites/default/files/swj122_2.pdf.
  15. Eito-Brun, R.: Ontologies and the exchange of technical information : building a knowledge repository based on ECSS standards (2014) 0.01
    0.013438042 = product of:
      0.05375217 = sum of:
        0.05375217 = sum of:
          0.028783726 = weight(_text_:design in 1436) [ClassicSimilarity], result of:
            0.028783726 = score(doc=1436,freq=2.0), product of:
              0.17322445 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046071928 = queryNorm
              0.16616434 = fieldWeight in 1436, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.03125 = fieldNorm(doc=1436)
          0.024968442 = weight(_text_:22 in 1436) [ClassicSimilarity], result of:
            0.024968442 = score(doc=1436,freq=2.0), product of:
              0.16133605 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046071928 = queryNorm
              0.15476047 = fieldWeight in 1436, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1436)
      0.25 = coord(1/4)
    
    Abstract
The development of complex projects in the aerospace industry is based on the collaboration of geographically distributed teams and companies. In this context, the need to share different types of data and information is a key factor in assuring the successful execution of the projects. In the case of European projects, the ECSS standards provide a normative framework that specifies, among other requirements, the different document types, information items and artifacts that need to be generated. The specifications of the characteristics of these information items are usually incorporated as annexes to the different ECSS standards, and they provide the intended purpose, scope, and structure of the documents and information items. In these standards, documents or deliverables should not be considered independent items, but rather the results of packaging different information artifacts for their delivery between the involved parties. Successful information integration and knowledge exchange cannot be based exclusively on the conceptual definition of information types; it also requires the definition of methods and techniques for serializing and exchanging these documents and artifacts. This area is not covered by the ECSS standards, and the definition of such data schemas would create opportunities for improving collaboration processes among companies. This paper describes the development of an OWL-based ontology to manage the different artifacts and information items requested in the European Space Agency (ESA) ECSS standards for software development. The ECSS set of standards is the main reference for aerospace projects in Europe, and in addition to engineering and managerial requirements it provides a set of DRD (Document Requirements Documents) with the structure of the different documents and records necessary to manage projects and describe intermediate information products and final deliverables. Information integration is a must-have in aerospace projects, where different players need to collaborate and share data about requirements, design elements, problems, etc. throughout the product life cycle. The proposed ontology provides the basis for building advanced information systems in which the information coming from different companies and institutions can be integrated into a coherent set of related data. It also provides a conceptual framework to enable the development of interfaces and gateways between the different tools and information systems used by the different players in aerospace projects.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  16. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.01
    0.013438042 = product of:
      0.05375217 = sum of:
        0.05375217 = sum of:
          0.028783726 = weight(_text_:design in 1634) [ClassicSimilarity], result of:
            0.028783726 = score(doc=1634,freq=2.0), product of:
              0.17322445 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046071928 = queryNorm
              0.16616434 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
          0.024968442 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.024968442 = score(doc=1634,freq=2.0), product of:
              0.16133605 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046071928 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
      0.25 = coord(1/4)
    
    Abstract
Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations.
Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies.
Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies.
Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research.
Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results.
Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  17. Park, O.n.: Opening ontology design : a study of the implications of knowledge organization for ontology design (2008) 0.01
    0.01321977 = product of:
      0.05287908 = sum of:
        0.05287908 = product of:
          0.10575816 = sum of:
            0.10575816 = weight(_text_:design in 2489) [ClassicSimilarity], result of:
              0.10575816 = score(doc=2489,freq=12.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.61052674 = fieldWeight in 2489, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2489)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
It is proposed that sufficient research into ontology design has not been achieved and that this deficiency has led to the insufficiency of ontologies in reinforcing their communication frameworks, knowledge sharing and re-use applications. In order to diagnose the problems of ontology research, I first survey the notion of ontology in the context of ontology design, based on a means-ends tool provided by Cognitive Work Analysis. The potential contributions of knowledge organization in library and information science to overcoming the limitations of ontology research are then demonstrated. I propose a context-centered view as an approach for ontology design, and present faceted classification as an appropriate method for structuring ontologies. In addition, I provide a case study of a wine ontology in order to demonstrate how knowledge organization approaches in library and information science can improve ontology design.
  18. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.01
    0.0117582865 = product of:
      0.047033146 = sum of:
        0.047033146 = sum of:
          0.025185758 = weight(_text_:design in 1633) [ClassicSimilarity], result of:
            0.025185758 = score(doc=1633,freq=2.0), product of:
              0.17322445 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046071928 = queryNorm
              0.14539379 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
          0.021847386 = weight(_text_:22 in 1633) [ClassicSimilarity], result of:
            0.021847386 = score(doc=1633,freq=2.0), product of:
              0.16133605 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046071928 = queryNorm
              0.1354154 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
      0.25 = coord(1/4)
    
    Abstract
Purpose - The purpose of this paper is to improve concept-based search by incorporating structural ontological information such as concepts and relations. Generally, semantic information retrieval aims to identify relevant information based on the meanings of the query terms or on the context of the terms, and its performance is assessed through the standard measures of precision and recall. Higher precision means more of the retrieved documents are (meaningfully) relevant, while lower recall means less coverage of the concepts.
Design/methodology/approach - In this paper, the authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index. The index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, the authors focus on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of ontology-based search. These tasks make use of ontological concepts and the relations existing between those concepts so as to obtain semantically more relevant search results for a given query.
Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of the concepts populated in the index. A new measure, called the index enhancement measure, is introduced to estimate the coverage of the ontological concepts being indexed. Ontology-based search is evaluated for the tourism domain with tourism documents and a tourism-specific ontology. Search results with and without query expansion are compared to estimate the efficiency of the proposed query expansion task, and the ranking is compared with the ORank system to evaluate the performance of the ontology-based search. From these analyses, the ontology-based search shows better recall than the other concept-based search systems: its mean average precision is 0.79 and its recall 0.65, the ORank system has a mean average precision of 0.62 and a recall of 0.51, while the concept-based search has a mean average precision of 0.56 and a recall of 0.42.
Practical implications - When a concept is not present in the domain-specific ontology, it cannot be indexed. When a given query term is not available in the ontology, term-based results are retrieved.
Originality/value - In addition to super- and sub-concepts, the concepts present at the same level (siblings) are incorporated into the ontological index. The structural information from the ontology is used for query expansion. The ranking of the documents depends on the type of the query (single-concept queries, multiple-concept queries and concept-with-relation queries) and the ontological relations that exist in the query and the documents. With this ontological structural information, the search results showed better coverage of concepts with respect to the query.
    Date
    20. 1.2015 18:30:22
  19. Khalifa, M.; Shen, K.N.: Applying semantic networks to hypertext design : effects on knowledge structure acquisition and problem solving (2010) 0.01
    0.009347789 = product of:
      0.037391156 = sum of:
        0.037391156 = product of:
          0.07478231 = sum of:
            0.07478231 = weight(_text_:design in 3708) [ClassicSimilarity], result of:
              0.07478231 = score(doc=3708,freq=6.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.43170762 = fieldWeight in 3708, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3708)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    One of the key objectives of knowledge management is to transfer knowledge quickly and efficiently from experts to novices, who are different in terms of the structural properties of domain knowledge or knowledge structure. This study applies experts' semantic networks to hypertext navigation design and examines the potential of the resulting design, i.e., semantic hypertext, in facilitating knowledge structure acquisition and problem solving. Moreover, we argue that the level of sophistication of the knowledge structure acquired by learners is an important mediator influencing the learning outcomes (in this case, problem solving). The research model was empirically tested with a situated experiment involving 80 business professionals. The results of the empirical study provided strong support for the effectiveness of semantic hypertext in transferring knowledge structure and reported a significant full mediating effect of knowledge structure sophistication. Both theoretical and practical implications of this research are discussed.
  20. Quillian, M.R.: Word concepts : a theory and simulation of some basic semantic capabilities. (1967) 0.01
    0.009347789 = product of:
      0.037391156 = sum of:
        0.037391156 = product of:
          0.07478231 = sum of:
            0.07478231 = weight(_text_:design in 4414) [ClassicSimilarity], result of:
              0.07478231 = score(doc=4414,freq=6.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.43170762 = fieldWeight in 4414, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4414)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In order to discover design principles for a large memory that can enable it to serve as the base of knowledge underlying human-like language behavior, experiments with a model memory are being performed. This model is built up within a computer by "recoding" a body of information from an ordinary dictionary into a complex network of elements and associations interconnecting them. Then, the ability of a program to use the resulting model memory effectively for simulating human performance provides a test of its design. One simulation program, now running, is given the model memory and is required to compare and contrast the meanings of arbitrary pairs of English words. For each pair, the program locates any relevant semantic information within the model memory, draws inferences on the basis of this, and thereby discovers various relationships between the meanings of the two words. Finally, it creates English text to express its conclusions. The design principles embodied in the memory model, together with some of the methods used by the program, constitute a theory of how human memory for semantic and other conceptual material may be formatted, organized, and used.
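The relevance score shown beside each result is produced by Lucene's classic TF-IDF similarity, as the score breakdowns above spell out. As a reading aid only, the following minimal sketch (ours, not part of the search engine's output; the variable names are assumptions) reproduces the arithmetic for result 1 (doc 400) from the values printed in its breakdown.

```python
# Minimal illustrative sketch of Lucene ClassicSimilarity scoring,
# using only the numbers printed in the breakdown of result 1 (doc 400).
# Variable names are ours; they do not come from the search page.

idf = 8.478011            # idf(docFreq=24, maxDocs=44218)
query_norm = 0.046071928  # queryNorm
field_norm = 0.046875     # fieldNorm(doc=400)
tf = 2.0 ** 0.5           # tf(freq=2.0) = sqrt(freq) ~= 1.4142135

query_weight = idf * query_norm            # ~= 0.39059833
field_weight = tf * idf * field_norm       # ~= 0.56201804
term_score = query_weight * field_weight   # ~= 0.21952331 per matching term

# The "_text_:3a" contribution is scaled by coord(1/3) before the two
# term scores are summed; the sum is then scaled by coord(2/4) = 0.5.
score = (term_score * (1 / 3) + term_score) * 0.5
print(f"{score:.8f}")  # ~= 0.14634888, displayed in the result list as 0.15
```

The other breakdowns follow the same pattern: each matching term contributes queryWeight x fieldWeight, and the coord factors scale contributions by the fraction of query clauses matched, so the top-ranked records are simply those where rare query terms (high idf) occur in short fields.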

Languages

  • e 100
  • d 12
  • pt 1

Types

  • a 84
  • el 26
  • x 9
  • m 5
  • n 2
  • r 1
  • s 1