Search (132 results, page 1 of 7)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.15
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.13
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Soergel, D.: SemWeb: Proposal for an Open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology : exploration and development of the concept (1996) 0.06
    Abstract
    This paper presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM, and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
    Content
    Expanded version of a paper published in Advances in Knowledge Organization v.5 (1996): 165-173 (4th Annual ISKO Conference, Washington, D.C., 1996 July 15-18): SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology.
  5. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.06
    Abstract
    Purpose: The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations, giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach: This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science: it makes qualitative research more transparent and enhances both the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirements analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings: The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers by the interviewees, which need to be balanced; second, the approach takes more time due to interview planning and analysis. Practical implications: The long-run implication of the paper is to decentralize the design of open science infrastructures and to involve the parties affected on several levels. Originality/value: In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design using mainly interviews to create an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  6. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.05
    Date
    22. 5.2021 12:43:05
    Source
    Open Password. 2021, Nr.925 vom 21.05.2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzI5NSwiZDdlZGY4MTk0NWJhIiwwLDAsMjY1LDFd]
  7. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.05
    Abstract
    Topic Maps are a standardized modelling approach for the semantic annotation and description of WWW resources. They enable improved search and navigational access to information objects stored in semi-structured information spaces like the WWW. However, the corresponding standards ISO 13250 and XTM (XML Topic Maps) lack formal semantics; several questions concerning e.g. subclassing, inheritance or merging of topics are left open. The proposed TMUML meta model, directly derived from the well-known UML meta model, is a meta model for Topic Maps which enables semantic constraints to be formulated in OCL (Object Constraint Language) in order to answer such open questions and overcome possible inconsistencies in Topic Map repositories. We will examine the XTM merging conditions and show, in several examples, how the TMUML meta model enables semantic constraints for Topic Map merging to be formulated in OCL. Finally, we will show how the TM validation process, i.e. checking whether a Topic Map is well formed, includes our merging conditions.
  8. Soergel, D.: SemWeb: proposal for an open, multifunctional, multilingual system for integrated access to knowledge about concepts and terminology (1996) 0.05
    Abstract
    Presents a proposal for the long-range development of an open, multifunctional, multilingual system for integrated access to many kinds of knowledge about concepts and terminology. The system would draw on existing knowledge bases that are accessible through the Internet or on CD-ROM, and on a common integrated distributed knowledge base that would grow incrementally over time. Existing knowledge bases would be accessed through a common interface that would search several knowledge bases, collate the data into a common format, and present them to the user. The common integrated distributed knowledge base would provide an environment in which many contributors could carry out classification and terminological projects more efficiently, with the results available in a common format. Over time, data from other knowledge bases could be incorporated into the common knowledge base, either by actual transfer (provided the knowledge base producers are willing) or by reference through a link. Either way, such incorporation requires intellectual work but allows for tighter integration than common interface access to multiple knowledge bases. Each piece of information in the common knowledge base will have all its sources attached, providing an acknowledgment mechanism that gives due credit to all contributors. The whole system would be designed to be usable by many levels of users for improved information exchange.
  9. Auer, S.; Oelen, A.; Haris, A.M.; Stocker, M.; D'Souza, J.; Farfar, K.E.; Vogt, L.; Prinz, M.; Wiens, V.; Jaradeh, M.Y.: Improving access to scientific literature with knowledge graphs : an experiment using library guidelines to judge information integrity (2020) 0.05
    Abstract
    The transfer of knowledge has not changed fundamentally for many hundreds of years: it is usually document-based, formerly printed on paper as a classic essay and nowadays as a PDF. With around 2.5 million new research contributions every year, researchers drown in a flood of pseudo-digitized PDF publications. As a result, research is seriously weakened. In this article, we argue for representing scholarly contributions in a structured and semantic way as a knowledge graph. The advantage is that information represented in a knowledge graph is readable by both machines and humans. As an example, we give an overview of the Open Research Knowledge Graph (ORKG), a service implementing this approach. For creating the knowledge graph representation, we rely on a mixture of manual (crowd/expert sourcing) and (semi-)automated techniques. Only with such a combination of human and machine intelligence can we achieve the required quality of representation to allow for novel exploration and assistance services for researchers. As a result, a scholarly knowledge graph such as the ORKG can be used to give a condensed overview of the state of the art addressing a particular research question, for example as a tabular comparison of contributions according to various characteristics of the approaches. Further possible intuitive access interfaces to such scholarly knowledge graphs include domain-specific (chart) visualizations or the answering of natural language questions.
    Object
    Open Research Knowledge Graph
  10. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.04
    Abstract
    This volume contains the lecture notes of the 13th Reasoning Web Summer School, RW 2017, held in London, UK, in July 2017. In 2017, the theme of the school was "Semantic Interoperability on the Web", which encompasses subjects such as data integration, open data management, reasoning over linked data, database to ontology mapping, query answering over ontologies, hybrid reasoning with rules and ontologies, and ontology-based dynamic systems. The papers of this volume focus on these topics and also address foundational reasoning techniques used in answer set programming and ontologies.
    Content
    Neumaier, Sebastian (et al.): Data Integration for Open Data on the Web - Stamou, Giorgos (et al.): Ontological Query Answering over Semantic Data - Calì, Andrea: Ontology Querying: Datalog Strikes Back - Sequeda, Juan F.: Integrating Relational Databases with the Semantic Web: A Reflection - Rousset, Marie-Christine (et al.): Datalog Revisited for Reasoning in Linked Data - Kaminski, Roland (et al.): A Tutorial on Hybrid Answer Set Solving with clingo - Eiter, Thomas (et al.): Answer Set Programming with External Source Access - Lukasiewicz, Thomas: Uncertainty Reasoning for the Semantic Web - Calvanese, Diego (et al.): OBDA for Log Extraction in Process Mining
  11. Waard, A. de; Fluit, C.; Harmelen, F. van: Drug Ontology Project for Elsevier (DOPE) (2007) 0.04
    Abstract
    Innovative research institutes rely on the availability of complete and accurate information about new research and development, and it is the business of information providers such as Elsevier to provide the required information in a cost-effective way. It is very likely that the semantic web will make an important contribution to this effort, since it facilitates access to an unprecedented quantity of data. However, with the unremitting growth of scientific information, integrating access to all this information remains a significant problem, not least because of the heterogeneity of the information sources involved - sources which may use different syntactic standards (syntactic heterogeneity), organize information in very different ways (structural heterogeneity) and even use different terminologies to refer to the same information (semantic heterogeneity). The ability to address these different kinds of heterogeneity is the key to integrated access. Thesauri have already proven to be a core technology for effective information access, as they provide controlled vocabularies for indexing information and thereby help to overcome some of the problems of free-text search by relating and grouping relevant terms in a specific domain. However, there is currently no open architecture which supports the use of these thesauri for querying other data sources. For example, when we move from the centralized and controlled use of EMTREE within EMBASE.com to a distributed setting, it becomes crucial to improve access to the thesaurus by means of a standardized representation using open data standards that allow for semantic qualifications. In general, mental models and keywords for accessing data diverge between subject areas and communities, and so many different ontologies have been developed. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. The aim of the DOPE project (Drug Ontology Project for Elsevier) is to investigate the possibility of providing access to multiple information sources in the area of life science through a single interface.
  12. Soergel, D.: Towards a relation ontology for the Semantic Web (2011) 0.04
    Abstract
    The Semantic Web consists of data structured for use by computer programs, such as data sets made available under the Linked Open Data initiative. Much of this data is structured following the entity-relationship model encoded in RDF for syntactic interoperability. For semantic interoperability, the semantics of the relationships used in any given dataset needs to be made explicit. Ultimately this requires an inventory of these relationships structured around a relation ontology. This talk will outline a blueprint for such an inventory, including a format for the description/definition of binary and n-ary relations, drawing on ideas put forth in the classification and thesaurus community over the last 60 years, upper level ontologies, systems like FrameNet, the Buffalo Relation Ontology, and an analysis of linked data sets.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
  13. Bardhan, S.; Dutta, B.: ONCO: an ontology model for MOOC platforms (2022) 0.04
    Abstract
    Searching for a particular course across e-learning platforms requires browsing through each platform separately, which is time-consuming. To resolve this issue, an ontology has been developed that provides single-point access to all the e-learning platforms. The modelled ONline Course Ontology (ONCO) is based on YAMO, METHONTOLOGY and IDEF5 and built with the Protégé ontology editing tool. ONCO is populated with sample data and later evaluated using pre-defined competency questions. Complex SPARQL queries are executed to assess the effectiveness of the constructed ontology, and the modelled ontology is able to answer all the sample queries. ONCO has been developed for the efficient retrieval of similar courses from massive open online course (MOOC) platforms.
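The kind of single-point, cross-platform course lookup the abstract describes can be sketched with a toy triple store and a pattern-match query. This is a minimal illustration only: the property and class names (`onco:coversTopic`, `onco:offeredBy`) are hypothetical stand-ins, not taken from the ONCO paper, and a real system would issue SPARQL against the ontology instead.

```python
# Illustrative sketch: a toy RDF-style triple set standing in for a course
# ontology, queried across platforms in one place. All names are hypothetical.
from typing import Optional

Triple = tuple[str, str, str]

TRIPLES: list[Triple] = [
    ("onco:ml101", "rdf:type", "onco:Course"),
    ("onco:ml101", "onco:coversTopic", "Machine Learning"),
    ("onco:ml101", "onco:offeredBy", "PlatformA"),
    ("onco:ml201", "rdf:type", "onco:Course"),
    ("onco:ml201", "onco:coversTopic", "Machine Learning"),
    ("onco:ml201", "onco:offeredBy", "PlatformB"),
]

def match(s: Optional[str], p: Optional[str], o: Optional[str]) -> list[Triple]:
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def courses_on_topic(topic: str) -> list[tuple[str, str]]:
    """Competency question: which courses, on which platforms, cover a topic?"""
    hits = []
    for course, _, _ in match(None, "onco:coversTopic", topic):
        for _, _, platform in match(course, "onco:offeredBy", None):
            hits.append((course, platform))
    return hits

print(courses_on_topic("Machine Learning"))
```

A single query spans both platforms, which is the point of the single-point-access design.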
  14. Stuckenschmidt, H.; Harmelen, F. van; Waard, A. de; Scerri, T.; Bhogal, R.; Buel, J. van; Crowlesmith, I.; Fluit, C.; Kampman, A.; Broekstra, J.; Mulligen, E. van: Exploring large document repositories with RDF technology : the DOPE project (2004) 0.03
    Abstract
    This thesaurus-based search system uses automatic indexing, RDF-based querying, and concept-based visualization of results to support exploration of large online document repositories. Innovative research institutes rely on the availability of complete and accurate information about new research and development. Information providers such as Elsevier make it their business to provide the required information in a cost-effective way. The Semantic Web will likely contribute significantly to this effort because it facilitates access to an unprecedented quantity of data. The DOPE project (Drug Ontology Project for Elsevier) explores ways to provide access to multiple life-science information sources through a single interface. With the unremitting growth of scientific information, integrating access to all this information remains an important problem, primarily because the information sources involved are so heterogeneous. Sources might use different syntactic standards (syntactic heterogeneity), organize information in different ways (structural heterogeneity), and even use different terminologies to refer to the same information (semantic heterogeneity). Integrated access hinges on the ability to address these different kinds of heterogeneity. Also, mental models and keywords for accessing data generally diverge between subject areas and communities; hence, many different ontologies have emerged. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. To serve this need, we've developed a thesaurus-based search system that uses automatic indexing, RDF-based querying, and concept-based visualization. We describe here the conversion of an existing proprietary thesaurus to an open standard format, a generic architecture for thesaurus-based information access, an innovative user interface, and results of initial user studies with the resulting DOPE system.
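The core of the thesaurus-based indexing described above is that free-text terms are normalized to thesaurus concepts at index time, so a query by concept finds documents regardless of which synonym they use. A minimal sketch, with thesaurus content invented for illustration (not DOPE's actual vocabulary):

```python
# Minimal sketch of thesaurus-based automatic indexing: synonyms are mapped
# to a preferred concept, and an inverted index is built over concepts, not
# surface terms. The thesaurus entries here are invented for illustration.
THESAURUS = {  # preferred concept label -> synonym set (including the label)
    "acetylsalicylic acid": {"acetylsalicylic acid", "aspirin", "asa"},
    "paracetamol": {"paracetamol", "acetaminophen"},
}
TERM_TO_CONCEPT = {syn: concept
                   for concept, syns in THESAURUS.items()
                   for syn in syns}

def index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Map each concept to the set of documents mentioning any of its synonyms."""
    inverted: dict[str, set[str]] = {}
    for doc_id, text in docs.items():
        for token in text.lower().replace(".", " ").split():
            concept = TERM_TO_CONCEPT.get(token)
            if concept:
                inverted.setdefault(concept, set()).add(doc_id)
    return inverted

docs = {
    "d1": "Aspirin reduces fever.",
    "d2": "Acetaminophen is also an antipyretic.",
    "d3": "ASA interacts with acetaminophen.",
}
idx = index(docs)
print(sorted(idx["acetylsalicylic acid"]))  # concept-level hits, whichever synonym each document used
```

Querying by the concept "acetylsalicylic acid" retrieves both the document that says "Aspirin" and the one that says "ASA", which is how the system bridges semantic heterogeneity across sources.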
  15. Calegari, S.; Pasi, G.: Personal ontologies : generation of user profiles based on the YAGO ontology (2013) 0.03
    Abstract
    Personalized search is aimed at tailoring the search outcome to users; to this aim user profiles play an important role: the more faithfully a user profile represents the user's interests and preferences, the higher the probability of improving the search process. In the approaches proposed in the literature, user profiles are formally represented as bags of words, as vectors, or as conceptual taxonomies, generally defined based on external knowledge resources (such as WordNet and the ODP - Open Directory Project). Ontologies have more recently been considered a powerful and expressive means for knowledge representation. The advantage offered by ontological languages is that they allow a more structured and expressive knowledge representation than the above-mentioned approaches. A challenging research activity consists in defining user profiles by a knowledge-extraction process from an existing ontology, with the main aim of producing a semantically rich representation of the user's interests. In this paper a method to automatically define a personal ontology via a knowledge-extraction process from the general-purpose ontology YAGO is presented; starting from a set of keywords representative of the user's interests, the process aims to define a structured and semantically coherent representation of the user's topical interests. The paper describes the proposed method, as well as evaluations that show its effectiveness.
    Footnote
    Contribution to a special issue on "Personalization and recommendation in information access".
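The extraction step in the abstract above (from seed keywords to a personal subgraph of a general-purpose ontology) can be illustrated with a depth-limited expansion over a toy graph. This is a hedged sketch only; the toy graph and the plain breadth-first expansion are assumptions, and the paper's actual extraction from YAGO is more involved:

```python
# Illustrative sketch: collect the ontology neighborhood of a user's seed
# keywords as a "personal ontology". Graph content is invented; edge
# direction and typing are ignored for simplicity.
from collections import deque

ONTOLOGY = {  # node -> related nodes (subclass / related-to edges)
    "jazz": ["music genre", "saxophone"],
    "music genre": ["music"],
    "saxophone": ["musical instrument"],
    "musical instrument": ["music"],
    "cooking": ["activity"],
}

def extract_personal_ontology(seeds, max_depth=2):
    """Collect all ontology nodes within max_depth hops of any seed keyword."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # do not expand beyond the depth limit
        for neighbor in ONTOLOGY.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

print(sorted(extract_personal_ontology({"jazz"})))
```

Nodes unrelated to the seeds (here, "cooking") stay out of the profile, which is the "semantically coherent" aspect the abstract emphasizes.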
  16. Rousset, M.-C.; Atencia, M.; David, J.; Jouanot, F.; Ulliana, F.; Palombi, O.: Datalog revisited for reasoning in linked data (2017) 0.03
    Abstract
    Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links and properties of those entities. Equipping Linked Data with inference paves the way to making the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists in equipping RDF triplestores with Datalog inference rules. This rule language makes it possible to capture in a uniform manner OWL constraints that are useful in practice, such as property transitivity or symmetry, but also domain-specific rules with practical relevance for users in many domains of interest. The expressivity and genericity of this framework are illustrated for modeling Linked Data applications and for developing inference algorithms. In particular, we show how it allows the problem of data linkage in Linked Data to be modeled as a reasoning problem over possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, whilst effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction.
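The abstract's central idea, a triplestore equipped with Datalog-style inference rules, can be sketched with one of the OWL constraints it names: property transitivity, applied by forward chaining until a fixpoint. The data and property name below are invented for illustration; this is not the survey's implementation:

```python
# Sketch of a deductive-triplestore rule: transitivity of a property,
# evaluated bottom-up (forward chaining) to a fixpoint. Data is invented.
def saturate_transitive(triples: set[tuple[str, str, str]], prop: str) -> set[tuple[str, str, str]]:
    """Add (x, prop, z) whenever (x, prop, y) and (y, prop, z) hold, until nothing new derives."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        derived = {(x, p, z)
                   for (x, p, y) in facts if p == prop
                   for (y2, p2, z) in facts if p2 == prop and y2 == y}
        new = derived - facts
        if new:
            facts |= new
            changed = True
    return facts

data = {("a", "partOf", "b"), ("b", "partOf", "c"), ("c", "partOf", "d")}
closed = saturate_transitive(data, "partOf")
print(("a", "partOf", "d") in closed)
```

The loop terminates because each pass only adds facts over a finite set of constants, which is the usual Datalog fixpoint argument.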
  17. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.03
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may easily be adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby - the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
  18. Auer, S.; Sens, I.; Stocker, M.: Erschließung wissenschaftlicher Literatur mit dem Open Research Knowledge Graph (2020) 0.02
    Aid
    Open Research Knowledge Graph
  19. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.02
    Date
    23. 7.2017 13:49:22
    LCSH
    World Wide Web / Subject access
    Subject
    World Wide Web / Subject access
  20. Hoppe, T.: Semantische Filterung : ein Werkzeug zur Steigerung der Effizienz im Wissensmanagement (2013) 0.02
    Source
    Open journal of knowledge management. 2013, issue VII = http://www.community-of-knowledge.de/beitrag/semantische-filterung-ein-werkzeug-zur-steigerung-der-effizienz-im-wissensmanagement/
