Search (192 results, page 1 of 10)

  • Active filter: theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.20
    Content
    See: https://aclanthology.org/D19-5317.pdf.
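    The relevance figures shown after each title come from Lucene's ClassicSimilarity (tf-idf) ranking, whose scoring breakdowns this page exposed. As a minimal sketch, assuming the ClassicSimilarity default formulas and boosts of 1.0, the following reproduces the first entry's 0.20 from the term statistics the engine reported:

      import math

      # Lucene ClassicSimilarity building blocks (defaults, boost = 1.0):
      #   idf = 1 + ln(maxDocs / (docFreq + 1)),  tf = sqrt(freq)
      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
          query_weight = idf(doc_freq, max_docs) * query_norm
          field_weight = math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm
          return query_weight * field_weight

      # Term statistics the engine reported for result 1 (doc 400):
      w = clause_score(freq=2.0, doc_freq=24, max_docs=44218,
                       query_norm=0.041336425, field_norm=0.046875)

      # Three of seven query clauses matched -> coord(3/7); the first
      # matching clause sits in a nested disjunction scaled by coord(1/3).
      score = (w / 3 + w + w) * (3 / 7)
      print(f"{w:.7f} {score:.8f}")  # ~0.1969596 ~0.19695961

    The same arithmetic, with different term statistics and coordination factors, yields the scores of the remaining entries.
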
  2. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.18
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating the content itself (i.e. its meaning rather than its representation), which makes the results of a retrieval process of very low usefulness for the user's task at hand. Over the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query. This makes it necessary to include the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user to interpret the meaning of his query conceptually, with the underlying domain ontology driving the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realised as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue in realising much more meaningful information retrieval systems.
    Content
    See: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
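    The abstract names the Librarian Agent Query Refinement Process but not its algorithm. Purely as an illustration of the basic move such ontology-based refinement builds on, here is a toy sketch in which the neighbours of a query concept in an invented concept graph are offered as candidate refinements; the thesis's actual process is far richer:

      # Toy sketch: offer ontology neighbours of a query concept as
      # candidate refinements. The concept graph is invented.
      ontology = {
          "jaguar": ["jaguar (animal)", "jaguar (car)"],
          "jaguar (animal)": ["big cat", "Panthera", "rainforest habitat"],
      }

      def refinements(concept):
          return ontology.get(concept, [])

      print("Did you mean:", ", ".join(refinements("jaguar")))
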
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.18
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. See: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  4. Reimer, U.; Brockhausen, P.; Lau, T.; Reich, J.R.: Ontology-based knowledge management at work : the Swiss life case studies (2004) 0.08
    Abstract
    This chapter describes two case studies conducted by the Swiss Life insurance group with the objective of proving the practical applicability and superiority of ontology-based knowledge management over classical approaches based on text retrieval technologies. The first case study, in the domain of skills management, uses manually constructed ontologies about skills, job functions and education. The purpose of the system is to support finding employees with certain skills. The ontologies ensure that the user's description of skills and the machine-held index of skills and people use the same vocabulary; the use of a shared vocabulary increases the performance of such a system significantly. The second case study aims at improving content-oriented access to passages of a 1,000-page document about the International Accounting Standard on the corporate intranet. To this end, an ontology was automatically extracted from the document. It can be used to reformulate queries that turned out not to deliver the intended results. Since the ontology was built automatically, it is of a rather simple structure, consisting of weighted semantic associations between the relevant concepts in the document; we therefore call it a 'lightweight ontology'. The two case studies cover quite different aspects of using ontologies in knowledge management applications. Whereas in the second case study an ontology was automatically derived from a search space to improve information retrieval, in the first, skills management, case study the ontology itself introduces a structured search space. In one case study we gathered experience in building an ontology manually, while the challenge of the other was automatic ontology creation. A number of the novel Semantic Web-based tools described elsewhere in this book were used to build the two systems, and both case studies have led to projects to deploy live systems within Swiss Life.
    Source
    Towards the semantic Web: ontology-driven knowledge management. Eds.: J. Davies et al.
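    The 'lightweight ontology' of the second case study consists of automatically extracted, weighted semantic associations between concepts; the abstract does not give the extraction method. As a hedged sketch under that gap, a common stand-in is passage-level co-occurrence counting over already-recognized concept mentions (the data below is invented):

      # Assumed stand-in for the extraction step: weight an association
      # between two concepts by how often they co-occur in a passage.
      from collections import Counter
      from itertools import combinations

      passages = [  # invented toy data: concept mentions per passage
          ["asset", "liability", "accounting standard"],
          ["asset", "depreciation"],
          ["liability", "accounting standard"],
      ]

      weights = Counter()
      for concepts in passages:
          for a, b in combinations(sorted(set(concepts)), 2):
              weights[(a, b)] += 1

      for (a, b), w in weights.most_common():
          print(f"{a} -- {b}: weight {w}")
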
  5. Qin, J.: A relation typology in knowledge organization systems : case studies in the research data management domain (2018) 0.06
  6. Deokattey, S.; Neelameghan, A.; Kumar, V.: A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.05
    Abstract
    A method to develop a prototype domain ontology is described. The domain selected for the study is Accelerator Driven Systems, a multidisciplinary and interdisciplinary subject comprising nuclear physics, nuclear and reactor engineering, reactor fuels and radioactive waste management. Since Accelerator Driven Systems is a vast topic, selected areas within it were singled out for the study. Both qualitative and quantitative methods, such as content analysis, facet analysis and clustering, were used to develop the web-based model.
    Date
    22. 7.2010 19:41:16
  7. Eito-Brun, R.: Ontologies and the exchange of technical information : building a knowledge repository based on ECSS standards (2014) 0.05
    Abstract
    The development of complex projects in the aerospace industry is based on the collaboration of geographically distributed teams and companies. In this context, the need to share different types of data and information is a key factor in assuring the successful execution of projects. In the case of European projects, the ECSS standards provide a normative framework that specifies, among other requirements, the different document types, information items and artifacts that need to be generated. The specifications of the characteristics of these information items are usually incorporated as annexes to the different ECSS standards, and they state the intended purpose, scope, and structure of the documents and information items. In these standards, documents or deliverables should not be considered independent items, but the results of packaging different information artifacts for their delivery between the involved parties. Successful information integration and knowledge exchange cannot be based exclusively on the conceptual definition of information types; they also require methods and techniques for serializing and exchanging these documents and artifacts. This area is not covered by the ECSS standards, and defining such data schemas would improve collaboration processes among companies. This paper describes the development of an OWL-based ontology to manage the different artifacts and information items requested in the European Space Agency (ESA) ECSS standards for software development. The ECSS set of standards is the main reference in aerospace projects in Europe; in addition to engineering and managerial requirements, it provides a set of DRDs (Document Requirements Documents) with the structure of the different documents and records necessary to manage projects and describe intermediate information products and final deliverables. Information integration is a must-have in aerospace projects, where different players need to collaborate and share data about requirements, design elements, problems, etc. during the life cycle of the products. The proposed ontology provides the basis for building advanced information systems in which information coming from different companies and institutions can be integrated into a coherent set of related data. It also provides a conceptual framework to enable the development of interfaces and gateways between the different tools and information systems used by the different players in aerospace projects.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  8. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.04
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science, making qualitative research more transparent and enhancing both the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes due to participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers by the interviewees, which need to be balanced; second, the approach takes more time due to interview planning and analysis. Practical implications The implication of the paper is, in the long run, to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design, using mainly interviews, for creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 72(2020) no.4, pp. 671-685
  9. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.03
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, because proper metadata annotations cannot be assigned fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss u.a
  10. Castellanos Ardila, J.P.: Investigation of an OSLC-domain targeting ISO 26262 : focus on the left side of the software V-model (2016) 0.03
    Abstract
    Industries have adopted standardized sets of practices for developing their products. In the automotive domain, the provision of safety-compliant systems is guided by ISO 26262, a standard that specifies a set of requirements and recommendations for developing automotive safety-critical systems. To comply with ISO 26262, the safety lifecycle proposed by the standard must be included in the development process of a vehicle, and a safety case that shows that the system is acceptably safe has to be provided. The provision of a safety case implies the execution of a precise documentation process that makes sure the work products are available and traceable. Further, documentation management is defined in the standard as a mandatory activity, and guidelines are proposed/imposed for its elaboration. It is worth pointing out that a well-documented safety lifecycle will provide the necessary inputs for the generation of an ISO 26262-compliant safety case. The OSLC (Open Services for Lifecycle Collaboration) standard and the maturing stack of semantic web technologies represent a promising integration platform for enabling semantic interoperability between the tools involved in the safety lifecycle. Tools for requirements, architecture and development management, among others, are expected to interact and share data with the help of domain specifications created in OSLC. This thesis proposes the creation of an OSLC tool-chain infrastructure for sharing safety-related information, in which fragments of safety information can be generated. The steps carried out during the elaboration of this master thesis consist of the identification, representation, and shaping of the RDF resources needed for the creation of a safety case. The focus of the thesis is limited to a small portion of the left-hand side of the ISO 26262 V-model, specifically part 6 clause 8 of the standard: software unit design and implementation. Although only a restricted portion of the standard was used, the findings can be extended to other parts and the conclusions can be generalized. This master thesis is considered one of the first steps towards the provision of an OSLC-based and ISO 26262-compliant methodological approach for representing and shaping the work products resulting from the execution of the safety lifecycle, the documentation required for the assembly of an ISO-compliant safety case.
  11. Iosif, V.; Mika, P.; Larsson, R.; Akkermans, H.: Field experimenting with Semantic Web tools in a virtual organization (2004) 0.03
    Abstract
    How do we test Semantic Web tools? How can we know that they perform better than current technologies for knowledge management? What does 'better' precisely mean? How can we operationalize and measure this? Some of these questions may be partially answered by simulations in lab experiments that for example look at the speed or scalability of algorithms. However, it is not clear in advance to what extent such laboratory results carry over to the real world. Quality is in the eye of the beholder, and so the quality of Semantic Web methods will very much depend on the perception of their usefulness as seen by tool users. This can only be tested by carefully designed field experiments. In this chapter, we discuss the design considerations and set-up of field experiments with Semantic Web tools, and illustrate these with case examples from a virtual organization in industrial research.
    Source
    Towards the semantic Web: ontology-driven knowledge management. Eds.: J. Davies et al.
  12. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008) 0.03
    Abstract
    Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application which uses the alignment will perform well. In this paper, we contribute to the current ontology alignment evaluation practices by proposing two alternative evaluation methods that take into account some characteristics of a usage scenario without doing a full-fledged end-to-end evaluation. We compare different evaluation approaches in three case studies, focussing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to conclude on the use of different evaluation approaches in different settings.
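    For the second evaluation style named in the abstract, comparing an alignment against a reference alignment, the conventional measures are precision and recall over the sets of correspondences. A small sketch with invented correspondences:

      # Set-based scoring of an alignment A against a reference R:
      #   precision = |A & R| / |A|,  recall = |A & R| / |R|
      def precision_recall(found, reference):
          hits = found & reference
          return len(hits) / len(found), len(hits) / len(reference)

      found = {("o1:Car", "o2:Automobile"), ("o1:Bike", "o2:Tricycle")}
      reference = {("o1:Car", "o2:Automobile"), ("o1:Bike", "o2:Bicycle")}
      p, r = precision_recall(found, reference)
      print(f"precision={p:.2f} recall={r:.2f}")  # both 0.50 here

    The paper's point is precisely that such set-based scores say little about how an alignment will perform in an application, which motivates its two alternative evaluation methods.
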
  13. Naskar, D.; Das, S.: HNS ontology using faceted approach (2019) 0.03
    Abstract
    The purpose of this research is to develop an ontology, with subsequent testing and evaluation, for identifying its utility and value. The domain chosen is human nervous system (HNS) disorders. It is hypothesized that an ontology-based patient records management system is more effective in meeting and addressing the complex information needs of health-care personnel. This study is therefore based on the premise that developing an ontology and using it as a component of the search interface in hospital records management systems will lead to more efficient and effective management of health care. It is proposed to develop an ontology of the domain of HNS disorders using a standard vocabulary such as MeSH or SNOMED CT. The principal classes of the ontology are built through facet analysis, arranging concepts by their common characteristics into mutually exclusive classes. We combine faceted theory with description logic, which helps us to better query and retrieve data by implementing an ontological model. Protégé 5.2.0 was used as the ontology editor. The use of ontologies for domain modelling will be of great help to doctors searching patient records. In this paper we show how the faceted approach helps us to build a flexible model and retrieve better information. We use the medical domain as a case study to show examples and an implementation.
  14. Khalifa, M.; Shen, K.N.: Applying semantic networks to hypertext design : effects on knowledge structure acquisition and problem solving (2010) 0.02
    Abstract
    One of the key objectives of knowledge management is to transfer knowledge quickly and efficiently from experts to novices, who are different in terms of the structural properties of domain knowledge or knowledge structure. This study applies experts' semantic networks to hypertext navigation design and examines the potential of the resulting design, i.e., semantic hypertext, in facilitating knowledge structure acquisition and problem solving. Moreover, we argue that the level of sophistication of the knowledge structure acquired by learners is an important mediator influencing the learning outcomes (in this case, problem solving). The research model was empirically tested with a situated experiment involving 80 business professionals. The results of the empirical study provided strong support for the effectiveness of semantic hypertext in transferring knowledge structure and reported a significant full mediating effect of knowledge structure sophistication. Both theoretical and practical implications of this research are discussed.
  15. Madalli, D.P.; Balaji, B.P.; Sarangi, A.K.: Music domain analysis for building faceted ontological representation (2014) 0.02
    Abstract
    This paper describes how to construct faceted ontologies for domain modeling. Building upon the faceted theory of S.R. Ranganathan (1967), the paper addresses the faceted classification approach as applied to building domain ontologies. As classificatory ontologies are employed to represent the relationships of entities and objects on the web, the faceted approach helps to analyze domain representation in an effective way for modeling. Based on this perspective, an ontology of the music domain has been analyzed to serve as a case study.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  16. King, B.E.; Reinold, K.: Finding the concept, not just the word : a librarian's guide to ontologies and semantics (2008) 0.02
    Abstract
    Aimed at students and professionals within Library and Information Services (LIS), this book is about the power and potential of ontologies to enhance the electronic search process. The book compares search strategies and results in the current search environment and demonstrates how these could be transformed using ontologies and concept searching. Simple descriptions, visual representations, and examples of ontologies bring a full understanding of how these concept maps are constructed to enhance retrieval through natural language queries. Readers will gain a sense of how ontologies are currently being used and how they could be applied in the future, encouraging them to think about how their own work and their users' search experiences could be enhanced by the creation of a customized ontology. Key features: written by a librarian, for librarians (most work on ontologies is written and read by people in computer science and knowledge management); written by a librarian who has created her own ontology and performed research on its capabilities; written in easily understandable language, with concepts broken down to the basics. The author: Ms. King is the Information Specialist at the Center on Media and Child Health at Children's Hospital Boston. She is a graduate of Smith College (B.A.) and Simmons College (M.L.I.S.). She is an active member of the Special Libraries Association, and was the recipient of the 2005 SLA Innovation in Technology Award for the creation of a customized media effects ontology used for semantic searching. Readership: practicing librarians and information professionals as well as graduate students of Library and Information Science. Contents: Introduction. Part 1: Understanding ontologies - organising knowledge; what is an ontology?; how are ontologies different from other knowledge representations?; how are ontologies currently being used?; key concepts. Ontologies in semantic search - determining whether a search was successful; what does semantic search have to offer?; semantic techniques; semantic searching behind the scenes; key concepts. Creating an ontology - how to create an ontology; key concepts. Building an ontology from existing components - choosing components; customizing your knowledge structure; key concepts. Part 2: Semantic technologies. Natural language processing - tagging parts of speech; grammar-based NLP; statistical NLP; semantic analysis; current applications of NLP; key concepts. Using metadata to add semantic information - structured languages; metadata tagging; semantic tagging; key concepts. Other semantic capabilities - semantic classification; synsets; topic maps; rules and inference; key concepts. Part 3: Case studies: theory into practice. Biogen Idec: using semantics in drug discovery research - Biogen Idec's solution; the future. The Center on Media and Child Health: using an ontology to explore the effects of media - building the ontology; choosing the source; implementing and comparing to Boolean search; the future. Partners HealthCare System: semantic technologies to improve clinical decision support - the medical appointment; Partners HealthCare System's solution; lessons learned; the future. MINDSWAP: using ontologies to aid terrorism intelligence gathering - building, using and maintaining the ontology; sharing information with other experts; future plans. Part 4: Advanced topics. Languages for expressing ontologies - XML; RDF; OWL; SKOS; ontology language features - comparison chart. Tools for building ontologies - basic criteria when evaluating ontologies. Part 5: Transitions to the future.
  17. Schwarz, K.: Domain model enhanced search : a comparison of taxonomy, thesaurus and ontology (2005) 0.02
    Abstract
    The results of this thesis are intended to support the information architect in designing a solution for improved search in a corporate environment. Specifically, we have examined the type of search problems that require a domain model to enhance the search process. There are several approaches to modeling a domain. We have considered three different types of domain modeling scheme: taxonomy, thesaurus and ontology. The intention is to support the information architect in making an informed choice between one or more of these schemes. In our opinion, the main criteria for this choice are the modeling characteristics of a scheme and its suitability for application in the search process. The second chapter is a discussion of the modeling characteristics of each scheme, followed by a comparison between them. This should give an information architect an idea of which aspects of a domain can be modeled with each scheme. What is missing here is an indication of the effort required to model a domain with each scheme. There are too many factors that influence the amount of required effort, ranging from measurable factors like domain size and resource characteristics to cultural matters such as the willingness to share knowledge and the existence of a project champion in the team to keep the project running. The third chapter shows what role domain models can play in each part of the search process. This gives an idea of the problems that domain models can solve. We have split the search process into individual parts to show that domain models can be applied very differently in the process. The fourth chapter makes recommendations about the suitability of each individual domain modeling scheme for improving search. Each scheme has particular characteristics that make it especially suitable for a domain or a search problem. In the appendix each case study is described in detail. These descriptions are intended to serve as a benchmark. The current problem of the enterprise can be compared to those described to see which case study is most similar, which solution was chosen, which problems arose and how they were dealt with. An important issue that we have not touched upon in this thesis is that of maintenance. The real problems of a domain model are revealed when it is applied in a search system and its deficits and wrong assumptions become clear. Adaptation and maintenance are always required. Unfortunately we have not been able to glean sufficient information about maintenance issues from our case studies to draw any meaningful conclusions.
  18. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.02
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting WordNet, AAT, and MeSH to RDF(S) and OWL.
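    As an illustration of the kind of conversion the paper describes, the sketch below turns a single MeSH-like descriptor into RDF(S)/OWL triples. It uses the rdflib Python library (an assumption of this sketch, not the authors' tooling), and the descriptor, hierarchy, and namespace URI are invented example data.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/mesh/")  # hypothetical namespace

    g = Graph()
    g.bind("ex", EX)

    # A descriptor with a preferred label and a broader descriptor,
    # rendered as an OWL class inside a subclass hierarchy.
    g.add((EX.Aspirin, RDF.type, OWL.Class))
    g.add((EX.Aspirin, RDFS.label, Literal("Aspirin", lang="en")))
    g.add((EX.Aspirin, RDFS.subClassOf, EX.AntiInflammatoryAgents))

    print(g.serialize(format="turtle"))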
  19. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: ¬A method to convert thesauri to SKOS (2006) 0.02
    0.021365764 = product of:
      0.074780166 = sum of:
        0.052964687 = weight(_text_:case in 4642) [ClassicSimilarity], result of:
          0.052964687 = score(doc=4642,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.29144385 = fieldWeight in 4642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=4642)
        0.021815477 = product of:
          0.043630954 = sum of:
            0.043630954 = weight(_text_:studies in 4642) [ClassicSimilarity], result of:
              0.043630954 = score(doc=4642,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.26452032 = fieldWeight in 4642, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4642)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications, and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for converting thesauri to the SKOS RDF/OWL schema, a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA, and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
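    The core of such a conversion is mapping standard thesaurus relations (BT/NT/RT, preferred and non-preferred terms) onto the SKOS vocabulary. Below is a minimal sketch of that mapping using the rdflib Python library; the concepts and namespace are invented examples, not taken from IPSV, GTAA, or MeSH.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/thesaurus/")  # hypothetical namespace

    g = Graph()
    g.bind("skos", SKOS)
    g.bind("ex", EX)

    g.add((EX.cat, RDF.type, SKOS.Concept))
    g.add((EX.cat, SKOS.prefLabel, Literal("cat", lang="en")))    # preferred term
    g.add((EX.cat, SKOS.altLabel, Literal("feline", lang="en")))  # non-preferred term (UF)
    g.add((EX.cat, SKOS.broader, EX.mammal))                      # BT
    g.add((EX.cat, SKOS.related, EX.pet))                         # RT

    print(g.serialize(format="turtle"))

    Whether to also materialize the inverse relations (e.g. asserting skos:narrower wherever skos:broader holds) is one of the design decisions such a conversion method has to settle.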
  20. Sure, Y.; Erdmann, M.; Studer, R.: OntoEdit: collaborative engineering of ontologies (2004) 0.02
    0.020359404 = product of:
      0.07125791 = sum of:
        0.03594812 = weight(_text_:management in 4405) [ClassicSimilarity], result of:
          0.03594812 = score(doc=4405,freq=6.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.25800878 = fieldWeight in 4405, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=4405)
        0.03530979 = weight(_text_:case in 4405) [ClassicSimilarity], result of:
          0.03530979 = score(doc=4405,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.1942959 = fieldWeight in 4405, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03125 = fieldNorm(doc=4405)
      0.2857143 = coord(2/7)
    
    Abstract
    Developing ontologies is central to our vision of Semantic Web-based knowledge management. The methodology described in Chapter 3 guides the development of ontologies for different applications. However, because of the size of ontologies, their complexity, their formal underpinnings, and the necessity of arriving at a shared understanding within a group of people when defining an ontology, ontology construction is still far from being a well-understood process. Concerning the methodology, OntoEdit focuses on three of the main steps for ontology development (the methodology is described in Chapter 3), viz. kick-off, refinement, and evaluation. We describe the steps supported by OntoEdit and focus on the collaborative aspects that occur during each step. First, all requirements for the envisaged ontology are collected during the kick-off phase. As is typical for ontology engineering, ontology engineers and domain experts are joined in a team that works together on a description of the domain and the goal of the ontology, design guidelines, available knowledge sources (e.g. reusable ontologies and thesauri), potential users, and use cases and applications supported by the ontology. The output of this phase is a semi-formal description of the ontology. Second, during the refinement phase, the team extends the semi-formal description in several iterations and formalizes it in an appropriate representation language such as RDF(S) or the more expressive DAML+OIL. The output of this phase is a mature ontology (the 'target ontology'). Third, the target ontology needs to be evaluated against the requirement specifications. Typically this phase serves as a proof of the usefulness of ontologies (and ontology-based applications) and may involve the engineering team as well as end users of the targeted application. The output of this phase is an evaluated ontology, ready for roll-out into a production environment. Support for these collaborative development steps within the ontology development methodology is crucial to meet the conflicting needs for ease of use and construction of complex ontology structures. We now illustrate OntoEdit's support for each of these steps; a schematic sketch of the three-phase pipeline follows this entry. The examples shown are taken from the Swiss Life case study on skills management (cf. Chapter 12).
    Source
    Towards the Semantic Web: ontology-driven knowledge management. Eds.: J. Davies et al.
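    A minimal sketch of the three-phase workflow described in the abstract above (kick-off, then refinement, then evaluation), modeled as plain Python data. The class and function names and the toy formalization step are illustrative assumptions, not OntoEdit's actual data model.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SemiFormalDescription:      # output of the kick-off phase
        domain: str
        requirements: tuple

    @dataclass(frozen=True)
    class TargetOntology:             # output of the refinement phase
        serialization: str            # e.g. RDF(S) source text

    def kick_off(domain, requirements):
        """Kick-off: collect requirements into a semi-formal description."""
        return SemiFormalDescription(domain, tuple(requirements))

    def refine(desc):
        """Refinement: formalize each requirement in a representation language (toy RDF(S))."""
        lines = [f"ex:{r.replace(' ', '_')} a rdfs:Class ." for r in desc.requirements]
        return TargetOntology("\n".join(lines))

    def evaluate(onto, desc):
        """Evaluation: check the target ontology against the requirement specification."""
        return all(r.replace(" ", "_") in onto.serialization for r in desc.requirements)

    desc = kick_off("skills management", ["skill", "employee"])
    onto = refine(desc)
    assert evaluate(onto, desc)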

Languages

  • e 166
  • d 20
  • pt 3
  • sp 1

Types

  • a 142
  • el 37
  • m 17
  • x 11
  • s 7
  • n 4
  • r 1
