Search (161 results, page 1 of 9)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.17
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
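    The number at the end of each entry is the Lucene relevance score of the hit for the current query. Lucene's ClassicSimilarity computes each term's contribution as a query weight (idf x queryNorm) times a field weight (tf x idf x fieldNorm), sums the contributions, and scales the sum by a coordination factor. A minimal sketch in Python, using index statistics from entry 1's score breakdown (docFreq=24 of maxDocs=44218, queryNorm=0.03962768, termFreq=2, fieldNorm=0.046875):

      import math

      # Index statistics reported for one query term of entry 1 (doc 400).
      doc_freq, max_docs = 24, 44218
      query_norm = 0.03962768
      term_freq = 2.0
      field_norm = 0.046875

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~8.478011
      tf = math.sqrt(term_freq)                        # ClassicSimilarity tf = sqrt(termFreq)

      query_weight = idf * query_norm                  # ~0.3359639
      field_weight = tf * idf * field_norm             # ~0.56201804
      print(query_weight * field_weight)               # ~0.18881777, this term's weight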
  2. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.16
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning rather than its representation), which makes retrieval results of very limited use for the user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user who is unfamiliar with the underlying repository and/or query syntax only approximates his information need in a query. Closing the gap between the meaning of the content and the meaning of the user's query (i.e. his information need) therefore requires including the user more actively in the retrieval process. This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from mere query evaluation into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation is realised as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue in realising much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
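    The Librarian Agent Query Refinement Process itself is only described, not published as code; as a minimal sketch of the underlying idea of ontology-driven query refinement, the following toy expander broadens a query term with its narrower concepts from a small concept hierarchy (the ontology and all names are illustrative assumptions):

      from typing import Dict, List, Set

      # Toy concept hierarchy: concept -> narrower concepts (hypothetical data).
      ONTOLOGY: Dict[str, List[str]] = {
          "insurance": ["life_insurance", "health_insurance"],
          "life_insurance": ["term_life", "whole_life"],
      }

      def refine(term: str, onto: Dict[str, List[str]]) -> Set[str]:
          """Return the query term plus all transitively narrower concepts."""
          expanded = {term}
          for narrower in onto.get(term, []):
              expanded |= refine(narrower, onto)
          return expanded

      print(refine("insurance", ONTOLOGY))
      # {'insurance', 'life_insurance', 'term_life', 'whole_life', 'health_insurance'}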
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.15
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  4. Nevzorova, O.; Nevzorov, V.; Kirillovich, A.: ¬A syntactic method of extracting terms from special texts for replenishing domain ontologies (2017) 0.09
    Abstract
    Natural Language Processing (NLP) is one of the principal areas of artificial intelligence. It can be argued that the use of ontologies increases the efficiency of natural language processing; however, most ontologies are built manually, which requires a lot of work. The problem of automated ontology replenishment is therefore highly relevant. One approach is to develop methods that replenish an ontology by applying NLP to texts of a specific domain. We applied the developed method to replenish the OntoMathPro mathematical ontology by extracting new terminology from mathematical documents. We developed a method for processing complex syntactic structures (structures with coordination reduction). The method includes certain rule schemata, the conditions under which they are to be applied, and conditions determining the sequence of subtrees for which they are to be performed. In our studies, we investigated typical coordination models in mathematical works and performed experiments on a large collection of mathematical texts.
    Source
    Second Russia and Pacific Conference on Computer Technology and Applications (RPC) (2017)
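    The paper's rule schemata operate on full syntax trees of mathematical texts; as a much simplified illustration of pattern-based term extraction of the same general kind, the following sketch collects maximal adjective-noun spans from POS-tagged tokens (the tag set and the example are assumptions, not the paper's rules):

      # Tagged tokens (word, POS); ADJ* NOUN+ is a common candidate-term pattern.
      TAGGED = [("commutative", "ADJ"), ("ring", "NOUN"), ("of", "ADP"),
                ("finite", "ADJ"), ("rank", "NOUN")]

      def extract_terms(tagged):
          """Collect maximal ADJ/NOUN spans that contain at least one noun."""
          terms, span = [], []
          for word, pos in tagged + [("", "END")]:   # sentinel flushes the last span
              if pos in ("ADJ", "NOUN"):
                  span.append((word, pos))
              else:
                  if any(p == "NOUN" for _, p in span):
                      terms.append(" ".join(w for w, _ in span))
                  span = []
          return terms

      print(extract_terms(TAGGED))  # ['commutative ring', 'finite rank']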
  5. Reimer, U.; Brockhausen, P.; Lau, T.; Reich, J.R.: Ontology-based knowledge management at work : the Swiss life case studies (2004) 0.04
    Abstract
    This chapter describes two case studies conducted by the Swiss Life insurance group with the objective of proving the practical applicability and superiority of ontology-based knowledge management over classical approaches based on text retrieval technologies. The first case study, in the domain of skills management, uses manually constructed ontologies about skills, job functions and education. The purpose of the system is to give support for finding employees with certain skills. The ontologies are used to ensure that the user description of skills and the machine-held index of skills and people use the same vocabulary. The use of a shared vocabulary increases the performance of such a system significantly. The second case study aims at improving content-oriented access to passages of a 1000-page document about the International Accounting Standard on the corporate intranet. To this end, an ontology was automatically extracted from the document. It can be used to reformulate queries that turned out not to deliver the intended results. Since the ontology was automatically built, it is of a rather simple structure, consisting of weighted semantic associations between the relevant concepts in the document. We therefore call it a 'lightweight ontology'. The two case studies cover quite different aspects of using ontologies in knowledge management applications. Whereas in the second case study an ontology was automatically derived from a search space to improve information retrieval, in the first, skills management, case study the ontology itself introduces a structured search space. In one case study we gathered experience in building an ontology manually, while the challenge of the other case study was automatic ontology creation. A number of the novel Semantic Web-based tools described elsewhere in this book were used to build the two systems, and both case studies described have led to projects to deploy live systems within Swiss Life.
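    The 'lightweight ontology' of the second case study consists of weighted semantic associations between concepts. A toy sketch of how such associations can be derived from co-occurrence in text passages (the corpus and the weighting are illustrative assumptions, not Swiss Life's extraction method):

      from collections import Counter
      from itertools import combinations

      # Hypothetical passages, each reduced to its relevant concepts.
      passages = [
          ["asset", "liability", "balance"],
          ["asset", "depreciation"],
          ["liability", "balance"],
      ]

      associations = Counter()
      for concepts in passages:
          for a, b in combinations(sorted(set(concepts)), 2):
              associations[(a, b)] += 1   # weight = number of shared passages

      for (a, b), weight in associations.most_common():
          print(f"{a} -- {b}: {weight}")  # e.g. balance -- liability: 2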
  6. Giri, K.; Gokhale, P.: Developing a banking service ontology using Protégé, an open source software (2015) 0.04
    Abstract
    Computers have transformed from single isolated devices into entry points to a worldwide network of information exchange. Consequently, support for the exchange of data, information, and knowledge is becoming the key issue in computer technology today. The increasing volume of data available on the Web makes information retrieval a tedious and difficult task. Researchers are now exploring the possibility of creating a semantic web, in which meaning is made explicit, allowing machines to process and integrate web resources intelligently. The vision of the semantic web introduces the next generation of the Web by establishing a layer of machine-understandable data. The success of the semantic web depends on the easy creation, integration and use of semantic data, which will depend on web ontology. The faceted approach towards analyzing and representing knowledge given by S.R. Ranganathan would be useful in this regard. Ontology development in different fields is one area where this approach could be applied. This paper presents a case of developing an ontology for the field of banking.
    Source
    Annals of library and information studies. 62(2015) no.4, S.281-285
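    The paper builds its banking ontology interactively in Protégé; as a programmatic counterpart, a minimal sketch using the owlready2 Python library (the class and property names are illustrative, not the paper's model):

      from owlready2 import get_ontology, Thing, ObjectProperty

      onto = get_ontology("http://example.org/banking.owl")  # hypothetical IRI

      with onto:
          class BankingService(Thing): pass          # top-level concept
          class Loan(BankingService): pass           # subclass (is-a)
          class Customer(Thing): pass
          class availedBy(ObjectProperty):           # relation between concepts
              domain = [BankingService]
              range = [Customer]

      onto.save(file="banking.owl", format="rdfxml")  # serialize as RDF/XML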
  7. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.04
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  8. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.03
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, because proper metadata annotations cannot be assigned fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss u.a
  9. Qin, J.: ¬A relation typology in knowledge organization systems : case studies in the research data management domain (2018) 0.03
  10. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008) 0.03
    Abstract
    Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application which uses the alignment will perform well. In this paper, we contribute to current ontology alignment evaluation practices by proposing two alternative evaluation methods that take into account some characteristics of a usage scenario without doing a full-fledged end-to-end evaluation. We compare different evaluation approaches in three case studies, focussing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to draw conclusions about the use of different evaluation approaches in different settings.
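    Comparing an alignment to a reference alignment, the second practice mentioned above, reduces to set overlap between correspondence sets. A minimal sketch with toy correspondences (assumed 1:1 equivalences; the pairs are illustrative):

      # Correspondences as (source concept, target concept) pairs.
      found = {("o1:Car", "o2:Automobile"), ("o1:Tree", "o2:Plant")}
      reference = {("o1:Car", "o2:Automobile"), ("o1:Bike", "o2:Bicycle")}

      correct = found & reference
      precision = len(correct) / len(found)       # 0.5
      recall = len(correct) / len(reference)      # 0.5
      f1 = 2 * precision * recall / (precision + recall)
      print(precision, recall, f1)                # 0.5 0.5 0.5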
  11. Eito-Brun, R.: Ontologies and the exchange of technical information : building a knowledge repository based on ECSS standards (2014) 0.02
    Abstract
    The development of complex projects in the aerospace industry is based on the collaboration of geographically distributed teams and companies. In this context, the need to share different types of data and information is a key factor in assuring the successful execution of projects. In the case of European projects, the ECSS standards provide a normative framework that specifies, among other requirements, the different document types, information items and artifacts that need to be generated. The specification of the characteristics of these information items is usually incorporated as an annex to the different ECSS standards, providing the intended purpose, scope, and structure of the documents and information items. In these standards, documents or deliverables should not be considered as independent items, but as the results of packaging different information artifacts for their delivery between the involved parties. Successful information integration and knowledge exchange cannot be based exclusively on the conceptual definition of information types. It also requires the definition of methods and techniques for serializing and exchanging these documents and artifacts. This area is not covered by the ECSS standards, and the definition of such data schemas would create an opportunity to improve collaboration processes among companies. This paper describes the development of an OWL-based ontology to manage the different artifacts and information items required by the European Space Agency (ESA) ECSS standards for SW development. The ECSS set of standards is the main reference in aerospace projects in Europe; in addition to engineering and managerial requirements, they provide a set of DRDs (Document Requirements Documents) with the structure of the different documents and records necessary to manage projects and describe intermediate information products and final deliverables. Information integration is a must-have in aerospace projects, where different players need to collaborate and share data about requirements, design elements, problems, etc. during the life cycle of the products. The proposed ontology provides the basis for building advanced information systems in which the information coming from different companies and institutions can be integrated into a coherent set of related data. It also provides a conceptual framework to enable the development of interfaces and gateways between the different tools and information systems used by the different players in aerospace projects.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  12. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.02
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting Wordnet, AAT and MeSH to RDF(S) and OWL.
  13. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: ¬A method to convert thesauri to SKOS (2006) 0.02
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications, and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for converting thesauri to the SKOS RDF/OWL schema, which is a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
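    A minimal sketch of the kind of output such a conversion produces, using the rdflib Python library; the two-concept toy thesaurus is an assumption, not IPSV, GTAA or MeSH:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")  # hypothetical namespace
      g = Graph()
      g.bind("skos", SKOS)

      # Two thesaurus terms with a broader/narrower relation.
      for term, label in [(EX.vehicles, "Vehicles"), (EX.cars, "Cars")]:
          g.add((term, RDF.type, SKOS.Concept))
          g.add((term, SKOS.prefLabel, Literal(label, lang="en")))
      g.add((EX.cars, SKOS.broader, EX.vehicles))

      print(g.serialize(format="turtle"))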
  14. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.02
    Abstract
    Libraries are the tools we use to learn and to answer our questions. The quality of our work depends, among other things, on the quality of the tools we use. Recent research in digital libraries is focused, on the one hand, on improving the infrastructure of digital library management systems (DLMS), and on the other, on improving the metadata models used to annotate collections of objects maintained by a DLMS. The latter includes, among others, semantic web and social networking technologies, which are now being introduced to the digital libraries domain. The expected outcome is that the overall quality of information discovery in digital libraries can be improved by employing social and semantic technologies. In this chapter we present the results of an evaluation of social and semantic end-user information discovery services for digital libraries.
    Date
    1. 8.2010 12:35:22
    Source
    Semantic digital libraries. Eds.: S.R. Kruk, B. McDaniel
  15. King, B.E.; Reinold, K.: Finding the concept, not just the word : a librarian's guide to ontologies and semantics (2008) 0.02
    Abstract
    Aimed at students and professionals within Library and Information Services (LIS), this book is about the power and potential of ontologies to enhance the electronic search process. The book compares search strategies and results in the current search environment and demonstrates how these could be transformed using ontologies and concept searching. Simple descriptions, visual representations, and examples of ontologies bring a full understanding of how these concept maps are constructed to enhance retrieval through natural language queries. Readers will gain a sense of how ontologies are currently being used and how they could be applied in the future, encouraging them to think about how their own work and their users' search experiences could be enhanced by the creation of a customized ontology.
    Key features: written by a librarian, for librarians (most work on ontologies is written and read by people in computer science and knowledge management); written by a librarian who has created her own ontology and performed research on its capabilities; written in easily understandable language, with concepts broken down to the basics.
    The author: Ms. King is the Information Specialist at the Center on Media and Child Health at Children's Hospital Boston. She is a graduate of Smith College (B.A.) and Simmons College (M.L.I.S.). She is an active member of the Special Libraries Association, and was the recipient of the 2005 SLA Innovation in Technology Award for the creation of a customized media effects ontology used for semantic searching.
    Readership: practicing librarians and information professionals as well as graduate students of Library and Information Science.
    Contents (each chapter closes with a summary of key concepts): Introduction. Part 1: Understanding Ontologies - organising knowledge; what is an ontology?; how are ontologies different from other knowledge representations?; how are ontologies currently being used?; ontologies in semantic search - determining whether a search was successful; what does semantic search have to offer?; semantic techniques; semantic searching behind the scenes; creating an ontology - how to create an ontology; building an ontology from existing components - choosing components; customizing your knowledge structure. Part 2: Semantic Technologies - natural language processing (tagging parts of speech; grammar-based NLP; statistical NLP; semantic analysis; current applications of NLP); using metadata to add semantic information (structured languages; metadata tagging; semantic tagging); other semantic capabilities (semantic classification; synsets; topic maps; rules and inference). Part 3: Case Studies: Theory into Practice - Biogen Idec: using semantics in drug discovery research (Biogen Idec's solution; the future); the Center on Media and Child Health: using an ontology to explore the effects of media (building the ontology; choosing the source; implementing and comparing to Boolean search; the future); Partners HealthCare System: semantic technologies to improve clinical decision support (the medical appointment; Partners HealthCare System's solution; lessons learned; the future); MINDSWAP: using ontologies to aid terrorism intelligence gathering (building, using and maintaining the ontology; sharing information with other experts; future plans). Part 4: Advanced Topics - languages for expressing ontologies (XML; RDF; OWL; SKOS); ontology language features - comparison chart; tools for building ontologies - basic criteria when evaluating ontologies. Part 5: Transitions to the Future.
  16. Schwarz, K.: Domain model enhanced search : a comparison of taxonomy, thesaurus and ontology (2005) 0.02
    Abstract
    The results of this thesis are intended to support the information architect in designing a solution for improved search in a corporate environment. Specifically, we have examined the type of search problems that require a domain model to enhance the search process. There are several approaches to modeling a domain. We have considered three different types of domain modeling schemes: taxonomy, thesaurus and ontology. The intention is to support the information architect in making an informed choice between one or more of these schemes. In our opinion the main criteria for this choice are the modeling characteristics of a scheme and its suitability for application in the search process. The second chapter is a discussion of the modeling characteristics of each scheme, followed by a comparison between them. This should give an information architect an idea of which aspects of a domain can be modeled with each scheme. What is missing here is an indication of the effort required to model a domain with each scheme. There are too many factors that influence the amount of required effort, ranging from measurable factors like domain size and resource characteristics to cultural matters such as the willingness to share knowledge and the existence of a project champion in the team to keep the project running. The third chapter shows what role domain models can play in each part of the search process. This gives an idea of the problems that domain models can solve. We have split the search process into individual parts to show that domain models can be applied very differently in the process. The fourth chapter makes recommendations about the suitability of each individual domain modeling scheme for improving search. Each scheme has particular characteristics that make it especially suitable for a domain or a search problem. In the appendix each case study is described in detail. These descriptions are intended to serve as a benchmark. The current problem of the enterprise can be compared to those described to see which case study is most similar, which solution was chosen, which problems arose and how they were dealt with. An important issue that we have not touched upon in this thesis is that of maintenance. The real problems of a domain model are revealed when it is applied in a search system and its deficits and wrong assumptions become clear. Adaptation and maintenance are always required. Unfortunately we have not been able to glean sufficient information about maintenance issues from our case studies to draw any meaningful conclusions.
  17. Deokattey, S.; Neelameghan, A.; Kumar, V.: ¬A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.02
    Date
    22. 7.2010 19:41:16
  18. Madalli, D.P.; Balaji, B.P.; Sarangi, A.K.: Music domain analysis for building faceted ontological representation (2014) 0.02
    Abstract
    This paper describes how to construct faceted ontologies for domain modeling. Building upon the faceted theory of S.R. Ranganathan (1967), the paper addresses the faceted classification approach applied to building domain ontologies. As classificatory ontologies are employed to represent the relationships of entities and objects on the web, the faceted approach helps to analyze domain representation in an effective way for modeling. Based on this perspective, an ontology of the music domain has been analyzed that serves as a case study.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  19. Jiang, X.; Tan, A.-H.: CRCTOL: a semantic-based domain ontology learning system (2009) 0.02
    Abstract
    Domain ontologies play an important role in supporting knowledge-based applications in the Semantic Web. To facilitate the building of ontologies, text mining techniques have been used to perform ontology learning from texts. However, traditional systems employ shallow natural language processing techniques and focus only on concept and taxonomic relation extraction. In this paper we present a system, known as Concept-Relation-Concept Tuple-based Ontology Learning (CRCTOL), for mining ontologies automatically from domain-specific documents. Specifically, CRCTOL adopts a full text parsing technique and employs a combination of statistical and lexico-syntactic methods, including a statistical algorithm that extracts key concepts from a document collection, a word sense disambiguation algorithm that disambiguates words in the key concepts, a rule-based algorithm that extracts relations between the key concepts, and a modified generalized association rule mining algorithm that prunes unimportant relations for ontology learning. As a result, the ontologies learned by CRCTOL are more concise and contain a richer semantics in terms of the range and number of semantic relations compared with alternative systems. We present two case studies where CRCTOL is used to build a terrorism domain ontology and a sport event domain ontology. At the component level, quantitative evaluation by comparing with Text-To-Onto and its successor Text2Onto has shown that CRCTOL is able to extract concepts and semantic relations with a significantly higher level of accuracy. At the ontology level, the quality of the learned ontologies is evaluated by either employing a set of quantitative and qualitative methods including analyzing the graph structural property, comparison to WordNet, and expert rating, or directly comparing with a human-edited benchmark ontology, demonstrating the high quality of the ontologies learned.
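    CRCTOL's rule-based relation extraction is not reproduced here; as an illustration of the general lexico-syntactic technique it belongs to, a toy Hearst-style "such as" pattern for taxonomic (is-a) relations (the pattern and the sentence are assumptions, not the system's rules):

      import re

      # "X such as Y" suggests is-a(Y, X); captures up to two words per side.
      PATTERN = re.compile(r"(\w+(?: \w+)?) such as (\w+(?: \w+)?)")

      sentence = "The ontology covers sports events such as marathons."
      match = PATTERN.search(sentence)
      if match:
          hypernym, hyponym = match.group(1), match.group(2)
          print(f"is-a({hyponym}, {hypernym})")  # is-a(marathons, sports events)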
  20. Doerr, M.: ¬The CIDOC CRM, an ontological approach to schema heterogeneity (2005) 0.02
    Abstract
    The creation of the World Wide Web has had a profound impact on the ease with which information can be distributed and presented. Now, with more and more information becoming available, there is an increasing demand for targeted global search, comparative studies, data transfer and data migration between heterogeneous sources of cultural and scholarly content. This requires interoperability not only at the encoding level - a task solved well by XML, for instance - but also at the more complex semantic level, where the characteristics of the domain lie. Meanwhile, the reality of semantic interoperability is frustrating. In the cultural area alone, dozens of "standard" and hundreds of proprietary metadata and data structures exist, as well as hundreds of terminology systems. Core systems like the Dublin Core represent a common denominator far too small to fulfil advanced requirements. Overstretching its already limited semantics in order to capture complex contents leads to further loss of meaning.

Languages

  • e 146
  • d 11
  • pt 1
  • sp 1

Types

  • a 121
  • el 37
  • m 11
  • x 9
  • n 2
  • p 2
  • s 2
  • r 1