Search (201 results, page 1 of 11)

  • × theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.33
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
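    Note on the relevance figures: the number after each title is the record's Lucene score for the active query. As a sketch only (assuming the default ClassicSimilarity ranking; exact boosts and norms depend on the installation), such a score combines per-term TF-IDF weights as
      \[
      \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d)\cdot \mathrm{queryNorm}(q)\cdot \sum_{t \in q} \mathrm{tf}(t,d)\cdot \mathrm{idf}(t)^{2}\cdot \mathrm{fieldNorm}(t,d),
      \qquad
      \mathrm{tf}(t,d)=\sqrt{\mathrm{freq}(t,d)},\quad
      \mathrm{idf}(t)=1+\ln\frac{N}{\mathrm{docFreq}(t)+1}.
      \]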
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.25
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.18
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  4. Green, R.; Panzer, M.: The ontological character of classes in the Dewey Decimal Classification 0.04
    Abstract
    Classes in the Dewey Decimal Classification (DDC) system function as neighborhoods around focal topics in captions and notes. Topical neighborhoods are generated through specialization and instantiation, complex topic synthesis, index terms and mapped headings, hierarchical force, rules for choosing between numbers, development of the DDC over time, and use of the system in classifying resources. Implications of representation using a formal knowledge representation language are explored.
    Source
    Paradigms and conceptual systems in knowledge organization: Proceedings of the Eleventh International ISKO conference, Rome, 23-26 February 2010, ed. Claudio Gnoli, Indeks, Frankfurt M
  5. Siebers, Q.H.J.F.: Implementing inference rules in the Topic maps model (2006) 0.02
    Abstract
    This paper supplies a theoretical approach to implementing inference rules in the Topic Maps model. Topic Maps is an ISO standard that allows for the modeling and representation of knowledge in an interchangeable form that can be extended by inference rules. These rules specify conditions for inferable facts. Any implementation requires a syntax for storage in a file, a storage model and method for processing, and a system to keep track of changes in the inferred facts. The most flexible and optimisable storage model is a controlled cache, which gives options for processing. Keeping track of changes is done by listeners. One of the most powerful applications of inference rules in Topic Maps is interoperability: by mapping ontologies to each other using inference rules as converters, it is possible to exchange extendable knowledge. Any implementation must choose methods and options optimized for the system it runs on, with the facilities available. Further research is required to analyze the optimization trade-offs between these options.
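    As an illustration of the ingredients named above (condition-based rules, a controlled cache of inferred facts, change listeners), the following minimal Python sketch shows one way such a store could look; it is not Siebers's implementation or any Topic Maps API, and all names in it are invented:

      # Toy sketch: inference rules as conditions over a set of assertions,
      # with the inferred facts kept in a controlled cache that change
      # listeners invalidate. Illustrative only; not a Topic Maps engine.
      class InferredStore:
          def __init__(self, rules):
              self.base = set()        # explicitly asserted facts (tuples)
              self.rules = rules       # callables: set of facts -> new facts
              self._cache = None       # controlled cache of inferred facts
              self.listeners = []      # called whenever the base changes

          def add(self, fact):
              self.base.add(fact)
              self._cache = None       # keep track of changes: invalidate
              for listener in self.listeners:
                  listener(fact)

          def facts(self):
              if self._cache is None:  # recompute lazily, then reuse
                  inferred = set(self.base)
                  changed = True
                  while changed:       # apply all rules to a fixpoint
                      new = set().union(*(rule(inferred) for rule in self.rules))
                      changed = not new <= inferred
                      inferred |= new
                  self._cache = inferred
              return self._cache

      # Example rule: "part-of" is transitive.
      def transitive_part_of(facts):
          return {(a, "part-of", c)
                  for (a, p, b) in facts if p == "part-of"
                  for (b2, q, c) in facts if q == "part-of" and b2 == b}

      store = InferredStore([transitive_part_of])
      store.listeners.append(lambda fact: print("asserted:", fact))
      store.add(("valve", "part-of", "wheel"))
      store.add(("wheel", "part-of", "car"))
      print(("valve", "part-of", "car") in store.facts())   # True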
  6. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.02
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules along with the initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprised of multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of our knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  7. Frické, M.: Logic and the organization of information (2012) 0.02
    Date
    16. 3.2012 11:26:29
    Footnote
    Review in: J. Doc. 70(2014) no.4: "Books on the organization of information and knowledge, aimed at a library/information audience, tend to fall into two clear categories. Most are practical and pragmatic, explaining the "how" as much or more than the "why". Some are theoretical, in part or in whole, showing how the practice of classification, indexing, resource description and the like relates to philosophy, logic, and other foundational bases; the books by Langridge (1992) and by Svenonius (2000) are well-known examples of this latter kind. To this category certainly belongs a recent book by Martin Frické (2012). The author takes the reader on an extended tour through a variety of aspects of information organization, including classification and taxonomy, alphabetical vocabularies and indexing, cataloguing and FRBR, and aspects of the semantic web. The emphasis throughout is on showing how practice is, or should be, underpinned by formal structures; there is a particular emphasis on first order predicate calculus. The advantages of a greater, and more explicit, use of symbolic logic are a recurring theme of the book. There is a particularly commendable historical dimension, often omitted in texts on this subject. It cannot be said that this book is entirely an easy read, although it is well written with a helpful index, and its arguments are generally well supported by clear and relevant examples. It is thorough and detailed, but thereby seems better geared to the needs of advanced students and researchers than to the practitioners who are suggested as a main market. For graduate students in library/information science and related disciplines, in particular, this will be a valuable resource. I would place it alongside Svenonius's book as the best insight into the theoretical "why" of information organization. It has evoked a good deal of interest, including a set of essay commentaries in Journal of Information Science (Gilchrist et al., 2013). Introducing these, Alan Gilchrist rightly says that Frické deserves a salute for making explicit the fundamental relationship between the ancient discipline of logic and modern information organization. If information science is to continue to develop, and make a contribution to the organization of the information environments of the future, then this book sets the groundwork for the kind of studies which will be needed." (D. Bawden)
  8. Sartori, F.; Grazioli, L.: Metadata guiding knowledge engineering : a practical approach (2014) 0.02
    Abstract
    This paper presents an approach to the analysis, design and development of Knowledge Based Systems based on the Knowledge Artifact concept. Knowledge Artifacts can be understood as means to acquire, represent and maintain the knowledge involved in complex problem solving activities. A complex problem is typically made of a huge number of parts that are put together according to a first set of constraints (i.e. the procedural knowledge), dependent on the functional properties it must satisfy, and a second set of rules, dependent on what the expert thinks about the problem and how he/she would represent it. The paper illustrates a way to unify both types of knowledge into a Knowledge Artifact, exploiting the Ontologies, Influence Nets and Task Structures formalisms and the metadata paradigm.
    Date
    19.12.2014 19:26:51
  9. Frické, M.: Logical division (2016) 0.02
    Content
    Contents: 1. Introduction: Kinds of Division; 2. The Basics of Logical Division; 3. History; 4. Formalization; 5. The Rules; 6. The Status of the Rules; 7. The Process of Logical Division; 8. Conclusion
    Source
    ISKO Encyclopedia of Knowledge Organization, ed. by B. Hjoerland. [http://www.isko.org/cyclo/logical_division]
  10. Nguyen, P.H.P.; Kaneiwa, K.; Nguyen, M.-Q.: Ontology inferencing rules and operations in conceptual structure theory (2010) 0.02
    Abstract
    This paper describes in detail the inferencing rules and operations concerning an ontology formalism previously proposed under Conceptual Structure Theory. The ontology consists of hierarchies of concept, relation and meta-relation types, and formal relationships between them, in particular between arguments of relation and meta-relation types. Inferencing rules are described, as well as operations to maintain the ontology in a semantically consistent state at all times. The main aim of the paper is to provide a blueprint for the implementation of ontologies in the future Semantic Web.
  11. Wei, W.; Liu, Y.-P.; Wei, L-R.: Feature-level sentiment analysis based on rules and fine-grained domain ontology (2020) 0.02
    Abstract
    Mining product reviews and sentiment analysis are of great significance, whether for academic research purposes or for optimizing business strategies. We propose a feature-level sentiment analysis framework based on rules parsing and a fine-grained domain ontology for Chinese reviews. The fine-grained ontology is used to describe synonymous expressions of product features, which are reflected in word changes in online reviews. First, a semiautomatic construction method is developed by using Word2Vec for the fine-grained ontology. Then, feature-level sentiment analysis that combines rules parsing and the fine-grained domain ontology is conducted to extract explicit and implicit features from product reviews. Finally, the domain sentiment dictionary and context sentiment dictionary are established to identify sentiment polarities for the extracted feature-sentiment combinations. An experiment is conducted on the basis of product reviews crawled from Chinese e-commerce websites. The results demonstrate the effectiveness of our approach.
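    A minimal, self-contained sketch of the general idea (not the authors' system; vocabulary, dictionary and window rule are invented for illustration): a tiny "ontology" maps synonymous expressions to canonical product features, and a simple rule pairs each matched feature with nearby sentiment words:

      # Toy feature-level sentiment tagging: map synonym variants to
      # canonical features, then score each feature from sentiment words
      # in a small window around it. Data and rules are illustrative only.
      feature_ontology = {          # canonical feature -> synonymous tokens
          "battery": {"battery", "charge"},
          "screen":  {"screen", "display", "panel"},
      }
      sentiment_dict = {"great": 1, "sharp": 1, "poor": -1, "dies": -1}

      def analyze(review):
          tokens = review.lower().replace(",", " ").split()
          results = []
          for i, tok in enumerate(tokens):
              feature = next((f for f, syns in feature_ontology.items()
                              if tok in syns), None)
              if feature is None:
                  continue
              # Rule: sum sentiment words within two tokens of the feature.
              window = tokens[max(0, i - 2): i + 3]
              polarity = sum(sentiment_dict.get(w, 0) for w in window)
              label = ("positive" if polarity > 0
                       else "negative" if polarity < 0 else "neutral")
              results.append((feature, label))
          return results

      print(analyze("The display is sharp, but the battery dies quickly"))
      # -> [('screen', 'positive'), ('battery', 'negative')]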
  12. RDF Semantics (2004) 0.02
    Abstract
    This is a specification of a precise semantics, and corresponding complete systems of inference rules, for the Resource Description Framework (RDF) and RDF Schema (RDFS).
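    For orientation, one of the RDFS entailment rules licensed by this semantics (rdfs9: instances of a subclass are instances of its superclasses) can be applied by plain forward chaining. The sketch below uses bare string triples and an invented example vocabulary rather than a real RDF library:

      # Forward chaining with the RDFS entailment rule rdfs9:
      #   (c rdfs:subClassOf d) and (x rdf:type c)  =>  (x rdf:type d)
      TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

      def rdfs9_closure(triples):
          triples = set(triples)
          changed = True
          while changed:                    # repeat until nothing new appears
              inferred = {(x, TYPE, d)
                          for (c, p, d) in triples if p == SUBCLASS
                          for (x, q, c2) in triples if q == TYPE and c2 == c}
              changed = not inferred <= triples
              triples |= inferred
          return triples

      graph = {
          ("ex:Thesaurus", SUBCLASS, "ex:KOS"),
          ("ex:KOS", SUBCLASS, "ex:InformationResource"),
          ("ex:AGROVOC", TYPE, "ex:Thesaurus"),
      }
      print(("ex:AGROVOC", TYPE, "ex:InformationResource") in rdfs9_closure(graph))  # True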
  13. Jansen, L.: Four rules for classifying social entities (2014) 0.02
    Date
    26. 1.2017 9:53:39
  14. Xu, G.; Cao, Y.; Ren, Y.; Li, X.; Feng, Z.: Network security situation awareness based on semantic ontology and user-defined rules for Internet of Things (2017) 0.01
    Abstract
    Internet of Things (IoT) brings the third development wave of the global information industry, which makes users, network and perception devices cooperate more closely. However, if IoT has security problems, it may cause a variety of damage and even threaten human lives and property. To improve the abilities of monitoring, providing emergency response and predicting the development trend of IoT security, a new paradigm called network security situation awareness (NSSA) is proposed. However, it is limited in its ability to mine and evaluate security situation elements from multi-source heterogeneous network security information. To solve this problem, this paper proposes an IoT network security situation awareness model using a situation reasoning method based on semantic ontology and user-defined rules. Ontology technology can provide a unified and formalized description to solve the problem of semantic heterogeneity in the IoT security domain. In this paper, four key sub-domains are proposed to reflect an IoT security situation: context, attack, vulnerability and network flow. Further, user-defined rules can compensate for the limited description ability of ontology, and hence can enhance the reasoning ability of our proposed ontology model. The examples in real IoT scenarios show that network security situation awareness adopting our situation reasoning method is more comprehensive and has more powerful reasoning abilities than traditional NSSA methods. [http://ieeexplore.ieee.org/abstract/document/7999187/]
  15. Pankowski, T.: Ontological databases with faceted queries (2022) 0.01
    Abstract
    The success of the use of ontology-based systems depends on efficient and user-friendly methods of formulating queries against the ontology. We propose a method to query a class of ontologies, called facet ontologies (fac-ontologies), using a faceted human-oriented approach. A fac-ontology has two important features: (a) a hierarchical view of it can be defined as a nested facet over this ontology and the view can be used as a faceted interface to create queries and to explore the ontology; (b) the ontology can be converted into an ontological database, the ABox of which is stored in a database, and the faceted queries are evaluated against this database. We show that the proposed faceted interface makes it possible to formulate queries that are semantically equivalent to $${\mathcal{SROIQ}}^{Fac}$$, a limited version of the $${\mathcal{SROIQ}}$$ description logic. The TBox of a fac-ontology is divided into a set of rules defining intensional predicates and a set of constraint rules to be satisfied by the database. We identify a class of so-called reflexive weak cycles in a set of constraint rules and propose a method to deal with them in the chase procedure. The considerations are illustrated with solutions implemented in the DAFO system (data access based on faceted queries over ontologies).
  16. King, B.E.; Reinold, K.: Finding the concept, not just the word : a librarian's guide to ontologies and semantics (2008) 0.01
    Abstract
    Aimed at students and professionals within Library and Information Services (LIS), this book is about the power and potential of ontologies to enhance the electronic search process. The book will compare search strategies and results in the current search environment and demonstrate how these could be transformed using ontologies and concept searching. Simple descriptions, visual representations, and examples of ontologies will bring a full understanding of how these concept maps are constructed to enhance retrieval through natural language queries. Readers will gain a sense of how ontologies are currently being used and how they could be applied in the future, encouraging them to think about how their own work and their users' search experiences could be enhanced by the creation of a customized ontology. Key Features Written by a librarian, for librarians (most work on ontologies is written and read by people in computer science and knowledge management) Written by a librarian who has created her own ontology and performed research on its capabilities Written in easily understandable language, with concepts broken down to the basics The Author Ms. King is the Information Specialist at the Center on Media and Child Health at Children's Hospital Boston. She is a graduate of Smith College (B.A.) and Simmons College (M.L.I.S.). She is an active member of the Special Libraries Association, and was the recipient of the 2005 SLA Innovation in Technology Award for the creation of a customized media effects ontology used for semantic searching. Readership The book is aimed at practicing librarians and information professionals as well as graduate students of Library and Information Science. Contents Introduction Part 1: Understanding Ontologies - organising knowledge; what is an ontology? How are ontologies different from other knowledge representations? How are ontologies currently being used? Key concepts Ontologies in semantic search - determining whether a search was successful; what does semantic search have to offer? 
Semantic techniques; semantic searching behind the scenes; key concepts Creating an ontology - how to create an ontology; key concepts Building an ontology from existing components - choosing components; customizing your knowledge structure; key concepts Part 2: Semantic Technologies Natural language processing - tagging parts of speech; grammar-based NLP; statistical NLP; semantic analysis; current applications of NLP; key concepts Using metadata to add semantic information - structured languages; metadata tagging; semantic tagging; key concepts Other semantic capabilities - semantic classification; synsets; topic maps; rules and inference; key concepts Part 3: Case Studies: Theory into Practice Biogen Idec: using semantics in drug discovery research - Biogen Idec's solution; the future The Center on Media and Child Health: using an ontology to explore the effects of media - building the ontology; choosing the source; implementing and comparing to Boolean search; the future Partners HealthCare System: semantic technologies to improve clinical decision support - the medical appointment; Partners HealthCare System's solution; lessons learned; the future MINDSWAP: using ontologies to aid terrorism intelligence gathering - building, using and maintaining the ontology; sharing information with other experts; future plans Part 4: Advanced Topics Languages for expressing ontologies - XML; RDF; OWL; SKOS; Ontology language features - comparison chart Tools for building ontologies - basic criteria when evaluating ontologies Part 5: Transitions to the Future
  17. Buxton, A.: Ontologies and classification of chemicals : can they help each other? (2011) 0.01
    Abstract
    The chemistry schedule in the Universal Decimal Classification (UDC) is badly in need of revision. In many places it is enumerative rather than synthetic (giving rules for constructing numbers for any compound required). In principle, chemistry should be the ideal subject for a synthetic classification, but many common compounds have complex formulae and a synthetic system becomes unwieldy. Also, all compounds belong to several hierarchies, e.g. chloroquine is a heterocycle, an aromatic compound, an amine, an antimalarial drug, etc., and rules need to be drawn up as to which ones take precedence and which ones should be taken into account in classifying a compound. There are obvious similarities between a classification and an ontology. This paper looks at existing ontologies for chemistry, especially ChEBI, which is one of the largest, to examine how a classification and an ontology might draw on each other and what the problem areas are. An ontology might help in creating an index to a classification (for chemicals not listed or to provide access by facets not used in the classification) and a classification could provide a hierarchy to use in an ontology.
  18. Vlachidis, A.; Tudhope, D.: A knowledge-based approach to information extraction for semantic interoperability in the archaeology domain (2016) 0.01
    Abstract
    The article presents a method for automatic semantic indexing of archaeological grey-literature reports using empirical (rule-based) Information Extraction techniques in combination with domain-specific knowledge organization systems. The semantic annotation system (OPTIMA) performs the tasks of Named Entity Recognition, Relation Extraction, Negation Detection, and Word-Sense Disambiguation using hand-crafted rules and terminological resources for associating contextual abstractions with classes of the standard ontology CIDOC Conceptual Reference Model (CRM) for cultural heritage and its archaeological extension, CRM-EH. Relation Extraction (RE) performance benefits from a syntactic-based definition of RE patterns derived from domain-oriented corpus analysis. The evaluation also shows clear benefit in the use of assistive natural language processing (NLP) modules relating to Word-Sense Disambiguation, Negation Detection, and Noun Phrase Validation, together with controlled thesaurus expansion. The semantic indexing results demonstrate the capacity of rule-based Information Extraction techniques to deliver interoperable semantic abstractions (semantic annotations) with respect to the CIDOC CRM and archaeological thesauri. Major contributions include recognition of relevant entities using shallow parsing NLP techniques driven by a complementary use of ontological and terminological domain resources and empirical derivation of context-driven RE rules for the recognition of semantic relationships from phrases of unstructured text.
  19. Rousset, M.-C.; Atencia, M.; David, J.; Jouanot, F.; Ulliana, F.; Palombi, O.: Datalog revisited for reasoning in linked data (2017) 0.01
    Abstract
    Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links and properties on those entities. Equipping Linked Data with inference paves the way to make the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists in equipping RDF triplestores with Datalog inference rules. This rule language makes it possible to capture, in a uniform manner, OWL constraints that are useful in practice, such as property transitivity or symmetry, but also domain-specific rules with practical relevance for users in many domains of interest. The expressivity and the genericity of this framework are illustrated for modeling Linked Data applications and for developing inference algorithms. In particular, we show how it allows the problem of data linkage in Linked Data to be modeled as a reasoning problem on possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, whilst effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction. Rule examples of the kind the survey has in mind are sketched below.
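    For orientation, OWL-style property transitivity and symmetry, both mentioned in the abstract, can each be written as a single Datalog rule over a ternary triple predicate; the property names (partOf, marriedTo) are illustrative placeholders, not taken from the paper:
      \[
      \mathit{triple}(x, \mathrm{partOf}, z) \;\leftarrow\; \mathit{triple}(x, \mathrm{partOf}, y) \wedge \mathit{triple}(y, \mathrm{partOf}, z) \qquad \text{(transitivity)}
      \]
      \[
      \mathit{triple}(y, \mathrm{marriedTo}, x) \;\leftarrow\; \mathit{triple}(x, \mathrm{marriedTo}, y) \qquad \text{(symmetry)}
      \]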
  20. Advances in ontologies : Proceedings of the Sixth Australasian Ontology Workshop Adelaide, Australia, 7 December 2010 (2010) 0.01
    Content
    Contents: YAMATO: Yet Another More Advanced Top-level Ontology (invited talk) - Riichiro Mizoguchi A Visual Analytics Approach to Augmenting Formal Concepts with Relational Background Knowledge in a Biological Domain - Elma Akand, Michael Bain, Mark Temple Combining Ontologies And Natural Language - Wolf Fischer, Bernhard Bauer Comparison of Thesauri and Ontologies from a Semiotic Perspective - Daniel Kless, Simon Milton Fast Classification in Protégé: Snorocket as an OWL2 EL Reasoner - Michael Lawley, Cyril Bousquet Ontological Support for Consistency Checking of Engineering Design Workflows - Franz Maier, Wolfgang Mayer, Markus Stumptner Ontology Inferencing Rules and Operations in Conceptual Structure Theory - Philip H.P. Nguyen, Ken Kaneiwa, Minh-Quang Nguyen An Axiomatisation of Basic Formal Ontology with Projection Functions - Kerry Trentelman, Barry Smith Making Sense of Spreadsheet Data: A Case of Semantic Water Data Translation - Yanfeng Shu, David Ratcliffe, Geoffrey Squire, Michael Compton

Languages

  • e 174
  • d 21
  • pt 2

Types

  • a 155
  • el 35
  • m 19
  • x 10
  • s 7
  • p 2
  • n 1
  • r 1
