Search (3 results, page 1 of 1)

  • Filter: classification_ss:"ST 302"
  1. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.05
    0.045525562 = product of:
      0.091051124 = sum of:
        0.091051124 = sum of:
          0.06346643 = weight(_text_:intelligence in 168) [ClassicSimilarity], result of:
            0.06346643 = score(doc=168,freq=2.0), product of:
              0.2703623 = queryWeight, product of:
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.050899457 = queryNorm
              0.23474586 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.3116927 = idf(docFreq=592, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
          0.027584694 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
            0.027584694 = score(doc=168,freq=2.0), product of:
              0.17824122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050899457 = queryNorm
              0.15476047 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
      0.5 = coord(1/2)
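The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output, and its arithmetic can be reproduced directly from the printed figures: each term's score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, with the per-term scores summed and scaled by coord(1/2). A minimal sketch (constants copied from the tree above):

```python
import math

# Figures copied from the explain tree for doc 168.
query_norm = 0.050899457

def term_score(freq, idf, field_norm):
    """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
    query_weight = idf * query_norm          # idf * queryNorm
    tf = math.sqrt(freq)                     # tf = sqrt(termFreq)
    field_weight = tf * idf * field_norm     # tf * idf * fieldNorm
    return query_weight * field_weight

w_intelligence = term_score(freq=2.0, idf=5.3116927, field_norm=0.03125)
w_22           = term_score(freq=2.0, idf=3.5018296, field_norm=0.03125)

# Sum of the matching clauses, scaled by coord(1/2): one of the two
# top-level query clauses matched this document.
score = (w_intelligence + w_22) * 0.5
print(score)  # close to the printed score, 0.045525562
```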
    
    Abstract
    Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching, and other related problems.
The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20.06.2012 19:08:22
  2. Nagao, M.: Knowledge and inference (1990) 0.04
    0.04434851 = product of:
      0.08869702 = sum of:
        0.08869702 = product of:
          0.17739405 = sum of:
            0.17739405 = weight(_text_:intelligence in 3304) [ClassicSimilarity], result of:
              0.17739405 = score(doc=3304,freq=10.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.6561346 = fieldWeight in 3304, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3304)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
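Compared with the first result, only the term frequency (10 occurrences of "intelligence"), the fieldNorm, and the two nested coord(1/2) factors differ. Note how ClassicSimilarity damps raw frequency: tf = sqrt(10) ≈ 3.162, not 10, so ten occurrences score only about twice as high as two. A sketch of the same arithmetic:

```python
import math

query_norm = 0.050899457
idf = 5.3116927                       # idf(docFreq=592, maxDocs=44218)

tf = math.sqrt(10.0)                  # tf = sqrt(termFreq) = 3.1622777
field_weight = tf * idf * 0.0390625   # tf * idf * fieldNorm
query_weight = idf * query_norm
weight = query_weight * field_weight  # close to the printed 0.17739405

# Two nested levels of coord(1/2) halve the weight twice.
score = weight * 0.5 * 0.5
print(score)  # close to the printed score, 0.04434851
```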
    
    Abstract
    Knowledge and Inference discusses an important problem for software systems: how do we represent knowledge and ideas on a computer, and how do we use inference to solve problems on a computer? The book addresses the problems of knowledge and inference with the aim of merging artificial intelligence and library science. It begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in it. Subsequent chapters cover central topics in artificial intelligence: search and problem solving, methods of constructing proofs, and the use of knowledge in looking for a proof. There is also a discussion of how to use the knowledge system. The final chapter describes a popular expert system and the tools for building expert systems, using an example based on Expert Systems - A Practical Introduction by P. Sell (Macmillan, 1985); this type of software is called an "expert system shell." The book was written as a textbook for undergraduate students, covering only the basics but explaining them in as much detail as possible.
    LCSH
    Artificial intelligence
    Subject
    Artificial intelligence
  3. Beierle, C.; Kern-Isberner, G.: Methoden wissensbasierter Systeme : Grundlagen, Algorithmen, Anwendungen (2008) 0.02
    0.015866607 = product of:
      0.031733215 = sum of:
        0.031733215 = product of:
          0.06346643 = sum of:
            0.06346643 = weight(_text_:intelligence in 4622) [ClassicSimilarity], result of:
              0.06346643 = score(doc=4622,freq=2.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.23474586 = fieldWeight in 4622, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4622)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
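All three explain trees reuse the same idf figures, and these match ClassicSimilarity's idf formula, idf = 1 + ln(maxDocs / (docFreq + 1)), where the +1 in the denominator guards against division by zero. A quick check against the values printed above:

```python
import math

def classic_idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

print(classic_idf(592, 44218))   # close to 5.3116927 ("intelligence")
print(classic_idf(3622, 44218))  # close to 3.5018296 ("22")
```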
    
    Series
    Studium: Computational intelligence