Search (220 results, page 1 of 11)

  • theme_ss:"Wissensrepräsentation"
  1. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.21
    0.21262538 = product of:
      0.42525077 = sum of:
        0.042344213 = product of:
          0.12703264 = sum of:
            0.12703264 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.12703264 = score(doc=5820,freq=2.0), product of:
                0.33904418 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039991006 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.17965128 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.17965128 = score(doc=5820,freq=4.0), product of:
            0.33904418 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039991006 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.17965128 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.17965128 = score(doc=5820,freq=4.0), product of:
            0.33904418 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039991006 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.023603994 = weight(_text_:computer in 5820) [ClassicSimilarity], result of:
          0.023603994 = score(doc=5820,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.16150802 = fieldWeight in 5820, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.5 = coord(4/8)
    
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Imprint
    Pittsburgh, PA : Carnegie Mellon University, School of Computer Science, Language Technologies Institute
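     The ranking breakdowns shown for each hit are Lucene ClassicSimilarity (TF-IDF) "explain" trees. As a sanity check on the arithmetic, here is a minimal Python sketch that reproduces the leaf score for the term "3a" in result 1 (doc 5820) from the statistics printed in the tree; the tf and idf formulas are the standard ClassicSimilarity definitions, while queryNorm and fieldNorm are copied from the output rather than recomputed.

       import math

       def idf(doc_freq, max_docs):
           # ClassicSimilarity inverse document frequency
           return 1.0 + math.log(max_docs / (doc_freq + 1))

       def tf(freq):
           # ClassicSimilarity term frequency
           return math.sqrt(freq)

       query_norm = 0.039991006   # copied from the explain output
       field_norm = 0.03125       # copied from the explain output (encodes field length)

       term_idf = idf(24, 44218)                       # ~8.478011, as in the tree
       query_weight = term_idf * query_norm            # ~0.33904418 = queryWeight
       field_weight = tf(2.0) * term_idf * field_norm  # ~0.3746787  = fieldWeight
       leaf_score = query_weight * field_weight        # ~0.12703264

       # The tree then applies coord(1/3) to this clause, sums the sibling
       # clauses, and applies coord(4/8) to reach the final 0.21262538.
       print(leaf_score / 3)  # matches 0.042344213 in the tree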
  2. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.17
    0.16673034 = product of:
      0.44461423 = sum of:
        0.06351632 = product of:
          0.19054894 = sum of:
            0.19054894 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.19054894 = score(doc=400,freq=2.0), product of:
                0.33904418 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039991006 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.19054894 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.19054894 = score(doc=400,freq=2.0), product of:
            0.33904418 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039991006 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.19054894 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.19054894 = score(doc=400,freq=2.0), product of:
            0.33904418 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039991006 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.375 = coord(3/8)
    
    Content
     Cf.: https://aclanthology.org/D19-5317.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.11
    0.11115356 = product of:
      0.2964095 = sum of:
        0.042344213 = product of:
          0.12703264 = sum of:
            0.12703264 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.12703264 = score(doc=701,freq=2.0), product of:
                0.33904418 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039991006 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.12703264 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.12703264 = score(doc=701,freq=2.0), product of:
            0.33904418 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039991006 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.12703264 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.12703264 = score(doc=701,freq=2.0), product of:
            0.33904418 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039991006 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.375 = coord(3/8)
    
    Content
     Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Frisch, A.M.; Allen, J.F.: Knowledge retrieval as limited inference (1982) 0.07
    0.07289844 = product of:
      0.19439584 = sum of:
        0.1064127 = weight(_text_:property in 5804) [ClassicSimilarity], result of:
          0.1064127 = score(doc=5804,freq=2.0), product of:
            0.25336683 = queryWeight, product of:
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.039991006 = queryNorm
            0.4199946 = fieldWeight in 5804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.046875 = fieldNorm(doc=5804)
        0.035405993 = weight(_text_:computer in 5804) [ClassicSimilarity], result of:
          0.035405993 = score(doc=5804,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.24226204 = fieldWeight in 5804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=5804)
        0.052577145 = weight(_text_:network in 5804) [ClassicSimilarity], result of:
          0.052577145 = score(doc=5804,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.29521978 = fieldWeight in 5804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.046875 = fieldNorm(doc=5804)
      0.375 = coord(3/8)
    
    Abstract
     Artificial intelligence reasoning systems commonly employ a knowledge base module that stores a set of facts expressed in a representation language and provides facilities to retrieve these facts. A retriever could range from a simple pattern matcher to a complete logical inference system. In practice, most fall in between these extremes, providing some forms of inference but not others. Unfortunately, most of these retrievers are not precisely defined. We view knowledge retrieval as a limited form of inference operating on the stored facts. This paper is concerned with our method of using first-order predicate calculus to formally specify a limited inference mechanism and to a lesser extent with the techniques for producing an efficient program that meets the specification. Our ideas are illustrated by developing a simplified version of a retriever used in the knowledge base of the Rochester Dialog System. The interesting property of this retriever is that it performs typical semantic network inferences such as inheritance but not arbitrary logical inferences such as modus ponens.
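     A minimal sketch of the "limited inference" idea, assuming an invented taxonomy and fact base (illustrative Python, not the Rochester retriever itself): retrieval succeeds through ISA inheritance, but attempts no general rule chaining such as modus ponens.

       ISA = {"penguin": "bird", "bird": "animal"}   # hypothetical taxonomy
       FACTS = {("has-part", "bird", "wings")}       # hypothetical stored facts

       def isa_chain(cls):
           # yield cls and all of its ISA ancestors
           while cls is not None:
               yield cls
               cls = ISA.get(cls)

       def retrieve(pred, subj, obj):
           # succeed if the fact is stored for subj or any ancestor;
           # inheritance is the only inference performed
           return any((pred, c, obj) in FACTS for c in isa_chain(subj))

       print(retrieve("has-part", "penguin", "wings"))  # True, via penguin -> bird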
    Series
    Lecture notes in computer science; vol 138
  5. Innovations and advanced techniques in systems, computing sciences and software engineering (2008) 0.04
    0.038816433 = product of:
      0.15526573 = sum of:
        0.09330297 = weight(_text_:computer in 4319) [ClassicSimilarity], result of:
          0.09330297 = score(doc=4319,freq=20.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.63841647 = fieldWeight in 4319, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
        0.061962765 = weight(_text_:network in 4319) [ClassicSimilarity], result of:
          0.061962765 = score(doc=4319,freq=4.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.34791988 = fieldWeight in 4319, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
      0.25 = coord(2/8)
    
    Abstract
     Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. The volume includes selected papers from the conference proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS 2007), which was part of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering (CISSE 2007).
    Content
     Contents: Image and Pattern Recognition: Compression, Image Processing, Signal Processing Architectures, Signal Processing for Communication, Signal Processing Implementation, Speech Compression, and Video Coding Architectures. Languages and Systems: Algorithms, Databases, Embedded Systems and Applications, File Systems and I/O, Geographical Information Systems, Kernel and OS Structures, Knowledge Based Systems, Modeling and Simulation, Object Based Software Engineering, Programming Languages, and Programming Models and Tools. Parallel Processing: Distributed Scheduling, Multiprocessing, Real-time Systems, Simulation Modeling and Development, and Web Applications. New Trends in Computing: Computers for People of Special Needs, Fuzzy Inference, Human Computer Interaction, Incremental Learning, Internet-based Computing Models, Machine Intelligence, Natural Language Processing, Neural Networks, and Online Decision Support Systems.
    LCSH
    Computer Science
    Computer Systems Organization and Communication Networks
    Computer network architectures
    Subject
    Computer Science
    Computer Systems Organization and Communication Networks
    Computer network architectures
  6. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.04
    0.036301516 = product of:
      0.14520606 = sum of:
        0.08867725 = weight(_text_:property in 633) [ClassicSimilarity], result of:
          0.08867725 = score(doc=633,freq=2.0), product of:
            0.25336683 = queryWeight, product of:
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.039991006 = queryNorm
            0.3499955 = fieldWeight in 633, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.0390625 = fieldNorm(doc=633)
        0.05652882 = sum of:
          0.029437674 = weight(_text_:resources in 633) [ClassicSimilarity], result of:
            0.029437674 = score(doc=633,freq=2.0), product of:
              0.14598069 = queryWeight, product of:
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.039991006 = queryNorm
              0.20165458 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
          0.027091147 = weight(_text_:22 in 633) [ClassicSimilarity], result of:
            0.027091147 = score(doc=633,freq=2.0), product of:
              0.1400417 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.039991006 = queryNorm
              0.19345059 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
      0.25 = coord(2/8)
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
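     The merge step described above can be pictured with a small Python sketch, assuming topics are keyed by shared subject identifiers (the identifiers and field names below are invented; real merging follows the topic-map standard's rules): topics from the two faceted structures that share a subject identifier collapse into one topic, linking the underlying hypertext resources.

       map_a = {"si:facet/teaching": {"names": {"Teaching"},
                                      "occurrences": {"dissertation A, ch. 2"}}}
       map_b = {"si:facet/teaching": {"names": {"Ensino"},
                                      "occurrences": {"dissertation B, ch. 4"}}}

       def merge(a, b):
           # topics sharing a subject identifier become a single topic
           merged = {}
           for si in a.keys() | b.keys():
               ta, tb = a.get(si, {}), b.get(si, {})
               merged[si] = {k: ta.get(k, set()) | tb.get(k, set())
                             for k in ("names", "occurrences")}
           return merged

       print(merge(map_a, map_b))  # one topic linking both dissertations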
    Date
    22. 2.2013 11:39:23
  7. Hoekstra, R.: BestMap: context-aware SKOS vocabulary mappings in OWL 2 (2009) 0.04
    0.03618863 = product of:
      0.14475451 = sum of:
        0.124148145 = weight(_text_:property in 1574) [ClassicSimilarity], result of:
          0.124148145 = score(doc=1574,freq=2.0), product of:
            0.25336683 = queryWeight, product of:
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.039991006 = queryNorm
            0.4899937 = fieldWeight in 1574, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1574)
        0.020606373 = product of:
          0.041212745 = sum of:
            0.041212745 = weight(_text_:resources in 1574) [ClassicSimilarity], result of:
              0.041212745 = score(doc=1574,freq=2.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.28231642 = fieldWeight in 1574, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1574)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     This paper describes an approach to SKOS vocabulary mapping that takes into account the context in which vocabulary terms are used in annotations. The standard vocabulary mapping properties in SKOS only allow for binary mappings between concepts. In the BestMap ontology, annotated resources are the contexts in which annotations coincide, allowing finer-grained control over when mappings hold. A mapping between two vocabularies is defined as a class that groups descriptions of a resource. We use the OWL 2 features for property chains, disjoint properties, union, intersection and negation, together with careful use of equivalence and subsumption, to specify these mappings.
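     The context-dependence can be illustrated with a small Python sketch (vocabulary terms and resources are invented, and the actual BestMap mechanism is expressed in OWL 2, not code): a mapping between two terms holds only for resources whose annotations contain both.

       annotations = {                      # hypothetical annotated resources
           "doc1": {"A:cats", "B:felines"},
           "doc2": {"A:cats", "B:pets"},
       }

       def mapping_holds(term_a, term_b, resource):
           # the annotated resource is the context in which the mapping holds
           anns = annotations.get(resource, set())
           return term_a in anns and term_b in anns

       print(mapping_holds("A:cats", "B:felines", "doc1"))  # True in this context
       print(mapping_holds("A:cats", "B:felines", "doc2"))  # False in this one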
  8. Giri, K.; Gokhale, P.: Developing a banking service ontology using Protégé, an open source software (2015) 0.03
    0.033014297 = product of:
      0.088038124 = sum of:
        0.029504994 = weight(_text_:computer in 2793) [ClassicSimilarity], result of:
          0.029504994 = score(doc=2793,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.20188503 = fieldWeight in 2793, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2793)
        0.04381429 = weight(_text_:network in 2793) [ClassicSimilarity], result of:
          0.04381429 = score(doc=2793,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.2460165 = fieldWeight in 2793, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2793)
        0.014718837 = product of:
          0.029437674 = sum of:
            0.029437674 = weight(_text_:resources in 2793) [ClassicSimilarity], result of:
              0.029437674 = score(doc=2793,freq=2.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.20165458 = fieldWeight in 2793, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2793)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
     Computers have been transformed from single isolated devices into entry points into a worldwide network of information exchange. Consequently, support in the exchange of data, information, and knowledge is becoming the key issue in computer technology today. The increasing volume of data available on the Web makes information retrieval a tedious and difficult task. Researchers are now exploring the possibility of creating a semantic web, in which meaning is made explicit, allowing machines to process and integrate web resources intelligently. The vision of the semantic web introduces the next generation of the Web by establishing a layer of machine-understandable data. The success of the semantic web depends on the easy creation, integration and use of semantic data, which will depend on web ontology. The faceted approach towards analyzing and representing knowledge given by S R Ranganathan would be useful in this regard; ontology development in different fields is one area where this approach could be applied. This paper presents a case of developing an ontology for the field of banking.
  9. Rousset, M.-C.; Atencia, M.; David, J.; Jouanot, F.; Ulliana, F.; Palombi, O.: Datalog revisited for reasoning in linked data (2017) 0.03
    0.02954556 = product of:
      0.11818224 = sum of:
        0.08867725 = weight(_text_:property in 3936) [ClassicSimilarity], result of:
          0.08867725 = score(doc=3936,freq=2.0), product of:
            0.25336683 = queryWeight, product of:
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.039991006 = queryNorm
            0.3499955 = fieldWeight in 3936, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3936)
        0.029504994 = weight(_text_:computer in 3936) [ClassicSimilarity], result of:
          0.029504994 = score(doc=3936,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.20188503 = fieldWeight in 3936, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3936)
      0.25 = coord(2/8)
    
    Abstract
     Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links and properties on those entities. Equipping Linked Data with inference paves the way to make the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists in equipping RDF triplestores with Datalog inference rules. This rule language allows one to capture in a uniform manner OWL constraints that are useful in practice, such as property transitivity or symmetry, but also domain-specific rules with practical relevance for users in many domains of interest. The expressivity and the genericity of this framework are illustrated for modeling Linked Data applications and for developing inference algorithms. In particular, we show how it allows us to model the problem of data linkage in Linked Data as a reasoning problem on possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, whilst effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction.
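     One of the rules the survey cites, property transitivity, can be sketched as a tiny forward-chaining loop in Python (the triples are invented; a deductive triplestore would evaluate such Datalog rules far more efficiently):

       triples = {("a", "partOf", "b"), ("b", "partOf", "c")}

       def saturate(ts, prop):
           # apply  prop(X,Z) :- prop(X,Y), prop(Y,Z)  until fixpoint
           changed = True
           while changed:
               derived = {(x, prop, z)
                          for (x, p1, y1) in ts if p1 == prop
                          for (y2, p2, z) in ts if p2 == prop and y2 == y1}
               changed = not derived <= ts
               ts = ts | derived
           return ts

       print(saturate(triples, "partOf"))  # adds ("a", "partOf", "c")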
    Series
     Lecture notes in computer science; vol.10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
  10. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.03
    0.025810145 = product of:
      0.10324058 = sum of:
        0.035405993 = weight(_text_:computer in 2418) [ClassicSimilarity], result of:
          0.035405993 = score(doc=2418,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.24226204 = fieldWeight in 2418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
        0.067834586 = sum of:
          0.03532521 = weight(_text_:resources in 2418) [ClassicSimilarity], result of:
            0.03532521 = score(doc=2418,freq=2.0), product of:
              0.14598069 = queryWeight, product of:
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.039991006 = queryNorm
              0.2419855 = fieldWeight in 2418, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.046875 = fieldNorm(doc=2418)
          0.032509375 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
            0.032509375 = score(doc=2418,freq=2.0), product of:
              0.1400417 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.039991006 = queryNorm
              0.23214069 = fieldWeight in 2418, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2418)
      0.25 = coord(2/8)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Series
    Lecture notes in computer science; vol.4172
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  11. Haslhofer, B.; Knežević, P.: ¬The BRICKS digital library infrastructure (2009) 0.03
    0.025695061 = product of:
      0.102780245 = sum of:
        0.081964664 = weight(_text_:europe in 3384) [ClassicSimilarity], result of:
          0.081964664 = score(doc=3384,freq=2.0), product of:
            0.24358861 = queryWeight, product of:
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.039991006 = queryNorm
            0.33648807 = fieldWeight in 3384, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3384)
        0.02081558 = product of:
          0.04163116 = sum of:
            0.04163116 = weight(_text_:resources in 3384) [ClassicSimilarity], result of:
              0.04163116 = score(doc=3384,freq=4.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.28518265 = fieldWeight in 3384, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3384)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     Service-oriented architectures, and the wider acceptance of decentralized peer-to-peer architectures enable the transition from integrated, centrally controlled systems to federated and dynamic configurable systems. The benefits for the individual service providers and users are robustness of the system, independence of central authorities and flexibility in the usage of services. This chapter provides details of the European project BRICKS, which aims at enabling integrated access to distributed resources in the Cultural Heritage domain. The target audience is broad and heterogeneous and involves cultural heritage and educational institutions, the research community, industry, and the general public. The project idea is motivated by the fact that the amount of digital information and digitized content is continuously increasing but still much effort has to be expended to discover and access it. The reasons for such a situation are heterogeneous data formats, restricted access, proprietary access interfaces, etc. Typical usage scenarios are integrated queries among several knowledge resources, e.g. to discover all Italian artifacts from the Renaissance in European museums. Another example is to follow the life cycle of historic documents, whose physical copies are distributed all over Europe. A standard method for integrated access is to place all available content and metadata in a central place. Unfortunately, such a solution requires a quite powerful and costly infrastructure if the volume of data is large. Considerations of cost optimization are highly important for Cultural Heritage institutions, especially if they are funded from public money. Therefore, better usage of the existing resources, i.e. a decentralized/P2P approach, promises to deliver a significantly less costly system, and does not mean sacrificing too much on the performance side.
  12. Drexel, G.: Knowledge engineering for intelligent information retrieval (2001) 0.02
    0.021995785 = product of:
      0.08798314 = sum of:
        0.035405993 = weight(_text_:computer in 4043) [ClassicSimilarity], result of:
          0.035405993 = score(doc=4043,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.24226204 = fieldWeight in 4043, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4043)
        0.052577145 = weight(_text_:network in 4043) [ClassicSimilarity], result of:
          0.052577145 = score(doc=4043,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.29521978 = fieldWeight in 4043, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.046875 = fieldNorm(doc=4043)
      0.25 = coord(2/8)
    
    Abstract
    This paper presents a clustered approach to designing an overall ontological model together with a general rule-based component that serves as a mapping device. By observational criteria, a multi-lingual team of experts excerpts concepts from general communication in the media. The team, then, finds equivalent expressions in English, German, French, and Spanish. On the basis of a set of ontological and lexical relations, a conceptual network is built up. Concepts are thought to be universal. Objects unique in time and space are identified by names and will be explained by the universals as their instances. Our approach relies on multi-relational descriptions of concepts. It provides a powerful tool for documentation and conceptual language learning. First and foremost, our multi-lingual, polyhierarchical ontology fills the gap of semantically-based information retrieval by generating enhanced and improved queries for internet search
    Series
    Lecture notes in computer science; vol.2004
  13. Quillian, M.R.: Word concepts : a theory and simulation of some basic semantic capabilities (1967) 0.02
    0.021995785 = product of:
      0.08798314 = sum of:
        0.035405993 = weight(_text_:computer in 4414) [ClassicSimilarity], result of:
          0.035405993 = score(doc=4414,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.24226204 = fieldWeight in 4414, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4414)
        0.052577145 = weight(_text_:network in 4414) [ClassicSimilarity], result of:
          0.052577145 = score(doc=4414,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.29521978 = fieldWeight in 4414, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.046875 = fieldNorm(doc=4414)
      0.25 = coord(2/8)
    
    Abstract
    In order to discover design principles for a large memory that can enable it to serve as the base of knowledge underlying human-like language behavior, experiments with a model memory are being performed. This model is built up within a computer by "recoding" a body of information from an ordinary dictionary into a complex network of elements and associations interconnecting them. Then, the ability of a program to use the resulting model memory effectively for simulating human performance provides a test of its design. One simulation program, now running, is given the model memory and is required to compare and contrast the meanings of arbitrary pairs of English words. For each pair, the program locates any relevant semantic information within the model memory, draws inferences on the basis of this, and thereby discovers various relationships between the meanings of the two words. Finally, it creates English text to express its conclusions. The design principles embodied in the memory model, together with some of the methods used by the program, constitute a theory of how human memory for semantic and other conceptual material may be formatted, organized, and used.
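     The comparison procedure Quillian describes is essentially spreading activation: search outward from two word concepts over the association network until the activation fronts intersect, then report the shared nodes. A compact Python sketch with an invented toy network:

       from collections import deque

       network = {                            # hypothetical associations
           "cry": ["sad", "sound"], "comfort": ["sad", "help"],
           "sad": ["emotion"], "sound": [], "help": [], "emotion": [],
       }

       def reachable(start):
           # breadth-first spread of activation from one concept
           seen, queue = {start}, deque([start])
           while queue:
               for nxt in network.get(queue.popleft(), []):
                   if nxt not in seen:
                       seen.add(nxt)
                       queue.append(nxt)
           return seen

       print(reachable("cry") & reachable("comfort"))  # {'sad', 'emotion'}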
  14. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.02
    0.021525284 = product of:
      0.08610114 = sum of:
        0.059009988 = weight(_text_:computer in 4523) [ClassicSimilarity], result of:
          0.059009988 = score(doc=4523,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.40377006 = fieldWeight in 4523, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=4523)
        0.027091147 = product of:
          0.054182295 = sum of:
            0.054182295 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
              0.054182295 = score(doc=4523,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.38690117 = fieldWeight in 4523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4523)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
  15. Baofu, P.: ¬The future of information architecture : conceiving a better way to understand taxonomy, network, and intelligence (2008) 0.02
    0.020694586 = product of:
      0.08277834 = sum of:
        0.061962765 = weight(_text_:network in 2257) [ClassicSimilarity], result of:
          0.061962765 = score(doc=2257,freq=4.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.34791988 = fieldWeight in 2257, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2257)
        0.02081558 = product of:
          0.04163116 = sum of:
            0.04163116 = weight(_text_:resources in 2257) [ClassicSimilarity], result of:
              0.04163116 = score(doc=2257,freq=4.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.28518265 = fieldWeight in 2257, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2257)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     The Future of Information Architecture examines issues surrounding why information is processed, stored and applied in the way that it has, since time immemorial. Contrary to the conventional wisdom held by many scholars in human history, the recurrent debate on the explanation of the most basic categories of information (e.g. space, time, causation, quality, quantity) has been misconstrued, to the effect that there exist some deeper categories and principles behind these categories of information - with enormous implications for our understanding of reality in general. To understand this, the book is organised into four main parts: Part I begins with the vital question concerning the role of information within the context of the larger theoretical debate in the literature. Part II provides a critical examination of the nature of data taxonomy from the main perspectives of culture, society, nature and the mind. Part III constructively investigates the world of information networks from the main perspectives of culture, society, nature and the mind. Part IV proposes six main theses in the author's synthetic theory of information architecture, namely, (a) the first thesis on the simpleness-complicatedness principle, (b) the second thesis on the exactness-vagueness principle, (c) the third thesis on the slowness-quickness principle, (d) the fourth thesis on the order-chaos principle, (e) the fifth thesis on the symmetry-asymmetry principle, and (f) the sixth thesis on the post-human stage.
    LCSH
    Information resources
    Subject
    Information resources
  16. ISO 25964 Thesauri and interoperability with other vocabularies (2008) 0.02
    0.019808577 = product of:
      0.052822873 = sum of:
        0.017702997 = weight(_text_:computer in 1169) [ClassicSimilarity], result of:
          0.017702997 = score(doc=1169,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.12113102 = fieldWeight in 1169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1169)
        0.026288573 = weight(_text_:network in 1169) [ClassicSimilarity], result of:
          0.026288573 = score(doc=1169,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.14760989 = fieldWeight in 1169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1169)
        0.008831303 = product of:
          0.017662605 = sum of:
            0.017662605 = weight(_text_:resources in 1169) [ClassicSimilarity], result of:
              0.017662605 = score(doc=1169,freq=2.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.12099275 = fieldWeight in 1169, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1169)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    T.1: Today's thesauri are mostly electronic tools, having moved on from the paper-based era when thesaurus standards were first developed. They are built and maintained with the support of software and need to integrate with other software, such as search engines and content management systems. Whereas in the past thesauri were designed for information professionals trained in indexing and searching, today there is a demand for vocabularies that untrained users will find to be intuitive. ISO 25964 makes the transition needed for the world of electronic information management. However, part 1 retains the assumption that human intellect is usually involved in the selection of indexing terms and in the selection of search terms. If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved. This is the main principle underlying thesaurus design, even though a thesaurus built for human users may also be applied in situations where computers make the choices. Efficient exchange of data is a vital component of thesaurus management and exploitation. Hence the inclusion in this standard of recommendations for exchange formats and protocols. Adoption of these will facilitate interoperability between thesaurus management systems and the other computer applications, such as indexing and retrieval systems, that will utilize the data. Thesauri are typically used in post-coordinate retrieval systems, but may also be applied to hierarchical directories, pre-coordinate indexes and classification systems. Increasingly, thesaurus applications need to mesh with others, such as automatic categorization schemes, free-text search systems, etc. Part 2 of ISO 25964 describes additional types of structured vocabulary and gives recommendations to enable interoperation of the vocabularies at all stages of the information storage and retrieval process.
    T.2: The ability to identify and locate relevant information among vast collections and other resources is a major and pressing challenge today. Several different types of vocabulary are in use for this purpose. Some of the most widely used vocabularies were designed a hundred years ago and have been evolving steadily. A different generation of vocabularies is now emerging, designed to exploit the electronic media more effectively. A good understanding of the previous generation is still essential for effective access to collections indexed with them. An important object of ISO 25964 as a whole is to support data exchange and other forms of interoperability in circumstances in which more than one structured vocabulary is applied within one retrieval system or network. Sometimes one vocabulary has to be mapped to another, and it is important to understand both the potential and the limitations of such mappings. In other systems, a thesaurus is mapped to a classification scheme, or an ontology to a thesaurus. Comprehensive interoperability needs to cover the whole range of vocabulary types, whether young or old. Concepts in different vocabularies are related only in that they have the same or similar meaning. However, the meaning can be found in a number of different aspects within each particular type of structured vocabulary: - within terms or captions selected in different languages; - in the notation assigned indicating a place within a larger hierarchy; - in the definition, scope notes, history notes and other notes that explain the significance of that concept; and - in explicit relationships to other concepts or entities within the same vocabulary. In order to create mappings from one structured vocabulary to another it is first necessary to understand, within the context of each different type of structured vocabulary, the significance and relative importance of each of the different elements in defining the meaning of that particular concept. ISO 25964-1 describes the key characteristics of thesauri along with additional advice on best practice. ISO 25964-2 focuses on other types of vocabulary and does not attempt to cover all aspects of good practice. It concentrates on those aspects which need to be understood if one of the vocabularies is to work effectively alongside one or more of the others. Recognizing that a new standard cannot be applied to some existing vocabularies, this part of ISO 25964 provides informative description alongside the recommendations, the aim of which is to enable users and system developers to interpret and implement the existing vocabularies effectively. The remainder of ISO 25964-2 deals with the principles and practicalities of establishing mappings between vocabularies.
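     The cross-vocabulary mappings that part 2 standardizes can be pictured as simple typed assertions from one vocabulary's concepts to another's, sketched here in Python (the labels, notations and match types are invented for illustration; ISO 25964-2 defines the actual mapping types and their semantics):

       mappings = [   # (source concept, mapping type, target concept)
           ("thesaurus:Cats", "exactMatch", "classification:599.75"),
           ("thesaurus:Cats", "broadMatch", "classification:590"),
       ]

       def targets(concept, kind):
           # follow mappings of one type from a source concept
           return [t for (s, k, t) in mappings if s == concept and k == kind]

       print(targets("thesaurus:Cats", "exactMatch"))  # ['classification:599.75']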
  17. Eito-Brun, R.: Ontologies and the exchange of technical information : building a knowledge repository based on ECSS standards (2014) 0.02
    0.019102048 = product of:
      0.07640819 = sum of:
        0.06557173 = weight(_text_:europe in 1436) [ClassicSimilarity], result of:
          0.06557173 = score(doc=1436,freq=2.0), product of:
            0.24358861 = queryWeight, product of:
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.039991006 = queryNorm
            0.26919046 = fieldWeight in 1436, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.03125 = fieldNorm(doc=1436)
        0.010836459 = product of:
          0.021672918 = sum of:
            0.021672918 = weight(_text_:22 in 1436) [ClassicSimilarity], result of:
              0.021672918 = score(doc=1436,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.15476047 = fieldWeight in 1436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1436)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     The development of complex projects in the aerospace industry is based on the collaboration of geographically distributed teams and companies. In this context, the need of sharing different types of data and information is a key factor to assure the successful execution of the projects. In the case of European projects, the ECSS standards provide a normative framework that specifies, among other requirements, the different document types, information items and artifacts that need to be generated. The specification of the characteristics of these information items is usually incorporated as an annex to the different ECSS standards, and they provide the intended purpose, scope, and structure of the documents and information items. In these standards, documents or deliverables should not be considered as independent items, but as the results of packaging different information artifacts for their delivery between the involved parties. Successful information integration and knowledge exchange cannot be based exclusively on the conceptual definition of information types. It also requires the definition of methods and techniques for serializing and exchanging these documents and artifacts. This area is not covered by ECSS standards, and the definition of these data schemas would create an opportunity to improve collaboration processes among companies. This paper describes the development of an OWL-based ontology to manage the different artifacts and information items requested in the European Space Agency (ESA) ECSS standards for SW development. The ECSS set of standards is the main reference in aerospace projects in Europe, and in addition to engineering and managerial requirements it provides a set of DRDs (Document Requirements Documents) with the structure of the different documents and records necessary to manage projects and describe intermediate information products and final deliverables. Information integration is a must-have in aerospace projects, where different players need to collaborate and share data about requirements, design elements, problems, etc. during the life cycle of the products. The proposed ontology provides the basis for building advanced information systems where the information coming from different companies and institutions can be integrated into a coherent set of related data. It also provides a conceptual framework to enable the development of interfaces and gateways between the different tools and information systems used by the different players in aerospace projects.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  18. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.02
    0.018891187 = product of:
      0.07556475 = sum of:
        0.052577145 = weight(_text_:network in 3355) [ClassicSimilarity], result of:
          0.052577145 = score(doc=3355,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.29521978 = fieldWeight in 3355, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.022987602 = product of:
          0.045975205 = sum of:
            0.045975205 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.045975205 = score(doc=3355,freq=4.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Content
     One of a series of three publications influenced by the travelling exhibit Places & Spaces: Mapping Science, curated by the Cyberinfrastructure for Network Science Center at Indiana University. - Additional materials can be found at http://scimaps.org/atlas2. Extended by: Börner, Katy. Atlas of Science: Visualizing What We Know.
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  19. Curras, E.: Ontologies, taxonomy and thesauri in information organisation and retrieval (2010) 0.02
    0.018329822 = product of:
      0.073319286 = sum of:
        0.029504994 = weight(_text_:computer in 3276) [ClassicSimilarity], result of:
          0.029504994 = score(doc=3276,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.20188503 = fieldWeight in 3276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3276)
        0.04381429 = weight(_text_:network in 3276) [ClassicSimilarity], result of:
          0.04381429 = score(doc=3276,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.2460165 = fieldWeight in 3276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3276)
      0.25 = coord(2/8)
    
    Abstract
    The originality of this book, which deals with such a new subject matter, lies in the application of methods and concepts never used before - such as Ontologies and Taxonomies, as well as Thesauri - to the ordering of knowledge based on primary information. Chapters in the book also examine the study of Ontologies, Taxonomies and Thesauri from the perspective of Systematics and General Systems Theory. "Ontologies, Taxonomy and Thesauri in Information Organisation and Retrieval" will be extremely useful to those operating within the network of related fields, which includes Documentation and Information Science.
    Content
     Contents: 1. From classifications to ontologies: Knowledge - A new concept of knowledge - Knowledge and information - Knowledge organisation - Knowledge organisation and representation - Cognitive sciences - Talent management - Learning systematisation - Historical evolution - From classification to knowledge organisation - Why ontologies exist - Ontologies - The structure of ontologies 2. Taxonomies and thesauri: From ordering to taxonomy - The origins of taxonomy - Hierarchical and horizontal order - Correlation with classifications - Taxonomy in computer science - Computing taxonomy - Definitions - Virtual taxonomy, cybernetic taxonomy - Taxonomy in Information Science - Similarities between taxonomies and thesauri - Differences between taxonomies and thesauri 3. Thesauri: Terminology in classification systems - Terminological languages - Thesauri - Thesauri definitions - Conditions that a thesaurus must fulfil - Historical evolution - Classes of thesauri 4. Thesauri in (cladist) systematics: Systematics - Systematics as a noun - Definitions and historic evolution over time - Differences between taxonomy and systematics - Systematics in thesaurus construction theory - Classic, numerical and cladist systematics - Classic systematics in information science - Numerical systematics in information science - Thesauri in cladist systematics - Systematics in information technology - Some examples 5. Thesauri in systems theory: Historical evolution - Approach to systems - Systems theory applied to the construction of thesauri - Components - Classes of system - Peculiarities of these systems - Working methods - Systems theory applied to ontologies and taxonomies
  20. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.02
    0.01658158 = product of:
      0.06632632 = sum of:
        0.05007163 = weight(_text_:computer in 4820) [ClassicSimilarity], result of:
          0.05007163 = score(doc=4820,freq=4.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.34261024 = fieldWeight in 4820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4820)
        0.016254688 = product of:
          0.032509375 = sum of:
            0.032509375 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
              0.032509375 = score(doc=4820,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.23214069 = fieldWeight in 4820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4820)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22

Languages

  • e 192
  • d 24

Types

  • a 156
  • el 58
  • m 23
  • x 13
  • s 10
  • n 5
  • p 1
  • r 1
