Search (117 results, page 1 of 6)

  • Active filter: theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.11
    0.111583196 = sum of:
      0.08709657 = product of:
        0.26128972 = sum of:
          0.26128972 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
            0.26128972 = score(doc=400,freq=2.0), product of:
              0.4649134 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.054837555 = queryNorm
              0.56201804 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.33333334 = coord(1/3)
      0.024486622 = product of:
        0.048973244 = sum of:
          0.048973244 = weight(_text_:work in 400) [ClassicSimilarity], result of:
            0.048973244 = score(doc=400,freq=2.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.2433148 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.5 = coord(1/2)
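    Each hit in this list is followed by the Lucene ClassicSimilarity "explain" breakdown from which its relevance score is derived, as in the tree above. As a rough illustration, the top-level score of this first result can be reproduced from the listed factors; the following is a minimal Python sketch of that arithmetic (my own illustration, not code from the search system):

      # Minimal sketch of the ClassicSimilarity arithmetic shown above:
      # fieldWeight = sqrt(freq) * idf * fieldNorm, queryWeight = idf * queryNorm,
      # clause score = queryWeight * fieldWeight, scaled by coord() and summed.
      from math import sqrt

      def clause_score(freq, idf, query_norm, field_norm, coord):
          query_weight = idf * query_norm               # e.g. 8.478011 * 0.054837555 = 0.4649134
          field_weight = sqrt(freq) * idf * field_norm  # e.g. 1.4142135 * 8.478011 * 0.046875
          return query_weight * field_weight * coord

      query_norm = 0.054837555
      s_3a   = clause_score(2.0, 8.478011,  query_norm, 0.046875, 1 / 3)  # ~0.08709657
      s_work = clause_score(2.0, 3.6703904, query_norm, 0.046875, 1 / 2)  # ~0.02448662
      print(s_3a + s_work)  # ~0.11158319, matching the 0.11 shown next to the title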
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., SVM, kNN), and metric (e.g., precision). In this work, we aim to build faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the single facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from these three types of relationships. The algorithm resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
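    The abstract above outlines a hierarchy growth algorithm that infers parent-child links from synonym, sibling and ancestor-descendant evidence and resolves conflicts by keeping the hierarchy acyclic. A minimal sketch of that idea (the candidate links and function names below are invented for illustration, not taken from the paper):

      from collections import defaultdict

      def creates_cycle(children, parent, child):
          """True if adding parent -> child would make child an ancestor of parent."""
          stack, seen = [child], set()
          while stack:
              node = stack.pop()
              if node == parent:
                  return True
              if node not in seen:
                  seen.add(node)
                  stack.extend(children.get(node, ()))
          return False

      def grow_hierarchy(candidate_links):
          """Add candidate parent -> child links in order, skipping any that would close a cycle."""
          children = defaultdict(set)
          for parent, child in candidate_links:
              if not creates_cycle(children, parent, child):
                  children[parent].add(child)
          return dict(children)

      # Hypothetical candidate links; the last would close a cycle and is dropped.
      links = [("classification", "svm"), ("classification", "knn"), ("svm", "classification")]
      print(grow_hierarchy(links))  # {'classification': {'svm', 'knn'}}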
  2. Definition of the CIDOC Conceptual Reference Model (2003) 0.06
    0.056918506 = product of:
      0.11383701 = sum of:
        0.11383701 = sum of:
          0.06925862 = weight(_text_:work in 1652) [ClassicSimilarity], result of:
            0.06925862 = score(doc=1652,freq=4.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.3440991 = fieldWeight in 1652, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=1652)
          0.04457839 = weight(_text_:22 in 1652) [ClassicSimilarity], result of:
            0.04457839 = score(doc=1652,freq=2.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.23214069 = fieldWeight in 1652, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1652)
      0.5 = coord(1/2)
    
    Abstract
    This document is the formal definition of the CIDOC Conceptual Reference Model ("CRM"), a formal ontology intended to facilitate the integration, mediation and interchange of heterogeneous cultural heritage information. The CRM is the culmination of more than a decade of standards development work by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM). Work on the CRM itself began in 1996 under the auspices of the ICOM-CIDOC Documentation Standards Working Group. Since 2000, development of the CRM has been officially delegated by ICOM-CIDOC to the CIDOC CRM Special Interest Group, which collaborates with the ISO working group ISO/TC46/SC4/WG9 to bring the CRM to the form and status of an International Standard.
    Date
    6. 8.2010 14:22:28
  3. Cui, H.: Competency evaluation of plant character ontologies against domain literature (2010) 0.05
    0.047432087 = product of:
      0.094864175 = sum of:
        0.094864175 = sum of:
          0.05771552 = weight(_text_:work in 3466) [ClassicSimilarity], result of:
            0.05771552 = score(doc=3466,freq=4.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.28674924 = fieldWeight in 3466, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3466)
          0.03714866 = weight(_text_:22 in 3466) [ClassicSimilarity], result of:
            0.03714866 = score(doc=3466,freq=2.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.19345059 = fieldWeight in 3466, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3466)
      0.5 = coord(1/2)
    
    Abstract
    Specimen identification keys are still the most commonly created tools used by systematic biologists to access biodiversity information. Creating identification keys requires analyzing and synthesizing large amounts of information from specimens and their descriptions and is a very labor-intensive and time-consuming activity. Automating the generation of identification keys from text descriptions becomes a highly attractive text mining application in the biodiversity domain. Fine-grained semantic annotation of morphological descriptions of organisms is a necessary first step in generating keys from text. Machine-readable ontologies are needed in this process because most biological characters are only implied (i.e., not stated) in descriptions. The immediate question to ask is: how well do existing ontologies support semantic annotation and automated key generation? With the intention of either selecting an existing ontology or developing a unified ontology based on existing ones, this paper evaluates the coverage, semantic consistency, and inter-ontology agreement of a biodiversity character ontology and three plant glossaries that may be turned into ontologies. The coverage and semantic consistency of the ontology/glossaries are checked against the authoritative domain literature, namely, Flora of North America and Flora of China. The evaluation results suggest that more work is needed to improve the coverage and interoperability of the ontology/glossaries. More concepts need to be added to the ontology/glossaries and careful work is needed to improve the semantic consistency. The method used in this paper to evaluate the ontology/glossaries can be used to propose new candidate concepts from the domain literature and suggest appropriate definitions.
    Date
    1. 6.2010 9:55:22
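    The coverage check described in the abstract above can be pictured as the share of character terms from the domain literature that a glossary or ontology contains. A minimal sketch of that idea (the term sets below are invented placeholders, not data from the study):

      def coverage(ontology_terms, literature_terms):
          """Share of literature terms that also appear in the ontology (0.0-1.0)."""
          if not literature_terms:
              return 0.0
          return len(literature_terms & ontology_terms) / len(literature_terms)

      glossary    = {"leaf", "petiole", "ovate", "serrate"}
      flora_terms = {"leaf", "petiole", "ovate", "pubescent", "glabrous"}
      print(f"coverage = {coverage(glossary, flora_terms):.2f}")  # coverage = 0.60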
  4. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.05
    0.047432087 = product of:
      0.094864175 = sum of:
        0.094864175 = sum of:
          0.05771552 = weight(_text_:work in 2831) [ClassicSimilarity], result of:
            0.05771552 = score(doc=2831,freq=4.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.28674924 = fieldWeight in 2831, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2831)
          0.03714866 = weight(_text_:22 in 2831) [ClassicSimilarity], result of:
            0.03714866 = score(doc=2831,freq=2.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.19345059 = fieldWeight in 2831, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2831)
      0.5 = coord(1/2)
    
    Abstract
    The purpose of this work is to develop an ontology-based framework for an information retrieval system that caters to users' specific queries. For creating such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility, which becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model for the domain of brain tumours. Our attempt has been to bridge library and information science with computer science, which itself involved an experimental approach. We found the faceted approach to be enduring, as it supports properties such as navigation, exploration and faceted browsing. The computer-based brain tumour ontology supports the work of researchers towards gathering information on brain tumour research and allows users across the world to intelligently access new scientific information quickly and efficiently.
    Date
    12. 3.2016 13:21:22
  5. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.05
    0.046775818 = product of:
      0.093551636 = sum of:
        0.093551636 = sum of:
          0.048973244 = weight(_text_:work in 2623) [ClassicSimilarity], result of:
            0.048973244 = score(doc=2623,freq=2.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.2433148 = fieldWeight in 2623, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=2623)
          0.04457839 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
            0.04457839 = score(doc=2623,freq=2.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.23214069 = fieldWeight in 2623, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2623)
      0.5 = coord(1/2)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories: attribute/value-propagation, value-propagation, and value-constraint, and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  6. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.05
    0.046775818 = product of:
      0.093551636 = sum of:
        0.093551636 = sum of:
          0.048973244 = weight(_text_:work in 3387) [ClassicSimilarity], result of:
            0.048973244 = score(doc=3387,freq=2.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.2433148 = fieldWeight in 3387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=3387)
          0.04457839 = weight(_text_:22 in 3387) [ClassicSimilarity], result of:
            0.04457839 = score(doc=3387,freq=2.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.23214069 = fieldWeight in 3387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3387)
      0.5 = coord(1/2)
    
    Abstract
    Libraries are the tools we use to learn and to answer our questions. The quality of our work depends, among other things, on the quality of the tools we use. Recent research in digital libraries focuses, on the one hand, on improving the infrastructure of digital library management systems (DLMS) and, on the other, on improving the metadata models used to annotate the collections of objects maintained by a DLMS. The latter includes, among others, semantic web and social networking technologies, which have recently been introduced to the digital libraries domain. The expected outcome is that the overall quality of information discovery in digital libraries can be improved by employing social and semantic technologies. In this chapter we present the results of an evaluation of social and semantic end-user information discovery services for digital libraries.
    Date
    1. 8.2010 12:35:22
  7. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.05
    0.046775818 = product of:
      0.093551636 = sum of:
        0.093551636 = sum of:
          0.048973244 = weight(_text_:work in 4649) [ClassicSimilarity], result of:
            0.048973244 = score(doc=4649,freq=2.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.2433148 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
          0.04457839 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
            0.04457839 = score(doc=4649,freq=2.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.23214069 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
      0.5 = coord(1/2)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
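    For reference, the two Web-based measures named in the abstract above have standard formulations (the notation here is mine, not taken from the paper): with f(x) the number of pages containing term x, f(x,y) the number containing both terms, and N the total number of indexed pages,

      \[ \mathrm{NGD}(x,y) = \frac{\max\{\log f(x), \log f(y)\} - \log f(x,y)}{\log N - \min\{\log f(x), \log f(y)\}} \qquad \mathrm{PMI}(x,y) = \log \frac{N \, f(x,y)}{f(x)\, f(y)} \]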
  8. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    0.029032191 = product of:
      0.058064383 = sum of:
        0.058064383 = product of:
          0.17419314 = sum of:
            0.17419314 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.17419314 = score(doc=701,freq=2.0), product of:
                0.4649134 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.054837555 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  9. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.03
    0.029032191 = product of:
      0.058064383 = sum of:
        0.058064383 = product of:
          0.17419314 = sum of:
            0.17419314 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.17419314 = score(doc=5820,freq=2.0), product of:
                0.4649134 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.054837555 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf
  10. Sanatjoo, A.: Development of thesaurus structure through a work-task oriented methodology 0.02
    0.022814061 = product of:
      0.045628123 = sum of:
        0.045628123 = product of:
          0.091256246 = sum of:
            0.091256246 = weight(_text_:work in 3536) [ClassicSimilarity], result of:
              0.091256246 = score(doc=3536,freq=10.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.45339036 = fieldWeight in 3536, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3536)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The development of and changes in digital information retrieval systems and the information retrieval field, as well as technical advances, both require and make possible an extension of the functionality of thesauri. Enriching their structure requires thesaurus construction methodologies that exceed the potential of the traditional construction methods and adapt the thesaurus to the needs of specialized information environments. The present work extends the work-task oriented methodology (WOM) and involves an analysis of the domain of knowledge: the body of known facts, experts and paradigms in the domain. This empirical study investigated a mixed set of methods and developed a prototype thesaurus to evaluate the potential of WOM for constructing a more enriched thesaurus. The thesaurus was evaluated in a retrieval test in which its usability and performance were compared with those of a classic-type thesaurus (Agrovoc) with a conventional structure. The results of the study indicate that WOM is useful and provides valuable inspiration to the user, whether thesaurus compiler or information searcher. The work-task oriented methodology allows the development of a thesaurus design that reflects the characteristics of the work domain.
  11. Reimer, U.: Einführung in die Wissensrepräsentation : netzartige und schema-basierte Repräsentationsformate (1991) 0.02
    0.020200431 = product of:
      0.040400863 = sum of:
        0.040400863 = product of:
          0.080801725 = sum of:
            0.080801725 = weight(_text_:work in 1566) [ClassicSimilarity], result of:
              0.080801725 = score(doc=1566,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.40144894 = fieldWeight in 1566, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1566)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Classification
    ST 285 Informatik / Monographien / Software und -entwicklung / Computer supported cooperative work (CSCW), Groupware
    RVK
    ST 285 Informatik / Monographien / Software und -entwicklung / Computer supported cooperative work (CSCW), Groupware
  12. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.02
    0.020200431 = product of:
      0.040400863 = sum of:
        0.040400863 = product of:
          0.080801725 = sum of:
            0.080801725 = weight(_text_:work in 267) [ClassicSimilarity], result of:
              0.080801725 = score(doc=267,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.40144894 = fieldWeight in 267, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=267)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment on the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics.
  13. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.02
    0.01857433 = product of:
      0.03714866 = sum of:
        0.03714866 = product of:
          0.07429732 = sum of:
            0.07429732 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
              0.07429732 = score(doc=6089,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.38690117 = fieldWeight in 6089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6089)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.11-22
  14. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.02
    0.01857433 = product of:
      0.03714866 = sum of:
        0.03714866 = product of:
          0.07429732 = sum of:
            0.07429732 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.07429732 = score(doc=5576,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13.12.2017 14:17:22
  15. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    0.01857433 = product of:
      0.03714866 = sum of:
        0.03714866 = product of:
          0.07429732 = sum of:
            0.07429732 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
              0.07429732 = score(doc=539,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.38690117 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=539)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.12.2011 13:22:07
  16. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.02
    0.01857433 = product of:
      0.03714866 = sum of:
        0.03714866 = product of:
          0.07429732 = sum of:
            0.07429732 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.07429732 = score(doc=3406,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 5.2010 16:22:35
  17. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.02
    0.01857433 = product of:
      0.03714866 = sum of:
        0.03714866 = product of:
          0.07429732 = sum of:
            0.07429732 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
              0.07429732 = score(doc=4523,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.38690117 = fieldWeight in 4523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4523)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
  18. Hodgson, J.P.E.: Knowledge representation and language in AI (1991) 0.02
    0.017671697 = product of:
      0.035343394 = sum of:
        0.035343394 = product of:
          0.07068679 = sum of:
            0.07068679 = weight(_text_:work in 1529) [ClassicSimilarity], result of:
              0.07068679 = score(doc=1529,freq=6.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.35119468 = fieldWeight in 1529, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1529)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The aim of this book is to highlight the relationship between knowledge representation and language in artificial intelligence, and in particular the way in which the choice of representation influences the language used to discuss a problem, and vice versa. Opening with a discussion of knowledge representation methods, and following this with a look at reasoning methods, the author begins to make his case for the intimate relationship between language and representation. He shows how each representation method fits particularly well with some reasoning methods and less so with others, using specific languages as examples. The question of representation change, an important and complex issue about which very little is known, is addressed. Dr Hodgson gathers together recent work on problem solving, showing how, in some cases, it has been possible to use representation changes to recast problems into a language that makes them easier to solve. The author maintains throughout that the relationships that this book explores lie at the heart of the construction of large systems, examining a number of the current large AI systems from the viewpoint of representation and language to prove his point.
    Classification
    ST 285 Informatik / Monographien / Software und -entwicklung / Computer supported cooperative work (CSCW), Groupware
    RVK
    ST 285 Informatik / Monographien / Software und -entwicklung / Computer supported cooperative work (CSCW), Groupware
  19. Pike, W.; Gahegan, M.: Beyond ontologies : toward situated representations of scientific knowledge (2007) 0.02
    0.017671697 = product of:
      0.035343394 = sum of:
        0.035343394 = product of:
          0.07068679 = sum of:
            0.07068679 = weight(_text_:work in 2544) [ClassicSimilarity], result of:
              0.07068679 = score(doc=2544,freq=6.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.35119468 = fieldWeight in 2544, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2544)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In information systems that support knowledge-discovery applications such as scientific exploration, reliance on highly structured ontologies as data-organization aids can be limiting. With current computational aids to science work, the human knowledge that creates meaning out of analyses is often recorded only when work reaches publication, or worse, left unrecorded altogether, for lack of an ontological model for scientific concepts that can capture knowledge as it is created and used. We argue for an approach to representing scientific concepts that reflects (1) the situated processes of science work, (2) the social construction of knowledge, and (3) the emergence and evolution of understanding over time. In this model, knowledge is the result of collaboration, negotiation, and manipulation by teams of researchers. Capturing the situations in which knowledge is created and used helps these collaborators discover areas of agreement and discord, while allowing individual inquirers to maintain different perspectives on the same information. The capture of provenance information allows historical trails of reasoning to be reconstructed, allowing end users to evaluate the utility and trustworthiness of knowledge representations. We present a proof-of-concept system, called Codex, based on this situated knowledge model. Codex supports visualization of knowledge structures through concept mapping, and enables inference across those structures. The proof-of-concept is deployed in the domain of geoscience to support distributed teams of learners and researchers.
  20. Schmitz-Esser, W.; Sigel, A.: Introducing terminology-based ontologies : Papers and Materials presented by the authors at the workshop "Introducing Terminology-based Ontologies" (Poli/Schmitz-Esser/Sigel) at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006 (2006) 0.02
    0.017314656 = product of:
      0.03462931 = sum of:
        0.03462931 = product of:
          0.06925862 = sum of:
            0.06925862 = weight(_text_:work in 1285) [ClassicSimilarity], result of:
              0.06925862 = score(doc=1285,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.3440991 = fieldWeight in 1285, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1285)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This work-in-progress communication contains the papers and materials presented by Winfried Schmitz-Esser and Alexander Sigel in the joint workshop (with Roberto Poli) "Introducing Terminology-based Ontologies" at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006.
    Content
    Contents: 1. From traditional Knowledge Organization Systems (authority files, classifications, thesauri) towards ontologies on the web (Alexander Sigel) (Tutorial. Paper with Slides interspersed) pp. 3-53 2. Introduction to Integrative Cross-Language Ontology (ICLO): Formalizing and interrelating textual knowledge to enable intelligent action and knowledge sharing (Winfried Schmitz-Esser) pp. 54-113 3. First Idea Sketch on Modelling ICLO with Topic Maps (Alexander Sigel) (Work in progress paper. Topic maps available from the author) pp. 114-130

Languages

  • e 102
  • d 12
  • f 1

Types

  • a 82
  • el 30
  • x 9
  • m 7
  • n 2
  • p 1
  • r 1
  • s 1