Search (552 results, page 1 of 28)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.04
    Abstract
    In a scientific concept hierarchy, a parent concept may have several attributes, each of whose values forms a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are direct parent-to-child links, whereas a hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationship, resolving conflicts by maintaining the acyclic structure of the hierarchy.
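    The conflict-resolution step described in this abstract can be sketched as follows. This is a hypothetical reconstruction, not the paper's actual algorithm: candidate parent-child links are added one at a time and rejected whenever they would create a cycle.

```python
def reachable(graph: dict, src: str, dst: str) -> bool:
    """Depth-first check whether dst is reachable from src."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

def grow_hierarchy(candidate_links):
    """Add parent->child links in order, skipping any that would
    break the acyclic structure of the hierarchy."""
    hierarchy = {}
    for parent, child in candidate_links:
        # A cycle would arise iff parent is already reachable from child.
        if reachable(hierarchy, child, parent):
            continue
        hierarchy.setdefault(parent, []).append(child)
    return hierarchy

# Toy example: the second link would close a cycle and is dropped.
h = grow_hierarchy([("classification", "svm"),
                    ("svm", "classification"),
                    ("classification", "knn")])
```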
    Content
    Vgl.: https://aclanthology.org/D19-5317.pdf.
    Type
    a
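  The relevance figure after each title is a Lucene ClassicSimilarity score, which combines term frequency, inverse document frequency, and a field-length norm. A minimal sketch of how a single term's field weight is computed (function names are ours, not Lucene's API; the values illustrate a rare term in a large index):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    """Inverse document frequency as in Lucene's ClassicSimilarity:
    1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(term_freq: float, doc_freq: int, max_docs: int,
                 field_norm: float) -> float:
    """tf * idf * fieldNorm for one term in one field,
    with tf(freq) = sqrt(freq)."""
    return math.sqrt(term_freq) * idf(doc_freq, max_docs) * field_norm

# A term occurring twice in a short field, in 24 of 44,218 documents:
w = field_weight(term_freq=2.0, doc_freq=24, max_docs=44218,
                 field_norm=0.046875)
```

  For these inputs the idf is about 8.48 and the resulting field weight about 0.56; per-term weights are then combined by coordination and query-normalization factors into the document score shown in the listing.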
  2. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.04
    Date
    2. 3.2013 12:29:05
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P. Ohly
    Type
    a
  3. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.03
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
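    One of the Web-based measures the abstract mentions, the Normalized Google Distance, can be sketched directly from page-hit counts (the counts below are made up for illustration; in practice `f_x`, `f_y`, `f_xy` would come from search-engine hit counts for the two terms and their conjunction):

```python
import math

def ngd(f_x: float, f_y: float, f_xy: float, n: float) -> float:
    """Normalized Google Distance from hit counts f(x), f(y), f(x,y)
    and index size N. 0 means the terms always co-occur; larger
    values mean greater semantic distance."""
    num = max(math.log(f_x), math.log(f_y)) - math.log(f_xy)
    den = math.log(n) - min(math.log(f_x), math.log(f_y))
    return num / den

# Illustrative counts only, not real search-engine figures:
d = ngd(f_x=1_000_000, f_y=2_000_000, f_xy=500_000, n=10_000_000_000)
```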
    Date
    29. 7.2011 14:44:56
    26.12.2011 13:40:22
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    Abstract
    The explosion of possibilities for ubiquitous content production has pushed the information overload problem to a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation), which makes the results of a retrieval process of very limited use for the user's task at hand. Over the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is highly ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query. This implies the need to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation into a highly interactive cooperation between user and retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is a key issue in realizing much more meaningful information retrieval systems.
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  5. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.03
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations while taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities: we propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations by drawing on external, carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
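    The bag-of-entities idea can be illustrated with a toy ranker: documents and the query are represented by their entity annotations, and scoring happens in entity space rather than word space. This is a hypothetical sketch, not the dissertation's model; entity linking is assumed to have been done upstream.

```python
from collections import Counter

def bag_of_entities_score(query_entities, doc_entities):
    """Score a document by entity overlap with the query
    (dot product of entity-frequency vectors)."""
    q, d = Counter(query_entities), Counter(doc_entities)
    return sum(q[e] * d[e] for e in q)

# Toy annotated collection (entity IDs are illustrative):
docs = {
    "d1": ["Information_retrieval", "Knowledge_base", "Knowledge_base"],
    "d2": ["Semantic_Web", "Ontology"],
}
query = ["Knowledge_base", "Information_retrieval"]
ranking = sorted(docs,
                 key=lambda name: bag_of_entities_score(query, docs[name]),
                 reverse=True)
```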
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  6. Griffiths, T.L.; Steyvers, M.: ¬A probabilistic approach to semantic representation (2002) 0.02
    Abstract
    Semantic networks produced from human data have statistical properties that cannot easily be captured by spatial representations. We explore a probabilistic approach to semantic representation that explicitly models the probability with which words occur in different contexts, and hence captures the probabilistic relationships between words. We show that this representation has statistical properties consistent with the large-scale structure of semantic networks constructed by humans, and trace the origins of these properties.
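    The core quantity in such a probabilistic representation, the probability of a word given a context, can be estimated from co-occurrence counts. A minimal sketch with add-alpha smoothing follows; the counts and function names are ours, for illustration only, and do not reproduce the paper's model.

```python
from collections import Counter

def word_given_context(cooc: Counter, word: str, context: str,
                       vocab_size: int, alpha: float = 1.0) -> float:
    """Estimate P(word | context) from (word, context) co-occurrence
    counts, with add-alpha smoothing over a vocabulary of vocab_size."""
    context_total = sum(n for (w, c), n in cooc.items() if c == context)
    return (cooc[(word, context)] + alpha) / (context_total + alpha * vocab_size)

# Toy counts: how often each word occurred in the context "semantic".
cooc = Counter({("network", "semantic"): 8, ("distance", "semantic"): 2})
p = word_given_context(cooc, "network", "semantic", vocab_size=10)
```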
    Date
    29. 6.2015 14:55:01
    29. 6.2015 16:09:05
    Type
    a
  7. Deokattey, S.; Neelameghan, A.; Kumar, V.: ¬A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.02
    Abstract
    A method to develop a prototype domain ontology has been described. The domain selected for the study is Accelerator Driven Systems. This is a multidisciplinary and interdisciplinary subject comprising Nuclear Physics, Nuclear and Reactor Engineering, Reactor Fuels and Radioactive Waste Management. Since Accelerator Driven Systems is a vast topic, select areas in it were singled out for the study. Both qualitative and quantitative methods such as Content analysis, Facet analysis and Clustering were used, to develop the web-based model.
    Date
    22. 7.2010 19:41:16
    Type
    a
  8. Broughton, V.: Facet analysis as a fundamental theory for structuring subject organization tools (2007) 0.02
    Abstract
    The presentation will examine the potential of facet analysis as a basis for determining the status and relationships of concepts in subject-based tools using a controlled vocabulary, and the extent to which it can serve as a general theory of knowledge organization rather than merely a methodology for structuring classifications.
    Date
    26.12.2011 13:21:29
  9. Roth, G.; Schwegler, H.: Kognitive Referenz und Selbstreferentialität des Gehirns : ein Beitrag zur Klärung des Verhältnisses zwischen Erkenntnistheorie und Hirnforschung (1992) 0.02
    Date
    20.12.2018 12:39:29
    Type
    a
  10. Maculan, B.C.M. dos; Lima, G.A. de; Oliveira, E.D.: Conversion methods from thesaurus to ontologies : a review (2016) 0.02
    Source
    Knowledge organization for a sustainable world: challenges and perspectives for cultural, scientific, and technological sharing in a connected society : proceedings of the Fourteenth International ISKO Conference 27-29 September 2016, Rio de Janeiro, Brazil / organized by International Society for Knowledge Organization (ISKO), ISKO-Brazil, São Paulo State University ; edited by José Augusto Chaves Guimarães, Suellen Oliveira Milani, Vera Dodebei
    Type
    a
  11. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.02
    Pages
    S.11-22
    Type
    a
  12. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    Abstract
    A discussion on current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  13. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.02
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
    Type
    a
  14. Almeida Campos, M.L. de; Espanha Gomes, H.: Ontology : several theories on the representation of knowledge domains (2017) 0.02
    Abstract
    Ontologies may be considered knowledge organization systems since the elements interact in a consistent conceptual structure. Theories of the representation of knowledge domains produce models that include definition, representation units, and semantic relationships that are essential for structuring such domain models. A realist viewpoint is proposed to enhance domain ontologies, as definitions provide structure that reveals not only ontological commitment but also relationships between unit representations.
    Date
    6. 5.2017 19:29:28
    Type
    a
  15. Clark, M.; Kim, Y.; Kruschwitz, U.; Song, D.; Albakour, D.; Dignum, S.; Beresi, U.C.; Fasli, M.; Roeck, A De: Automatically structuring domain knowledge from text : an overview of current research (2012) 0.02
    Abstract
    This paper presents an overview of automatic methods for building domain knowledge structures (domain models) from text collections. Applications of domain models have a long history within knowledge engineering and artificial intelligence. In the last couple of decades they have surfaced noticeably as a useful tool within natural language processing, information retrieval and semantic web technology. Inspired by the ubiquitous propagation of domain model structures that are emerging in several research disciplines, we give an overview of the current research landscape and some techniques and approaches. We will also discuss trade-offs between different approaches and point to some recent trends.
    Date
    29. 1.2016 18:29:51
    Type
    a
  16. Priss, U.: Description logic and faceted knowledge representation (1999) 0.01
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
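    The modular synthesis the abstract describes — analyzing a domain into baseline facets and then combining them — can be illustrated with a toy sketch. The facet names and values below are invented for illustration and are not the formalism defined in the paper:

    ```python
    from itertools import product

    # Toy faceted scheme: each facet is an independent axis of baseline values.
    # (Illustrative only; not Priss's faceted knowledge representation itself.)
    facets = {
        "material": ["paper", "digital"],
        "audience": ["children", "adults"],
        "subject": ["history", "science"],
    }

    def synthesize(facets):
        """Enumerate compound classes: one baseline value per facet."""
        names = sorted(facets)
        for combo in product(*(facets[n] for n in names)):
            yield dict(zip(names, combo))

    compound_classes = list(synthesize(facets))
    print(len(compound_classes))  # 2 * 2 * 2 = 8 compound classes
    ```

    The point of the sketch is the modularity: each facet is maintained independently, and compound classes are synthesized rather than enumerated by hand, which is what keeps faceted systems small relative to the classification space they cover.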
    Date
    22. 1.2016 17:30:31
    Type
    a
  17. Mustafa El Hadi, W.: Terminologies, ontologies and information access (2006) 0.01
    Date
    29. 2.2008 16:25:23
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
    Type
    a
  18. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.01
    Date
    1.10.2018 14:13:22
    Type
    a
  19. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: ¬A method for converting thesauri to RDF/OWL (2004) 0.01
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
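    In much reduced form, the kind of conversion the abstract outlines can be sketched as a mapping from thesaurus records to SKOS-style triples. The SKOS property URIs are the standard ones, but the record format, namespace, and helper function are invented for illustration; the paper's four-step method and its guidelines are not reproduced here:

    ```python
    SKOS = "http://www.w3.org/2004/02/skos/core#"
    BASE = "http://example.org/thesaurus/"  # hypothetical namespace

    # Toy thesaurus records: term -> (broader terms, related terms)
    records = {
        "animals": ([], []),
        "cats": (["animals"], ["pets"]),
        "pets": ([], []),
    }

    def to_skos_triples(records):
        """Emit (subject, predicate, object) triples in a SKOS-like shape."""
        triples = set()
        for term, (broader, related) in records.items():
            concept = BASE + term
            triples.add((concept, "rdf:type", SKOS + "Concept"))
            for b in broader:
                triples.add((concept, SKOS + "broader", BASE + b))
                triples.add((BASE + b, SKOS + "narrower", concept))  # materialized inverse
            for r in related:
                triples.add((concept, SKOS + "related", BASE + r))
        return triples

    triples = to_skos_triples(records)
    print(len(triples))  # 6 triples for the toy records above
    ```

    Even this toy version shows where the method's decision points arise: whether to materialize inverse relations, and which SKOS property a native relation maps onto.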
    Date
    29. 7.2011 14:44:56
    Type
    a
  20. Sartori, F.; Grazioli, L.: Metadata guiding knowledge engineering : a practical approach (2014) 0.01
    Abstract
    This paper presents an approach to the analysis, design and development of Knowledge Based Systems based on the Knowledge Artifact concept. Knowledge Artifacts can be meant as means to acquire, represent and maintain knowledge involved in complex problem solving activities. A complex problem is typically made of a huge number of parts that are put together according to a first set of constraints (i.e. the procedural knowledge), dependent on the functional properties it must satisfy, and a second set of rules, dependent on what the expert thinks about the problem and how he/she would represent it. The paper illustrates a way to unify both types of knowledge into a Knowledge Artifact, exploiting the Ontology, Influence Net and Task Structure formalisms and the metadata paradigm.
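    The split the abstract draws between procedural constraints and expert rules can be illustrated with a minimal data structure. The class name, fields, and example predicates below are invented for illustration and are not taken from the paper:

    ```python
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class KnowledgeArtifact:
        """Unifies two kinds of knowledge about a complex problem:
        procedural constraints a solution must satisfy, and expert
        rules encoding how the expert frames the problem."""
        procedural_constraints: List[Callable[[Dict], bool]] = field(default_factory=list)
        expert_rules: List[Callable[[Dict], bool]] = field(default_factory=list)

        def admissible(self, candidate: Dict) -> bool:
            # A candidate solution must pass both sets of checks.
            return (all(c(candidate) for c in self.procedural_constraints)
                    and all(r(candidate) for r in self.expert_rules))

    # Hypothetical example: a part-count constraint plus an expert preference.
    ka = KnowledgeArtifact(
        procedural_constraints=[lambda s: s["parts"] <= 10],
        expert_rules=[lambda s: s["material"] == "steel"],
    )
    print(ka.admissible({"parts": 8, "material": "steel"}))  # True
    ```

    The design choice worth noting is that the two rule sets stay separate: the procedural constraints can be validated against functional requirements, while the expert rules can evolve as the expert's framing of the problem changes.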
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al
    Type
    a

Languages

  • e 441
  • d 95
  • pt 5
  • el 1
  • f 1
  • sp 1

Types

  • a 419
  • el 147
  • m 26
  • x 22
  • n 13
  • s 13
  • p 5
  • r 5
  • A 1
  • EL 1
