Search (545 results, page 1 of 28)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.04
    0.04107998 = sum of:
      0.034231097 = product of:
        0.20538658 = sum of:
          0.20538658 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
            0.20538658 = score(doc=400,freq=2.0), product of:
              0.3654448 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.043105017 = queryNorm
              0.56201804 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.16666667 = coord(1/6)
      0.006848883 = product of:
        0.013697766 = sum of:
          0.013697766 = weight(_text_:a in 400) [ClassicSimilarity], result of:
            0.013697766 = score(doc=400,freq=26.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.27559727 = fieldWeight in 400, product of:
                5.0990195 = tf(freq=26.0), with freq of:
                  26.0 = termFreq=26.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.5 = coord(1/2)
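    The nested score breakdowns shown for each result are Lucene "explain" output for the classic TF-IDF similarity. As a minimal sketch (ours, not part of the search engine), the following reproduces this entry's score from the factors listed above: each matching clause contributes queryWeight × fieldWeight, scaled by a coordination factor.

```python
# Minimal sketch reproducing the score of result 1 from the Lucene
# ClassicSimilarity "explain" output above (our illustration, not the
# engine's code). For each matching term clause:
#   queryWeight = idf * queryNorm
#   fieldWeight = sqrt(termFreq) * idf * fieldNorm
# and the clause score is queryWeight * fieldWeight times a coord factor.
import math

QUERY_NORM = 0.043105017

def clause_score(term_freq, idf, field_norm, coord):
    tf = math.sqrt(term_freq)
    query_weight = idf * QUERY_NORM
    field_weight = tf * idf * field_norm
    return query_weight * field_weight * coord

part_3a = clause_score(2.0, 8.478011, 0.046875, 1 / 6)   # "_text_:3a" clause
part_a = clause_score(26.0, 1.153047, 0.046875, 1 / 2)   # "_text_:a" clause

print(round(part_3a, 9))           # ~0.034231097
print(round(part_a, 9))            # ~0.006848883
print(round(part_3a + part_a, 8))  # ~0.04107998
```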
    
    Abstract
    In a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm to infer parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
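    The hierarchy growth algorithm mentioned in the abstract infers parent-child links while keeping the hierarchy acyclic. Purely as an illustration of that acyclicity constraint (a hypothetical sketch, not the authors' implementation), a link can be accepted only if the proposed child cannot already reach the proposed parent:

```python
# Hypothetical sketch of the acyclicity check such a hierarchy-growth step
# might use (not the authors' implementation): accept a parent->child link
# only if the child cannot already reach the parent.
from collections import defaultdict

children = defaultdict(set)  # parent -> set of direct children

def reaches(src, dst):
    """True if dst is reachable from src via existing parent->child links."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(children[node])
    return False

def add_link(parent, child):
    """Add parent->child unless it would close a cycle."""
    if reaches(child, parent):
        return False  # conflict: the link would make the hierarchy cyclic
    children[parent].add(child)
    return True

print(add_link("classification", "svm"))   # True
print(add_link("svm", "classification"))   # False, rejected as a cycle
```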
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Type
    a
  2. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.03
    0.03236657 = product of:
      0.06473314 = sum of:
        0.06473314 = sum of:
          0.0063317944 = weight(_text_:a in 6089) [ClassicSimilarity], result of:
            0.0063317944 = score(doc=6089,freq=2.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.12739488 = fieldWeight in 6089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.078125 = fieldNorm(doc=6089)
          0.058401346 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
            0.058401346 = score(doc=6089,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.38690117 = fieldWeight in 6089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=6089)
      0.5 = coord(1/2)
    
    Pages
    S.11-22
    Type
    a
  3. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.03
    0.03236657 = product of:
      0.06473314 = sum of:
        0.06473314 = sum of:
          0.0063317944 = weight(_text_:a in 539) [ClassicSimilarity], result of:
            0.0063317944 = score(doc=539,freq=2.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.12739488 = fieldWeight in 539, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.078125 = fieldNorm(doc=539)
          0.058401346 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
            0.058401346 = score(doc=539,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.38690117 = fieldWeight in 539, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=539)
      0.5 = coord(1/2)
    
    Abstract
    A discussion on current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  4. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.03
    0.03236657 = product of:
      0.06473314 = sum of:
        0.06473314 = sum of:
          0.0063317944 = weight(_text_:a in 4523) [ClassicSimilarity], result of:
            0.0063317944 = score(doc=4523,freq=2.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.12739488 = fieldWeight in 4523, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.078125 = fieldNorm(doc=4523)
          0.058401346 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
            0.058401346 = score(doc=4523,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.38690117 = fieldWeight in 4523, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=4523)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
    Type
    a
  5. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    0.028193437 = sum of:
      0.022820732 = product of:
        0.13692439 = sum of:
          0.13692439 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
            0.13692439 = score(doc=701,freq=2.0), product of:
              0.3654448 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.043105017 = queryNorm
              0.3746787 = fieldWeight in 701, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=701)
        0.16666667 = coord(1/6)
      0.005372706 = product of:
        0.010745412 = sum of:
          0.010745412 = weight(_text_:a in 701) [ClassicSimilarity], result of:
            0.010745412 = score(doc=701,freq=36.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.2161963 = fieldWeight in 701, product of:
                6.0 = tf(freq=36.0), with freq of:
                  36.0 = termFreq=36.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03125 = fieldNorm(doc=701)
        0.5 = coord(1/2)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, not merely its representation). This leads to very low usefulness of the results of a retrieval process for the user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query, makes it necessary to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way, the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure that is strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need correctly, and to interpret the retrieval results accordingly, is key to realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  6. Deokattey, S.; Neelameghan, A.; Kumar, V.: ¬A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.03
    0.027448483 = product of:
      0.054896966 = sum of:
        0.054896966 = sum of:
          0.014016026 = weight(_text_:a in 3694) [ClassicSimilarity], result of:
            0.014016026 = score(doc=3694,freq=20.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.28200063 = fieldWeight in 3694, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3694)
          0.04088094 = weight(_text_:22 in 3694) [ClassicSimilarity], result of:
            0.04088094 = score(doc=3694,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.2708308 = fieldWeight in 3694, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3694)
      0.5 = coord(1/2)
    
    Abstract
    A method to develop a prototype domain ontology has been described. The domain selected for the study is Accelerator Driven Systems. This is a multidisciplinary and interdisciplinary subject comprising Nuclear Physics, Nuclear and Reactor Engineering, Reactor Fuels and Radioactive Waste Management. Since Accelerator Driven Systems is a vast topic, select areas in it were singled out for the study. Both qualitative and quantitative methods such as Content analysis, Facet analysis and Clustering were used, to develop the web-based model.
    Date
    22. 7.2010 19:41:16
    Type
    a
  7. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.03
    0.026942343 = product of:
      0.053884685 = sum of:
        0.053884685 = sum of:
          0.007163608 = weight(_text_:a in 4476) [ClassicSimilarity], result of:
            0.007163608 = score(doc=4476,freq=4.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.14413087 = fieldWeight in 4476, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=4476)
          0.04672108 = weight(_text_:22 in 4476) [ClassicSimilarity], result of:
            0.04672108 = score(doc=4476,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.30952093 = fieldWeight in 4476, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4476)
      0.5 = coord(1/2)
    
    Date
    1.10.2018 14:13:22
    Type
    a
  8. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.03
    0.02682531 = sum of:
      0.022820732 = product of:
        0.13692439 = sum of:
          0.13692439 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
            0.13692439 = score(doc=5820,freq=2.0), product of:
              0.3654448 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.043105017 = queryNorm
              0.3746787 = fieldWeight in 5820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.16666667 = coord(1/6)
      0.0040045786 = product of:
        0.008009157 = sum of:
          0.008009157 = weight(_text_:a in 5820) [ClassicSimilarity], result of:
            0.008009157 = score(doc=5820,freq=20.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.16114321 = fieldWeight in 5820, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.5 = coord(1/2)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word based and entity based representations together with their uncertainties considered. At last, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
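    As a rough illustration of the bag-of-entities idea described above (our sketch under simplified assumptions, not the thesis system), documents and queries can be represented as counts over entity identifiers and compared in that entity space:

```python
# Rough bag-of-entities sketch (illustrative only, not the thesis system):
# texts are represented by counts over linked entity IDs, and ranking is a
# simple overlap score in entity space. The entity IDs here are made up.
from collections import Counter

def bag_of_entities(entity_annotations):
    """entity_annotations: list of entity IDs linked in the text."""
    return Counter(entity_annotations)

def entity_overlap_score(query_bag, doc_bag):
    # real systems learn weights over this evidence instead of raw counts
    return sum(q_count * doc_bag.get(entity, 0)
               for entity, q_count in query_bag.items())

query = bag_of_entities(["E:information_retrieval"])
doc = bag_of_entities(["E:information_retrieval",
                       "E:information_retrieval",
                       "E:knowledge_base"])
print(entity_overlap_score(query, doc))  # 2
```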
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  9. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.03
    0.026677134 = product of:
      0.053354267 = sum of:
        0.053354267 = sum of:
          0.003799077 = weight(_text_:a in 3355) [ClassicSimilarity], result of:
            0.003799077 = score(doc=3355,freq=2.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.07643694 = fieldWeight in 3355, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
          0.04955519 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
            0.04955519 = score(doc=3355,freq=4.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.32829654 = fieldWeight in 3355, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
      0.5 = coord(1/2)
    
    Content
    One of a series of three publications influenced by the travelling exhibit Places & Spaces: Mapping Science, curated by the Cyberinfrastructure for Network Science Center at Indiana University. - Additional materials can be found at http://scimaps.org/atlas2. Extended by: Börner, Katy. Atlas of Science: Visualizing What We Know.
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  10. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.03
    0.025893256 = product of:
      0.051786512 = sum of:
        0.051786512 = sum of:
          0.0050654355 = weight(_text_:a in 3376) [ClassicSimilarity], result of:
            0.0050654355 = score(doc=3376,freq=2.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.10191591 = fieldWeight in 3376, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=3376)
          0.04672108 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
            0.04672108 = score(doc=3376,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.30952093 = fieldWeight in 3376, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3376)
      0.5 = coord(1/2)
    
    Date
    31. 7.2010 16:58:22
    Type
    a
  11. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.03
    0.025893256 = product of:
      0.051786512 = sum of:
        0.051786512 = sum of:
          0.0050654355 = weight(_text_:a in 318) [ClassicSimilarity], result of:
            0.0050654355 = score(doc=318,freq=2.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.10191591 = fieldWeight in 318, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=318)
          0.04672108 = weight(_text_:22 in 318) [ClassicSimilarity], result of:
            0.04672108 = score(doc=318,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.30952093 = fieldWeight in 318, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=318)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
    Type
    a
  12. Priss, U.: Faceted information representation (2000) 0.02
    0.024872728 = product of:
      0.049745455 = sum of:
        0.049745455 = sum of:
          0.008864513 = weight(_text_:a in 5095) [ClassicSimilarity], result of:
            0.008864513 = score(doc=5095,freq=8.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.17835285 = fieldWeight in 5095, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5095)
          0.04088094 = weight(_text_:22 in 5095) [ClassicSimilarity], result of:
            0.04088094 = score(doc=5095,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.2708308 = fieldWeight in 5095, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5095)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents an abstract formalization of the notion of "facets". Facets are relational structures of units, relations and other facets selected for a certain purpose. Facets can be used to structure large knowledge representation systems into a hierarchical arrangement of consistent and independent subsystems (facets) that facilitate flexibility and combinations of different viewpoints or aspects. This paper describes the basic notions, facet characteristics and construction mechanisms. It then explicates the theory in an example of a faceted information retrieval system (FaIR)
    Date
    22. 1.2016 17:47:06
    Type
    a
  13. Priss, U.: Faceted knowledge representation (1999) 0.02
    0.024872728 = product of:
      0.049745455 = sum of:
        0.049745455 = sum of:
          0.008864513 = weight(_text_:a in 2654) [ClassicSimilarity], result of:
            0.008864513 = score(doc=2654,freq=8.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.17835285 = fieldWeight in 2654, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
          0.04088094 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
            0.04088094 = score(doc=2654,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.2708308 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
      0.5 = coord(1/2)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0 and 1's (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
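    Read literally, the abstract's building blocks lend themselves to a very small data-structure sketch (our own reading, not Priss's formalism verbatim): units are atomic elements, a relation is a binary matrix over the units, and a facet bundles units with relations:

```python
# Small sketch of the notions named in the abstract (our own reading, not
# Priss's formalism verbatim): units, binary relations, and a facet that
# bundles them.
units = ["thesaurus", "broader term", "term"]

# contains[i][j] == 1 means units[i] stands in the relation to units[j]
contains = [
    [0, 1, 1],  # a thesaurus contains broader terms and terms
    [0, 0, 1],  # a broader term subsumes a term
    [0, 0, 0],
]

facet = {"units": units, "relations": {"contains": contains}}

def related(facet, relation, a, b):
    idx = facet["units"].index
    return facet["relations"][relation][idx(a)][idx(b)] == 1

print(related(facet, "contains", "thesaurus", "term"))  # True
print(related(facet, "contains", "term", "thesaurus"))  # False
```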
    Date
    22. 1.2016 17:30:31
    Type
    a
  14. Madalli, D.P.; Balaji, B.P.; Sarangi, A.K.: Music domain analysis for building faceted ontological representation (2014) 0.02
    0.02357455 = product of:
      0.0471491 = sum of:
        0.0471491 = sum of:
          0.006268157 = weight(_text_:a in 1437) [ClassicSimilarity], result of:
            0.006268157 = score(doc=1437,freq=4.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.12611452 = fieldWeight in 1437, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1437)
          0.04088094 = weight(_text_:22 in 1437) [ClassicSimilarity], result of:
            0.04088094 = score(doc=1437,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.2708308 = fieldWeight in 1437, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1437)
      0.5 = coord(1/2)
    
    Abstract
    This paper describes how to construct faceted ontologies for domain modeling. Building upon the faceted theory of S.R. Ranganathan (1967), the paper addresses the faceted classification approach as applied to building domain ontologies. As classificatory ontologies are employed to represent the relationships of entities and objects on the web, the faceted approach helps to analyze domain representation effectively for modeling. From this perspective, an ontology of the music domain has been analyzed to serve as a case study.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
  15. Priss, U.: Description logic and faceted knowledge representation (1999) 0.02
    0.023527272 = product of:
      0.047054544 = sum of:
        0.047054544 = sum of:
          0.012013736 = weight(_text_:a in 2655) [ClassicSimilarity], result of:
            0.012013736 = score(doc=2655,freq=20.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.24171482 = fieldWeight in 2655, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
          0.035040807 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
            0.035040807 = score(doc=2655,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.23214069 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
      0.5 = coord(1/2)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
    Type
    a
  16. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie (2005) 0.02
    0.022656599 = product of:
      0.045313198 = sum of:
        0.045313198 = sum of:
          0.0044322563 = weight(_text_:a in 1852) [ClassicSimilarity], result of:
            0.0044322563 = score(doc=1852,freq=2.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.089176424 = fieldWeight in 1852, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1852)
          0.04088094 = weight(_text_:22 in 1852) [ClassicSimilarity], result of:
            0.04088094 = score(doc=1852,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.2708308 = fieldWeight in 1852, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1852)
      0.5 = coord(1/2)
    
    Date
    11. 2.2011 18:22:58
    Type
    a
  17. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.02
    0.022656599 = product of:
      0.045313198 = sum of:
        0.045313198 = sum of:
          0.0044322563 = weight(_text_:a in 4792) [ClassicSimilarity], result of:
            0.0044322563 = score(doc=4792,freq=2.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.089176424 = fieldWeight in 4792, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4792)
          0.04088094 = weight(_text_:22 in 4792) [ClassicSimilarity], result of:
            0.04088094 = score(doc=4792,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.2708308 = fieldWeight in 4792, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4792)
      0.5 = coord(1/2)
    
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
    Type
    a
  18. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.02
    0.022645973 = sum of:
      0.020113256 = product of:
        0.12067953 = sum of:
          0.12067953 = weight(_text_:baker in 4796) [ClassicSimilarity], result of:
            0.12067953 = score(doc=4796,freq=2.0), product of:
              0.34308222 = queryWeight, product of:
                7.9592175 = idf(docFreq=41, maxDocs=44218)
                0.043105017 = queryNorm
              0.35175103 = fieldWeight in 4796, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.9592175 = idf(docFreq=41, maxDocs=44218)
                0.03125 = fieldNorm(doc=4796)
        0.16666667 = coord(1/6)
      0.0025327178 = product of:
        0.0050654355 = sum of:
          0.0050654355 = weight(_text_:a in 4796) [ClassicSimilarity], result of:
            0.0050654355 = score(doc=4796,freq=8.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.10191591 = fieldWeight in 4796, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03125 = fieldNorm(doc=4796)
        0.5 = coord(1/2)
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
    Key recommendations of the report are: - That library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights; - That library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data; - That data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies; - That librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
  19. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.02
    0.021890612 = sum of:
      0.017599098 = product of:
        0.10559458 = sum of:
          0.10559458 = weight(_text_:baker in 3062) [ClassicSimilarity], result of:
            0.10559458 = score(doc=3062,freq=2.0), product of:
              0.34308222 = queryWeight, product of:
                7.9592175 = idf(docFreq=41, maxDocs=44218)
                0.043105017 = queryNorm
              0.30778214 = fieldWeight in 3062, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.9592175 = idf(docFreq=41, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3062)
        0.16666667 = coord(1/6)
      0.004291514 = product of:
        0.008583028 = sum of:
          0.008583028 = weight(_text_:a in 3062) [ClassicSimilarity], result of:
            0.008583028 = score(doc=3062,freq=30.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.17268941 = fieldWeight in 3062, product of:
                5.477226 = tf(freq=30.0), with freq of:
                  30.0 = termFreq=30.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3062)
        0.5 = coord(1/2)
    
    Abstract
    18 August 2009 -- Today W3C announces a new standard that builds a bridge between the world of knowledge organization systems - including thesauri, classifications, subject headings, taxonomies, and folksonomies - and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use Simple Knowledge Organization System (SKOS) to leverage the power of linked data. As different communities with expertise and established vocabularies use SKOS to integrate them into the Semantic Web, they increase the value of the information for everyone.
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found. As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions. SKOS can be used for subject headings but also many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. "One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud." SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies. But the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
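    As a minimal sketch of what publishing such a heading as SKOS looks like in practice (our illustration using rdflib; the URIs are made up, not real LOC or RAMEAU identifiers), the following builds a concept with a preferred label, a broader link, and a mapping to another vocabulary:

```python
# Minimal SKOS sketch with rdflib (our own illustration; the URIs are made
# up, not real LOC or RAMEAU identifiers).
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/headings/")
g = Graph()
g.bind("skos", SKOS)

drama = EX["chinese-drama"]
literature = EX["chinese-literature"]
rameau = URIRef("http://example.org/rameau/theatre-chinois")

g.add((drama, RDF.type, SKOS.Concept))
g.add((drama, SKOS.prefLabel, Literal("Chinese drama", lang="en")))
g.add((drama, SKOS.broader, literature))   # broader/narrower hierarchy
g.add((drama, SKOS.closeMatch, rameau))    # mapping to another vocabulary

print(g.serialize(format="turtle"))
```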
  20. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.02
    0.02131948 = product of:
      0.04263896 = sum of:
        0.04263896 = sum of:
          0.007598154 = weight(_text_:a in 2418) [ClassicSimilarity], result of:
            0.007598154 = score(doc=2418,freq=8.0), product of:
              0.049702108 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.043105017 = queryNorm
              0.15287387 = fieldWeight in 2418, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2418)
          0.035040807 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
            0.035040807 = score(doc=2418,freq=2.0), product of:
              0.15094642 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043105017 = queryNorm
              0.23214069 = fieldWeight in 2418, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2418)
      0.5 = coord(1/2)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
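    As a toy stand-in for the ontology mapping step described above (not the matching tools actually used in the paper), two controlled vocabularies can be aligned by normalized preferred labels:

```python
# Toy vocabulary alignment by normalized label equality (a stand-in for the
# ontology mapping tools used in the paper, not their actual method).
def normalize(label):
    return " ".join(label.lower().split())

def align(vocab_a, vocab_b):
    """vocab_a, vocab_b: dicts mapping concept IDs to preferred labels."""
    index_b = {normalize(lbl): cid for cid, lbl in vocab_b.items()}
    return [(cid_a, index_b[normalize(lbl)])
            for cid_a, lbl in vocab_a.items()
            if normalize(lbl) in index_b]

collection_a = {"a:012": "Woodcut", "a:034": "Etching"}
collection_b = {"b:77": "woodcut", "b:81": "Lithograph"}
print(align(collection_a, collection_b))  # [('a:012', 'b:77')]
```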
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
    Type
    a

Languages

  • e 438
  • d 91
  • pt 5
  • el 1
  • f 1
  • sp 1

Types

  • a 419
  • el 143
  • m 23
  • x 22
  • n 13
  • s 11
  • p 5
  • r 5
  • A 1
  • EL 1
