Search (546 results, page 1 of 28)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.11
    Abstract
    In a scientific concept hierarchy, a parent concept may have several attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., SVM, kNN), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Type
    a
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.10
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations while taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts and to rank documents using their structure representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.08
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation). This leads to very low usefulness of retrieval results for the user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the highly ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, makes it necessary to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is a key issue for realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Qin, J.; Creticos, P.; Hsiao, W.Y.: Adaptive modeling of workforce domain knowledge (2006) 0.04
    Abstract
    Workforce development is a multidisciplinary domain in which policy, laws and regulations, social services, training and education, and information technology and systems are heavily involved. It is essential to have a semantic base accepted by the workforce development community for knowledge sharing and exchange. This paper describes how such a semantic base, the Workforce Open Knowledge Exchange (WOKE) Ontology, was built using the adaptive modeling approach. The focus of this paper is to address questions such as how ontology designers should extract and model concepts obtained from different sources and what methodologies are useful along the steps of ontology development. The paper proposes an "adaptive modeling" methodology framework and explains the methodology through examples and some lessons learned from the process of developing the WOKE ontology.
    Pages
    S.287-293
    Source
    Knowledge organization for a global learning society: Proceedings of the 9th International ISKO Conference, 4-7 July 2006, Vienna, Austria. Ed.: G. Budin, C. Swertz and K. Mitgutsch
    Type
    a
  5. Saruladha, K.; Aghila, G.; Penchala, S.K.: Design of new indexing techniques based on ontology for information retrieval systems (2010) 0.03
    Abstract
    Information Retrieval (IR) is the science of searching for documents, for information within documents, and for metadata about documents, as well as searching relational databases and the World Wide Web. This paper describes a document representation method that uses ontological descriptors instead of keywords. The purpose of this paper is to propose a system for content-based querying of texts, based on the availability of an ontology for the concepts in the text domain, and to develop new indexing methods to improve the RSV (retrieval status value). There is a need to query ontologies at various granularities to retrieve information from various sources, to suit the requirements of the Semantic Web and to eliminate the mismatch between the user's request and the Information Retrieval system's response. Most search engines use indexes that are built at the syntactical level and return hits based on simple string comparisons. The indexes do not contain synonyms, cannot differentiate between homonyms, and users receive different search results when they use different conjugation forms of the same word.
    Pages
    S.287-291
    Type
    a
  6. Hüsken, P.: Information Retrieval im Semantic Web (2006) 0.01
    Source
    http://www.is.informatik.uni-duisburg.de/bib/pdf/ir/Huesken:06.pdf
  7. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.01
    Pages
    S.11-22
    Type
    a
  8. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.01
    Abstract
    A discussion on current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  9. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.01
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
    Type
    a
  10. Deokattey, S.; Neelameghan, A.; Kumar, V.: ¬A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.01
    Abstract
    A method to develop a prototype domain ontology is described. The domain selected for the study is Accelerator Driven Systems. This is a multidisciplinary and interdisciplinary subject comprising Nuclear Physics, Nuclear and Reactor Engineering, Reactor Fuels and Radioactive Waste Management. Since Accelerator Driven Systems is a vast topic, select areas in it were singled out for the study. Both qualitative and quantitative methods, such as content analysis, facet analysis and clustering, were used to develop the web-based model.
    Date
    22. 7.2010 19:41:16
    Type
    a
  11. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.01
    Date
    1.10.2018 14:13:22
    Type
    a
  12. Priss, U.: Description logic and faceted knowledge representation (1999) 0.01
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
    Type
    a
  13. Priss, U.: Faceted information representation (2000) 0.01
    Abstract
    This paper presents an abstract formalization of the notion of "facets". Facets are relational structures of units, relations and other facets selected for a certain purpose. Facets can be used to structure large knowledge representation systems into a hierarchical arrangement of consistent and independent subsystems (facets) that facilitate flexibility and combinations of different viewpoints or aspects. This paper describes the basic notions, facet characteristics and construction mechanisms. It then explicates the theory in an example of a faceted information retrieval system (FaIR)
    Date
    22. 1.2016 17:47:06
    Type
    a
  14. Priss, U.: Faceted knowledge representation (1999) 0.01
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0 and 1's (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
    Type
    a
  15. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    Content
    One of a series of three publications influenced by the travelling exhibit Places & Spaces: Mapping Science, curated by the Cyberinfrastructure for Network Science Center at Indiana University. - Additional materials can be found at http://scimaps.org/atlas2. Extended by: Börner, Katy. Atlas of Science: Visualizing What We Know.
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  16. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    Date
    31. 7.2010 16:58:22
    Type
    a
  17. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.01
    Date
    22. 5.2021 12:43:05
    Type
    a
  18. Madalli, D.P.; Balaji, B.P.; Sarangi, A.K.: Music domain analysis for building faceted ontological representation (2014) 0.01
    Abstract
    This paper describes how to construct faceted ontologies for domain modeling. Building upon the faceted theory of S.R. Ranganathan (1967), the paper addresses the faceted classification approach as applied to building domain ontologies. As classificatory ontologies are employed to represent the relationships of entities and objects on the web, the faceted approach helps to analyze domain representation effectively for modeling. Based on this perspective, an ontology of the music domain has been analyzed to serve as a case study.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
  19. Mahesh, K.: Highly expressive tagging for knowledge organization in the 21st century (2014) 0.01
    Abstract
    Knowledge organization of large-scale content on the Web requires substantial amounts of semantic metadata that is expensive to generate manually. Recent developments in Web technologies have enabled any user to tag documents and other forms of content thereby generating metadata that could help organize knowledge. However, merely adding one or more tags to a document is highly inadequate to capture the aboutness of the document and thereby to support powerful semantic functions such as automatic classification, question answering or true semantic search and retrieval. This is true even when the tags used are labels from a well-designed classification system such as a thesaurus or taxonomy. There is a strong need to develop a semantic tagging mechanism with sufficient expressive power to capture the aboutness of each part of a document or dataset or multimedia content in order to enable applications that can benefit from knowledge organization on the Web. This article proposes a highly expressive mechanism of using ontology snippets as semantic tags that map portions of a document or a part of a dataset or a segment of a multimedia content to concepts and relations in an ontology of the domain(s) of interest.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
  20. Almeida Campos, M.L. de; Machado Campos, M.L.; Dávila, A.M.R.; Espanha Gomes, H.; Campos, L.M.; Lira e Oliveira, L. de: Information sciences methodological aspects applied to ontology reuse tools : a study based on genomic annotations in the domain of trypanosomatides (2013) 0.01
    Abstract
    Despite the dissemination of modeling languages and tools for the representation and construction of ontologies, their underlying methodologies can still be improved. As a consequence, ontology tools can be enhanced accordingly, in order to support users through the ontology construction process. This paper proposes suggestions for improving ontology tools based on a case study within the domain of bioinformatics, applying a reuse methodology. Quantitative and qualitative analyses were carried out on a subset of 28 terms of the Gene Ontology in a semi-automatic alignment with other biomedical ontologies. As a result, a report is presented containing suggestions for enhancing ontology reuse tools, a product derived from the difficulties that we had in reusing a set of OBO ontologies. For the reuse process, a set of steps closely related to those of Pinto and Martin's methodology was used. In each step, it was observed that the experiment would have been significantly improved if ontology manipulation tools had provided certain features. Accordingly, problematic aspects of ontology tools are presented and suggestions are made aimed at achieving better results in ontology reuse.
    Date
    22. 2.2013 12:03:53
    Type
    a

Languages

  • e 438
  • d 92
  • pt 5
  • el 1
  • f 1
  • sp 1

Types

  • a 419
  • el 144
  • m 23
  • x 23
  • n 13
  • s 11
  • p 5
  • r 5
  • A 1
  • EL 1
