Search (360 results, page 1 of 18)

  • theme_ss:"Wissensrepräsentation"
  1. Xu, Y.; Li, G.; Mou, L.; Lu, Y.: Learning non-taxonomic relations on demand for ontology extension (2014) 0.11
    0.10631579 = product of:
      0.15947369 = sum of:
        0.048011016 = weight(_text_:on in 2961) [ClassicSimilarity], result of:
          0.048011016 = score(doc=2961,freq=18.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.43740597 = fieldWeight in 2961, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2961)
        0.111462675 = product of:
          0.22292535 = sum of:
            0.22292535 = weight(_text_:demand in 2961) [ClassicSimilarity], result of:
              0.22292535 = score(doc=2961,freq=6.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.716166 = fieldWeight in 2961, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2961)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
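    The indented breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, the snippet below reproduces the top-level numbers of this record from the constants that appear in the tree; queryNorm is taken as given rather than derived:

```python
import math

# Constants read off the explain tree above; queryNorm is taken as given.
QUERY_NORM = 0.04990557
MAX_DOCS = 44218

def idf(doc_freq: int) -> float:
    # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_weight(freq: float, doc_freq: int, field_norm: float) -> float:
    # weight = queryWeight * fieldWeight
    #        = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    query_weight = idf(doc_freq) * QUERY_NORM
    field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
    return query_weight * field_weight

# Record 1 (doc 2961): "on" (freq=18, docFreq=13325) plus "demand"
# (freq=6, docFreq=234); the "demand" clause carries coord(1/2) = 0.5.
on_w = term_weight(18, 13325, 0.046875)          # ~0.048011
demand_w = term_weight(6, 234, 0.046875) * 0.5   # ~0.111463
score = (on_w + demand_w) * (2 / 3)              # coord(2/3) -> ~0.106316
print(f"{on_w:.6f} {demand_w:.6f} {score:.6f}")
```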
    
    Abstract
    Learning non-taxonomic relations has become an important research topic in ontology extension. Most existing learning approaches are based on expert-crafted corpora. These approaches are normally domain-specific, and corpus acquisition is laborious and costly. Moreover, static corpora cannot meet the personalized needs of semantic-relation discovery for different taxonomies. In this paper, we propose a novel approach for learning non-taxonomic relations on demand. For any supplied taxonomy, it can focus on a segment of the taxonomy and dynamically collect information about the taxonomic concepts, using Wikipedia as a learning source. Based on the newly generated corpus, non-taxonomic relations are acquired in three steps: a) semantic relatedness detection; b) relation extraction between concepts; and c) relation generalization within a hierarchy. The proposed approach is evaluated on three different predefined taxonomies, and the experimental results show that it is effective in capturing non-taxonomic relations as needed and has good potential for on-demand ontology extension.
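    As a toy illustration of step (b), relation extraction between concepts, the sketch below pulls verb-mediated relations out of two stand-in sentences. The corpus, concept list and pattern are all invented for illustration and are far simpler than the paper's actual method:

```python
import re

# Toy corpus standing in for dynamically collected Wikipedia text
# (assumption: the paper's extraction is more sophisticated; this only
# illustrates the "relation extraction between concepts" step).
corpus = [
    "An enzyme catalyzes a reaction.",
    "A virus infects a cell.",
]
concepts = {"enzyme", "reaction", "virus", "cell"}

def extract_relations(sentences, concepts):
    # Very naive pattern: <concept> <verb ending in s> a/an/the <concept>
    pattern = re.compile(r"\b(\w+)\s+(\w+s)\s+(?:a|an|the)\s+(\w+)\b")
    rels = []
    for s in sentences:
        m = pattern.search(s.lower())
        if m and m.group(1) in concepts and m.group(3) in concepts:
            rels.append((m.group(1), m.group(2), m.group(3)))
    return rels

print(extract_relations(corpus, concepts))
# [('enzyme', 'catalyzes', 'reaction'), ('virus', 'infects', 'cell')]
```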
  2. Cimiano, P.; Völker, J.; Studer, R.: Ontologies on demand? : a description of the state-of-the-art, applications, challenges and trends for ontology learning from text (2006) 0.08
    0.075761005 = product of:
      0.1136415 = sum of:
        0.02263261 = weight(_text_:on in 6014) [ClassicSimilarity], result of:
          0.02263261 = score(doc=6014,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 6014, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=6014)
        0.09100889 = product of:
          0.18201777 = sum of:
            0.18201777 = weight(_text_:demand in 6014) [ClassicSimilarity], result of:
              0.18201777 = score(doc=6014,freq=4.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.5847471 = fieldWeight in 6014, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6014)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontologies are nowadays used for many applications that require data, services and resources to be interoperable and machine-understandable, for example web service discovery and composition, information integration across databases, and intelligent search. The general idea is that data and services are semantically described with respect to ontologies, which are formal specifications of a domain of interest, and can thus be shared and reused such that the shared meaning specified by the ontology remains formally the same across different parties and applications. As the cost of creating ontologies is relatively high, different proposals have emerged for learning ontologies from structured and unstructured resources. In this article we examine the maturity of techniques for ontology learning from textual resources, addressing the question of whether the state of the art is mature enough to produce ontologies 'on demand'.
  3. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.07
    0.067930594 = product of:
      0.10189588 = sum of:
        0.07926327 = product of:
          0.23778981 = sum of:
            0.23778981 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.23778981 = score(doc=400,freq=2.0), product of:
                0.42309996 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04990557 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.02263261 = weight(_text_:on in 400) [ClassicSimilarity], result of:
          0.02263261 = score(doc=400,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 400, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.6666667 = coord(2/3)
    
    Abstract
    In a scientific concept hierarchy, a parent concept may have a few attributes, each of which takes multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., SVM, kNN), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
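    A minimal sketch of the conflict-resolution idea mentioned at the end: reject an inferred parent-child link whenever it would close a cycle. The concept names and data structure are illustrative, not taken from the paper:

```python
from collections import defaultdict

children = defaultdict(set)  # parent -> set of direct children

def reaches(src, dst):
    # Depth-first check: is dst reachable from src via child links?
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(children[node])
    return False

def add_link(parent, child):
    # Keep the hierarchy acyclic: adding parent->child must not close a cycle.
    if reaches(child, parent):
        return False  # conflict: child is already an ancestor of parent
    children[parent].add(child)
    return True

add_link("classification", "svm")
add_link("model", "svm")
print(add_link("svm", "classification"))  # False: would create a cycle
```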
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  4. Wong, W.; Liu, W.; Bennamoun, M.: Ontology learning from text : a look back and into the future (2010) 0.06
    0.062499635 = product of:
      0.09374945 = sum of:
        0.01867095 = weight(_text_:on in 4733) [ClassicSimilarity], result of:
          0.01867095 = score(doc=4733,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 4733, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4733)
        0.0750785 = product of:
          0.150157 = sum of:
            0.150157 = weight(_text_:demand in 4733) [ClassicSimilarity], result of:
              0.150157 = score(doc=4733,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.48239172 = fieldWeight in 4733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4733)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the "Read/Write" Web, coupled with the increasing demand for ontologies to power the Semantic Web, has made (semi-)automatic ontology learning from text a very promising research area. This, together with the advanced state of related areas such as natural language processing, has fuelled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium and discusses the remaining challenges that will define the research directions in this area in the near future.
  5. Noy, N.F.: Knowledge representation for intelligent information retrieval in experimental sciences (1997) 0.04
    0.044505917 = product of:
      0.06675887 = sum of:
        0.023856867 = weight(_text_:on in 694) [ClassicSimilarity], result of:
          0.023856867 = score(doc=694,freq=10.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.21734878 = fieldWeight in 694, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=694)
        0.042902 = product of:
          0.085804 = sum of:
            0.085804 = weight(_text_:demand in 694) [ClassicSimilarity], result of:
              0.085804 = score(doc=694,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2756524 = fieldWeight in 694, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.03125 = fieldNorm(doc=694)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    More and more information is available on-line every day. The greater the amount of on-line information, the greater the demand for tools that process and disseminate it. Processing electronic information in the form of text and intelligently answering users' queries about that information are among the great challenges in natural language processing and information retrieval. The research presented in this talk centres on the latter of these two tasks: intelligent information retrieval. For information to be retrieved, it first needs to be formalized in a database or knowledge base. The ontology used for this formalization, and the assumptions it is based on, are crucial to successful intelligent information retrieval. We have concentrated our effort on developing an ontology for representing knowledge in the domains of the experimental sciences, molecular biology in particular. We show that existing ontological models cannot readily be applied to represent this domain adequately. For example, the fundamental notion of ontology design that every "real" object is defined as an instance of a category seems incompatible with a universe where objects can change their category as a result of experimental procedures. Another important problem is representing complex structures, such as DNA, mixtures, populations of molecules, etc., that are very common in molecular biology. We present the extensions that need to be made to an ontology to cover these issues: the representation of transformations that change the structure and/or category of their participants, and the component relations and spatial structures of complex objects. We demonstrate examples of how the proposed representations can be used to improve the quality and completeness of answers to user queries, discuss techniques for evaluating ontologies, and show a prototype of an information retrieval system that we developed.
  6. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.04
    0.042826824 = product of:
      0.06424023 = sum of:
        0.021338228 = weight(_text_:on in 117) [ClassicSimilarity], result of:
          0.021338228 = score(doc=117,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 117, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=117)
        0.042902 = product of:
          0.085804 = sum of:
            0.085804 = weight(_text_:demand in 117) [ClassicSimilarity], result of:
              0.085804 = score(doc=117,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2756524 = fieldWeight in 117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.03125 = fieldNorm(doc=117)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Information management in corporate environments is a growing problem as companies' information assets grow, along with the need to use them in their operations. Several management models have been applied on the most diverse fronts, practices that together constitute so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, which are increasingly used in corporate environments, adding semantics and expanding their retrieval potential for the composition and reuse of dynamic documents on demand. The proposed model considers stable elements in the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors and business processes. The methodology considered the state of the art of information organization, representation and retrieval, organizational content management and Semantic Web technologies in the scientific literature as the basis for establishing an integrative conceptual model; the research is therefore qualitative and exploratory. The steps of the model are: environment, data type and source definition, data distillation, metadata enrichment, and storage. As a result, in theoretical terms the extended model allows heterogeneous and unstructured data to be processed according to the established cut-outs and through the processes listed above, allowing value creation in the composition of dynamic information objects with semantic aggregations to their metadata.
  7. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.04
    0.042340867 = product of:
      0.0635113 = sum of:
        0.05284218 = product of:
          0.15852654 = sum of:
            0.15852654 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.15852654 = score(doc=701,freq=2.0), product of:
                0.42309996 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04990557 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.010669114 = weight(_text_:on in 701) [ClassicSimilarity], result of:
          0.010669114 = score(doc=701,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.097201325 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.6666667 = coord(2/3)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information-overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation), which makes the results of a retrieval process of very little use for the user's task at hand. Over the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is highly ambiguous: a user who is unfamiliar with the underlying repository and/or query syntax only approximates his information need in a query. This implies the need to involve the user more actively in the retrieval process, in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query-evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue in realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  8. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.04
    0.040320244 = product of:
      0.060480364 = sum of:
        0.026672786 = weight(_text_:on in 539) [ClassicSimilarity], result of:
          0.026672786 = score(doc=539,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24300331 = fieldWeight in 539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=539)
        0.03380758 = product of:
          0.06761516 = sum of:
            0.06761516 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
              0.06761516 = score(doc=539,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.38690117 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=539)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A discussion of current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  9. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.04
    0.038148586 = product of:
      0.057222877 = sum of:
        0.030176813 = weight(_text_:on in 4476) [ClassicSimilarity], result of:
          0.030176813 = score(doc=4476,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.27492687 = fieldWeight in 4476, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=4476)
        0.027046064 = product of:
          0.054092128 = sum of:
            0.054092128 = weight(_text_:22 in 4476) [ClassicSimilarity], result of:
              0.054092128 = score(doc=4476,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.30952093 = fieldWeight in 4476, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4476)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes the types of representations used in different theories of abstraction. Shows how the type of mapping between these representations has been increasingly generalised. Discusses desirable properties preserved by such mappings and identifies how these properties are influenced by the mappings and the representations defined. Surveys progress made in understanding the complexity reduction associated with abstraction, focusing on formal models of how abstraction reduces the search space. Presents some of the systems that implement abstraction and shows how efforts in this area have focused on the mechanisation of languages for the declarative representation of abstraction.
    Date
    1.10.2018 14:13:22
  10. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.04
    0.037379898 = product of:
      0.056069843 = sum of:
        0.0357853 = weight(_text_:on in 3387) [ClassicSimilarity], result of:
          0.0357853 = score(doc=3387,freq=10.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.32602316 = fieldWeight in 3387, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=3387)
        0.020284547 = product of:
          0.040569093 = sum of:
            0.040569093 = weight(_text_:22 in 3387) [ClassicSimilarity], result of:
              0.040569093 = score(doc=3387,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.23214069 = fieldWeight in 3387, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3387)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Libraries are the tools we use to learn and to answer our questions. The quality of our work depends, among other things, on the quality of the tools we use. Recent research in digital libraries has focused on the one hand on improving the infrastructure of digital library management systems (DLMS), and on the other on improving the metadata models used to annotate the collections of objects maintained by a DLMS. The latter includes, among others, semantic web and social networking technologies, which are currently being introduced to the digital libraries domain. The expected outcome is that the overall quality of information discovery in digital libraries can be improved by employing social and semantic technologies. In this chapter we present the results of an evaluation of social and semantic end-user information discovery services for digital libraries.
    Date
    1. 8.2010 12:35:22
  11. Madalli, D.P.; Balaji, B.P.; Sarangi, A.K.: Music domain analysis for building faceted ontological representation (2014) 0.03
    0.033380013 = product of:
      0.050070018 = sum of:
        0.026404712 = weight(_text_:on in 1437) [ClassicSimilarity], result of:
          0.026404712 = score(doc=1437,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24056101 = fieldWeight in 1437, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1437)
        0.023665305 = product of:
          0.04733061 = sum of:
            0.04733061 = weight(_text_:22 in 1437) [ClassicSimilarity], result of:
              0.04733061 = score(doc=1437,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2708308 = fieldWeight in 1437, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1437)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper describes how to construct faceted ontologies for domain modeling. Building upon the faceted theory of S.R. Ranganathan (1967), it applies the faceted classification approach to building domain ontologies. As classificatory ontologies are employed to represent the relationships of entities and objects on the web, the faceted approach helps to analyze domain representation in an effective way for modeling. Based on this perspective, an ontology of the music domain has been analyzed, serving as a case study.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  12. ISO 25964 Thesauri and interoperability with other vocabularies (2008) 0.03
    0.032120116 = product of:
      0.048180174 = sum of:
        0.016003672 = weight(_text_:on in 1169) [ClassicSimilarity], result of:
          0.016003672 = score(doc=1169,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 1169, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1169)
        0.032176502 = product of:
          0.064353004 = sum of:
            0.064353004 = weight(_text_:demand in 1169) [ClassicSimilarity], result of:
              0.064353004 = score(doc=1169,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2067393 = fieldWeight in 1169, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1169)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Part 1: Today's thesauri are mostly electronic tools, having moved on from the paper-based era in which thesaurus standards were first developed. They are built and maintained with the support of software and need to integrate with other software, such as search engines and content management systems. Whereas in the past thesauri were designed for information professionals trained in indexing and searching, today there is a demand for vocabularies that untrained users will find intuitive. ISO 25964 makes the transition needed for the world of electronic information management. However, part 1 retains the assumption that human intellect is usually involved in the selection of indexing terms and in the selection of search terms. If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved. This is the main principle underlying thesaurus design, even though a thesaurus built for human users may also be applied in situations where computers make the choices. Efficient exchange of data is a vital component of thesaurus management and exploitation; hence the inclusion in this standard of recommendations for exchange formats and protocols, adoption of which will facilitate interoperability between thesaurus management systems and the other computer applications, such as indexing and retrieval systems, that will utilize the data. Thesauri are typically used in post-coordinate retrieval systems, but may also be applied to hierarchical directories, pre-coordinate indexes and classification systems. Increasingly, thesaurus applications need to mesh with others, such as automatic categorization schemes, free-text search systems, etc. Part 2 of ISO 25964 describes additional types of structured vocabulary and gives recommendations to enable interoperation of the vocabularies at all stages of the information storage and retrieval process.
    Part 2: The ability to identify and locate relevant information among vast collections and other resources is a major and pressing challenge today. Several different types of vocabulary are in use for this purpose. Some of the most widely used vocabularies were designed a hundred years ago and have been evolving steadily. A different generation of vocabularies is now emerging, designed to exploit the electronic media more effectively. A good understanding of the previous generation is still essential for effective access to collections indexed with them. An important objective of ISO 25964 as a whole is to support data exchange and other forms of interoperability in circumstances in which more than one structured vocabulary is applied within one retrieval system or network. Sometimes one vocabulary has to be mapped to another, and it is important to understand both the potential and the limitations of such mappings. In other systems, a thesaurus is mapped to a classification scheme, or an ontology to a thesaurus. Comprehensive interoperability needs to cover the whole range of vocabulary types, whether young or old. Concepts in different vocabularies are related only in that they have the same or similar meaning. However, the meaning can be found in a number of different aspects within each particular type of structured vocabulary: within terms or captions selected in different languages; in the notation assigned indicating a place within a larger hierarchy; in the definition, scope notes, history notes and other notes that explain the significance of that concept; and in explicit relationships to other concepts or entities within the same vocabulary. In order to create mappings from one structured vocabulary to another it is first necessary to understand, within the context of each different type of structured vocabulary, the significance and relative importance of each of the different elements in defining the meaning of that particular concept. ISO 25964-1 describes the key characteristics of thesauri along with additional advice on best practice. ISO 25964-2 focuses on other types of vocabulary and does not attempt to cover all aspects of good practice. It concentrates on those aspects which need to be understood if one of the vocabularies is to work effectively alongside one or more of the others. Recognizing that a new standard cannot be applied to some existing vocabularies, this part of ISO 25964 provides informative description alongside the recommendations, the aim of which is to enable users and system developers to interpret and implement the existing vocabularies effectively. The remainder of ISO 25964-2 deals with the principles and practicalities of establishing mappings between vocabularies.
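    On the exchange-format side, vocabulary mappings of this kind are commonly serialized in SKOS. A minimal sketch using rdflib, with invented concept URIs; this illustrates the general idea of an inter-vocabulary mapping statement, not a format defined by ISO 25964 itself:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import SKOS

g = Graph()
g.bind("skos", SKOS)

# Hypothetical concept URIs in two different vocabularies.
thesaurus_concept = URIRef("http://example.org/thesaurus/ontology")
classification_class = URIRef("http://example.org/classification/006.332")

# Assert that the two concepts carry the same meaning across vocabularies.
g.add((thesaurus_concept, SKOS.exactMatch, classification_class))

print(g.serialize(format="turtle"))
```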
  13. Priss, U.: Description logic and faceted knowledge representation (1999) 0.03
    0.032002483 = product of:
      0.048003722 = sum of:
        0.027719175 = weight(_text_:on in 2655) [ClassicSimilarity], result of:
          0.027719175 = score(doc=2655,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.25253648 = fieldWeight in 2655, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2655)
        0.020284547 = product of:
          0.040569093 = sum of:
            0.040569093 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
              0.040569093 = score(doc=2655,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.23214069 = fieldWeight in 2655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2655)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  14. Almeida Campos, M.L. de; Machado Campos, M.L.; Dávila, A.M.R.; Espanha Gomes, H.; Campos, L.M.; Lira e Oliveira, L. de: Information sciences methodological aspects applied to ontology reuse tools : a study based on genomic annotations in the domain of trypanosomatides (2013) 0.03
    0.02905105 = product of:
      0.043576576 = sum of:
        0.026672786 = weight(_text_:on in 635) [ClassicSimilarity], result of:
          0.026672786 = score(doc=635,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24300331 = fieldWeight in 635, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=635)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 635) [ClassicSimilarity], result of:
              0.03380758 = score(doc=635,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=635)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Despite the dissemination of modeling languages and tools for the representation and construction of ontologies, their underlying methodologies can still be improved, and ontology tools can be enhanced accordingly in order to support users through the ontology construction process. This paper proposes suggestions for improving ontology tools based on a case study within the domain of bioinformatics, applying a reuse methodology. Quantitative and qualitative analyses were carried out on a subset of 28 terms of the Gene Ontology in a semi-automatic alignment with other biomedical ontologies. As a result, a report is presented containing suggestions for enhancing ontology reuse tools, a product derived from the difficulties that we had in reusing a set of OBO ontologies. For the reuse process, a set of steps closely related to those of Pinto and Martin's methodology was used. In each step, it was observed that the experiment would have been significantly improved if ontology manipulation tools had provided certain features. Accordingly, problematic aspects of ontology tools are presented and suggestions are made aimed at getting better results in ontology reuse.
    Date
    22. 2.2013 12:03:53
  15. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.03
    0.028611436 = product of:
      0.042917155 = sum of:
        0.02263261 = weight(_text_:on in 2418) [ClassicSimilarity], result of:
          0.02263261 = score(doc=2418,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 2418, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
        0.020284547 = product of:
          0.040569093 = sum of:
            0.040569093 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
              0.040569093 = score(doc=2418,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.23214069 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2418)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
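    In the simplest case, the formalisation-and-matching step such experiments rely on reduces to aligning concepts whose normalized labels coincide. A toy sketch with invented vocabularies; real ontology mapping tools go well beyond this:

```python
def normalize(label):
    # Lowercase and collapse whitespace before comparing labels.
    return " ".join(label.lower().split())

def lexical_align(vocab_a, vocab_b):
    # Map concept ids whose normalized preferred labels coincide.
    index = {normalize(lbl): cid for cid, lbl in vocab_b.items()}
    return {cid_a: index[normalize(lbl)]
            for cid_a, lbl in vocab_a.items()
            if normalize(lbl) in index}

vocab_a = {"A1": "Glass  painting", "A2": "Furniture"}
vocab_b = {"B7": "glass painting", "B9": "Clocks"}
print(lexical_align(vocab_a, vocab_b))  # {'A1': 'B7'}
```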
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  16. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.03
    0.028611436 = product of:
      0.042917155 = sum of:
        0.02263261 = weight(_text_:on in 4649) [ClassicSimilarity], result of:
          0.02263261 = score(doc=4649,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 4649, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=4649)
        0.020284547 = product of:
          0.040569093 = sum of:
            0.040569093 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.040569093 = score(doc=4649,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
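    Both Web-based measures named in the abstract have standard formulations in terms of page-hit counts. A small sketch computing them; the counts below are invented for illustration:

```python
import math

def ngd(fx, fy, fxy, n):
    # Normalized Google Distance from hit counts fx, fy, fxy out of n pages.
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

def pmi(fx, fy, fxy, n):
    # Pointwise mutual information over the same counts.
    return math.log((fxy / n) / ((fx / n) * (fy / n)))

# Hypothetical hit counts for two museum-related terms.
print(ngd(120_000, 80_000, 25_000, 10_000_000))
print(pmi(120_000, 80_000, 25_000, 10_000_000))
```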
    Date
    26.12.2011 13:40:22
  17. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: ¬A deductive data model for query expansion (1996) 0.03
    0.028611436 = product of:
      0.042917155 = sum of:
        0.02263261 = weight(_text_:on in 2230) [ClassicSimilarity], result of:
          0.02263261 = score(doc=2230,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 2230, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2230)
        0.020284547 = product of:
          0.040569093 = sum of:
            0.040569093 = weight(_text_:22 in 2230) [ClassicSimilarity], result of:
              0.040569093 = score(doc=2230,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.23214069 = fieldWeight in 2230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2230)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We present a deductive data model for concept-based query expansion. It is based on three abstraction levels: the conceptual, the expression (linguistic) and the occurrence level. Concepts and the relationships among them are represented at the conceptual level. The expression level represents natural-language expressions for concepts. Each expression has one or more matching models at the occurrence level, and each model specifies how the expression is matched in database indices built in varying ways. The data model supports a concept-based query expansion and formulation tool, the ExpansionTool, for environments providing heterogeneous IR systems. Expansion is controlled by adjustable matching reliability.
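    A toy sketch of the conceptual/expression split: a query formulated at the concept level is expanded into the natural-language expressions of the concept and its related concepts. The vocabulary here is invented, and the per-index matching models of the occurrence level are omitted:

```python
# Conceptual level: concept -> related concepts (e.g. synonyms, narrower terms).
related = {"vessel": ["ship", "boat"]}
# Expression level: concept -> natural-language expressions for matching.
expressions = {
    "vessel": ["vessel", "vessels"],
    "ship": ["ship", "ships"],
    "boat": ["boat", "boats"],
}

def expand(concept):
    # Collect expressions for the concept and everything related to it.
    terms = []
    for c in [concept] + related.get(concept, []):
        terms.extend(expressions.get(c, []))
    return " OR ".join(terms)

print(expand("vessel"))
# vessel OR vessels OR ship OR ships OR boat OR boats
```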
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al
  18. Priss, U.: Faceted information representation (2000) 0.03
    0.02822417 = product of:
      0.042336255 = sum of:
        0.01867095 = weight(_text_:on in 5095) [ClassicSimilarity], result of:
          0.01867095 = score(doc=5095,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 5095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5095)
        0.023665305 = product of:
          0.04733061 = sum of:
            0.04733061 = weight(_text_:22 in 5095) [ClassicSimilarity], result of:
              0.04733061 = score(doc=5095,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2708308 = fieldWeight in 5095, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5095)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 1.2016 17:47:06
    Source
    Working with conceptual structures: contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000. Ed.: G. Stumme
  19. Mayfield, J.; Finin, T.: Information retrieval on the Semantic Web : integrating inference and retrieval 0.03
    0.02822417 = product of:
      0.042336255 = sum of:
        0.01867095 = weight(_text_:on in 4330) [ClassicSimilarity], result of:
          0.01867095 = score(doc=4330,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 4330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4330)
        0.023665305 = product of:
          0.04733061 = sum of:
            0.04733061 = weight(_text_:22 in 4330) [ClassicSimilarity], result of:
              0.04733061 = score(doc=4330,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2708308 = fieldWeight in 4330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4330)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    12. 2.2011 17:35:22
  20. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.03
    0.026668733 = product of:
      0.0400031 = sum of:
        0.02309931 = weight(_text_:on in 633) [ClassicSimilarity], result of:
          0.02309931 = score(doc=633,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.21044704 = fieldWeight in 633, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=633)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 633) [ClassicSimilarity], result of:
              0.03380758 = score(doc=633,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 633, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=633)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and for improving information retrieval during navigation. The main objective was to assess the possibility of applying topic map technology to automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured in the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources themselves. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages, developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) a detailed conceptualization of the merge process of topic maps, considering the possible compatibilization levels and the applicability of this technology to the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in implementing topic maps based on faceted structures.
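    A minimal sketch of the merge property exploited here: topics from two maps that share a subject identifier collapse into one topic whose associations are unioned. The data structures and identifiers are invented for illustration and simplify the actual Topic Maps merge rules considerably:

```python
def merge_topic_maps(map_a, map_b):
    # Topics keyed by subject identifier; merging unions their associations,
    # so equal identifiers collapse into a single topic.
    merged = {}
    for tm in (map_a, map_b):
        for subject_id, associations in tm.items():
            merged.setdefault(subject_id, set()).update(associations)
    return merged

map_a = {"facet:indexing": {("indexing", "part-of", "retrieval")}}
map_b = {"facet:indexing": {("indexing", "uses", "thesaurus")}}
print(merge_topic_maps(map_a, map_b))
# {'facet:indexing': {('indexing', 'part-of', 'retrieval'),
#                     ('indexing', 'uses', 'thesaurus')}}
```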
    Date
    22. 2.2013 11:39:23

Languages

  • e 333
  • d 16
  • pt 4
  • f 1
  • sp 1

Types

  • a 263
  • el 105
  • m 24
  • x 15
  • s 12
  • n 8
  • p 2
  • r 2
  • A 1
  • EL 1
