Search (69 results, page 1 of 4)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.11
    0.10943339 = product of:
      0.27358347 = sum of:
        0.06839587 = product of:
          0.20518759 = sum of:
            0.20518759 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.20518759 = score(doc=400,freq=2.0), product of:
                0.36509076 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043063257 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.20518759 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.20518759 = score(doc=400,freq=2.0), product of:
            0.36509076 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.043063257 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.4 = coord(2/5)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
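The relevance figure after each title (0.11, 0.10, ...) comes from Lucene's ClassicSimilarity, a tf-idf model with coordination factors, as the explain tree for the first entry shows. A minimal sketch that recomputes that first score from the numbers in the tree (the function name is our own; only the constants are taken from the output):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """tf-idf weight of one query term in one field
    (Lucene ClassicSimilarity: queryWeight * fieldWeight)."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # idf(docFreq, maxDocs) * queryNorm
    field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# Constants from the explain tree of entry 1 (doc 400); both terms share them.
idf, query_norm, field_norm = 8.478011, 0.043063257, 0.046875
w = term_score(2.0, idf, query_norm, field_norm)

# One term is scaled by coord(1/3); the two-term sum by coord(2/5).
score = (w * (1 / 3) + w) * (2 / 5)
print(round(score, 8))  # close to the displayed 0.10943339
```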
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.10
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.07
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Si, L.; Zhou, J.: Ontology and linked data of Chinese great sites information resources from users' perspective (2022) 0.05
    
    Abstract
    Great Sites are closely related to residents' lives and to urban and rural development. In the process of rapid urbanization in China, the protection and utilization of Great Sites face unprecedented pressure. Effective knowledge organization of Great Sites with ontology and linked data is a prerequisite for their protection and utilization. In this paper, interviews are conducted to understand users' awareness of Great Sites and to build a user-centered ontology. In designing the Great Site ontology, first, the scope of Great Sites is determined. Second, CIDOC CRM and the OWL-Time Ontology are reused, combining the results of literature research and user interviews. Third, the top-level structure and the specific instances are determined to extract knowledge concepts of Great Sites. Fourth, these are transformed into classes, data properties and object properties of the Great Site ontology. Then, based on linked data technology and taking the Great Sites in the Xi'an area as an example, this paper uses D2RQ to publish the knowledge of the Great Sites as a linked data set, opening it up for sharing. Semantic services such as semantic annotation, semantic retrieval and reasoning are provided based on the ontology.
  5. Kiren, T.: A clustering-based indexing technique of modularized ontologies for information retrieval (2017) 0.03
    
    Abstract
    Indexing plays a vital role in Information Retrieval. With the availability of a huge volume of information, it has become necessary to index the information in such a way as to make it easier for end users to find the information they want efficiently and accurately. Keyword-based indexing uses words as indexing terms. It is not capable of capturing the implicit relations among terms or the semantics of the words in the document. To eliminate this limitation, ontology-based indexing came into existence, which allows semantics-based indexing to solve complex and indirect user queries. Ontologies are used for document indexing, which allows semantics-based information retrieval. At present, either existing ontologies or ones constructed from scratch are used for indexing. Constructing ontologies from scratch is a labor-intensive task and requires extensive domain knowledge, whereas the use of an existing ontology may leave some important concepts in documents un-annotated. Using multiple ontologies can overcome the problem of missed concepts to a great extent, but it is difficult to manage multiple ontologies (which are changed over time by their developers), and ontology heterogeneity also arises because the ontologies are constructed by different developers. One possible solution to managing multiple ontologies and building from scratch is to use modular ontologies for indexing.
    Date
    20. 1.2015 18:30:22
  6. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.03
    
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science: making qualitative research more transparent, and enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles to this approach: first, contradictory answers by the interviewees, which need to be balanced; second, the approach takes more time, due to interview planning and analysis. Practical implications The long-run implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value Several methods exist in ontology design that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design using mainly interviews to create an ontology for open science. The authors focus on close contact with researchers in order to build the ontology on the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  7. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.02
    
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by semantic web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the great many existing classification efforts in the framework of the Semantic Web.
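The recipe the guide outlines, with the scheme as a skos:ConceptScheme, each class as a skos:Concept, and hierarchy via skos:broader, can be sketched in a few lines. This is a dependency-free illustration; the base URI, notations and labels below are invented examples, not taken from the guide:

```python
def scheme_to_turtle(base, title, concepts):
    """Render a classification scheme as SKOS Turtle.

    concepts: {notation: (preferred_label, broader_notation_or_None)}
    """
    lines = [
        "@prefix skos: <http://www.w3.org/2004/02/skos/core#> .",
        "@prefix dct:  <http://purl.org/dc/terms/> .",
        "",
        f"<{base}> a skos:ConceptScheme ; dct:title \"{title}\" .",
    ]
    for notation, (label, broader) in concepts.items():
        lines.append(f"<{base}/{notation}> a skos:Concept ;")
        lines.append(f"    skos:notation \"{notation}\" ;")
        lines.append(f"    skos:prefLabel \"{label}\"@en ;")
        if broader:  # hierarchy is expressed with skos:broader
            lines.append(f"    skos:broader <{base}/{broader}> ;")
        lines.append(f"    skos:inScheme <{base}> .")
    return "\n".join(lines)

print(scheme_to_turtle(
    "http://example.org/scheme",   # hypothetical namespace
    "Demo classification",
    {"6": ("Technology", None), "62": ("Engineering", "6")},
))
```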
  8. Biagetti, M.T.: Ontologies as knowledge organization systems (2021) 0.02
    
    Abstract
    This contribution presents the principal features of ontologies, drawing special attention to the comparison between ontologies and the different kinds of knowledge organization systems (KOS). The focus is on the semantic richness exhibited by ontologies, which allows the creation of a great number of relationships between terms. That establishes ontologies as the most evolved type of KOS. The concepts of "conceptualization" and "formalization" and the key components of ontologies are described and discussed, along with upper and domain ontologies and special typologies, such as bibliographical ontologies and biomedical ontologies. The use of ontologies in the digital libraries environment, where they have replaced thesauri for query expansion in searching, and the role they are playing in the Semantic Web, especially for semantic interoperability, are sketched.
  9. Wei, W.; Liu, Y.-P.; Wei, L-R.: Feature-level sentiment analysis based on rules and fine-grained domain ontology (2020) 0.02
    
    Abstract
    Mining product reviews and sentiment analysis are of great significance, whether for academic research or for optimizing business strategies. We propose a feature-level sentiment analysis framework based on rule parsing and a fine-grained domain ontology for Chinese reviews. The fine-grained ontology is used to describe synonymous expressions of product features, which are reflected in word changes in online reviews. First, a semiautomatic construction method based on Word2Vec is developed for the fine-grained ontology. Then, feature-level sentiment analysis that combines rule parsing and the fine-grained domain ontology is conducted to extract explicit and implicit features from product reviews. Finally, a domain sentiment dictionary and a context sentiment dictionary are established to identify sentiment polarities for the extracted feature-sentiment combinations. An experiment is conducted on product reviews crawled from Chinese e-commerce websites. The results demonstrate the effectiveness of our approach.
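The pipeline the abstract describes, mapping synonymous surface forms to canonical features via the fine-grained ontology and then looking up polarity in a sentiment dictionary, can be sketched as follows. The feature names, synonyms, polarities and window rule are invented placeholders, not the authors' data or rules:

```python
# Fine-grained ontology: canonical feature -> synonymous expressions in reviews
FEATURE_ONTOLOGY = {
    "battery": {"battery", "charge"},
    "screen": {"screen", "display", "panel"},
}
# Domain sentiment dictionary: opinion word -> polarity
SENTIMENT_DICT = {"great": 1, "long": 1, "dim": -1, "poor": -1}

def extract_feature_sentiment(review_tokens):
    """Pair each explicit feature mention with a nearby opinion word."""
    surface_to_feature = {
        syn: feat for feat, syns in FEATURE_ONTOLOGY.items() for syn in syns
    }
    pairs = []
    for i, tok in enumerate(review_tokens):
        feat = surface_to_feature.get(tok)
        if feat is None:
            continue
        # toy rule: scan a +/-2 token window for a sentiment word
        for w in review_tokens[max(0, i - 2): i + 3]:
            if w in SENTIMENT_DICT:
                pairs.append((feat, SENTIMENT_DICT[w]))
                break
    return pairs

print(extract_feature_sentiment("the screen is dim but the battery is great".split()))
# -> [('screen', -1), ('battery', 1)]
```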
  10. Soshnikov, D.: ROMEO: an ontology-based multi-agent architecture for online information retrieval (2021) 0.02
    
    Abstract
    This paper describes an approach to path-finding in intelligent graphs, whose vertices are intelligent agents. A possible implementation of this approach is described, based on logical inference in a distributed frame hierarchy. The presented approach can be used for implementing distributed intelligent information systems that include automatic navigation and path generation in hypertext, which can be used, for example, in distance education, as well as for organizing intelligent web catalogues with flexible ontology-based information retrieval.
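The core mechanism, finding a path through a graph of pages or agents and materializing it as a navigation sequence, reduces to standard graph search. A minimal breadth-first sketch; the toy course-page graph is invented for illustration and stands in for the paper's agent vertices:

```python
from collections import deque

def find_path(graph, start, goal):
    """Breadth-first search: a shortest hop-count path from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

# Hypothetical hypertext of course pages (distance-education navigation)
pages = {
    "intro": ["logic", "frames"],
    "logic": ["inference"],
    "frames": ["inference", "ontologies"],
    "inference": ["ontologies"],
}
print(find_path(pages, "intro", "ontologies"))  # -> ['intro', 'frames', 'ontologies']
```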
  11. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.02
    
    Abstract
    Today's conventional search engines hardly provide content relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. Hence the need for semantic web search arises. Semantic web search (SWS) is an emerging area of web search that combines Natural Language Processing and Artificial Intelligence. The objective of the work presented here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not a mere keyword search: it sits one layer above what Google or any other search engine retrieves by analyzing just keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained will be accurate enough to satisfy the user's request, and the level of accuracy is enhanced because the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the most relevant links. For ranking, an algorithm is applied that fetches more apt results for the user query.
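The keyword-expansion step SIEU relies on, widening the user's terms with ontology neighbours before matching, can be sketched like this. The tiny university-domain ontology and documents are invented placeholders, not the SIEU knowledge base:

```python
# Toy university-domain ontology: term -> semantically related terms
ONTOLOGY = {
    "professor": {"faculty", "lecturer"},
    "course": {"subject", "module"},
}

def expand_query(terms):
    """Expand each query term with its ontology neighbours."""
    expanded = set()
    for t in terms:
        expanded.add(t)
        expanded |= ONTOLOGY.get(t, set())
    return expanded

def search(docs, query):
    """Rank documents by token overlap with the expanded query."""
    q = expand_query(query.lower().split())
    scored = [(len(q & set(d.lower().split())), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

docs = ["faculty list of the CS department", "canteen opening hours"]
print(search(docs, "professor"))  # matches via the expansion to "faculty"
```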
  12. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.01
    
    Abstract
    Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment on the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics.
  13. Qin, J.; Creticos, P.; Hsiao, W.Y.: Adaptive modeling of workforce domain knowledge (2006) 0.01
    
    Abstract
    Workforce development is a multidisciplinary domain in which policy, laws and regulations, social services, training and education, and information technology and systems are heavily involved. It is essential to have a semantic base accepted by the workforce development community for knowledge sharing and exchange. This paper describes how such a semantic base, the Workforce Open Knowledge Exchange (WOKE) Ontology, was built using the adaptive modeling approach. The focus of this paper is to address questions such as how ontology designers should extract and model concepts obtained from different sources and which methodologies are useful along the steps of ontology development. The paper proposes an "adaptive modeling" methodology framework and explains the methodology through examples and some lessons learned from the process of developing the WOKE ontology.
  14. Bold, N.; Kim, W.-J.; Yang, J.-D.: Converting object-based thesauri into XML Topic Maps (2010) 0.01
    
    Source
    2010 2nd International Conference on Education Technology and Computer (ICETC)
  15. Noy, N.F.: Knowledge representation for intelligent information retrieval in experimental sciences (1997) 0.01
    
    Abstract
    More and more information is available on-line every day. The greater the amount of on-line information, the greater the demand for tools that process and disseminate this information. Processing electronic information in the form of text and answering users' queries about that information intelligently is one of the great challenges in natural language processing and information retrieval. The research presented in this talk is centered on the latter of these two tasks: intelligent information retrieval. In order for information to be retrieved, it first needs to be formalized in a database or knowledge base. The ontology for this formalization and assumptions it is based on are crucial to successful intelligent information retrieval. We have concentrated our effort on developing an ontology for representing knowledge in the domains of experimental sciences, molecular biology in particular. We show that existing ontological models cannot be readily applied to represent this domain adequately. For example, the fundamental notion of ontology design that every "real" object is defined as an instance of a category seems incompatible with the universe where objects can change their category as a result of experimental procedures. Another important problem is representing complex structures such as DNA, mixtures, populations of molecules, etc., that are very common in molecular biology. We present extensions that need to be made to an ontology to cover these issues: the representation of transformations that change the structure and/or category of their participants, and the component relations and spatial structures of complex objects. We demonstrate examples of how the proposed representations can be used to improve the quality and completeness of answers to user queries; discuss techniques for evaluating ontologies and show a prototype of an Information Retrieval System that we developed.
  16. Lim, S.C.J.; Liu, Y.; Lee, W.B.: A methodology for building a semantically annotated multi-faceted ontology for product family modelling (2011) 0.01
    
    Abstract
    Product family design is one of the prevailing approaches to realizing mass customization. With the increasing number of product offerings targeted at different market segments, the issue of information management in product family design, related to the efficient and effective storage, sharing and timely retrieval of design information, has become more complicated and challenging. Product family modelling schemas reported in the literature generally stress the component aspects of a product family and its analysis, with a limited capability to model complex inter-relations between physical components and other required information in different semantic orientations, such as manufacturing, material and marketing. To tackle this problem, ontology-based representation has been identified as a promising solution for redesigning product platforms, especially in a semantically rich environment. However, ontology development in design engineering demands a great deal of time commitment and human effort to process complex information. When a large variety of products is available, particularly in the consumer market, a more efficient method for building a product family ontology incorporating multi-faceted semantic information is therefore highly desirable. In this study, we propose a methodology for building a semantically annotated multi-faceted ontology for product family modelling that is able to automatically suggest semantically related annotations based on the design and manufacturing repository. The six steps of building such an ontology are discussed in detail: formation of the product family taxonomy; extraction of entities; faceted unit generation and concept identification; facet modelling and semantic annotation; formation of a semantically annotated multi-faceted product family ontology (MFPFO); and ontology validation and evaluation. Using a family of laptop computers as an illustrative example, we demonstrate how our methodology can be deployed step by step to create a semantically annotated MFPFO. Finally, we briefly discuss future research issues as well as interesting applications that can be further pursued based on the MFPFO developed.
  17. Cao, N.; Sun, J.; Lin, Y.-R.; Gotz, D.; Liu, S.; Qu, H.: FacetAtlas : Multifaceted visualization for rich text corpora (2010) 0.01
    
    Abstract
    Documents in rich text corpora usually contain multiple facets of information. For example, an article about a specific disease often covers different facets such as symptom, treatment, cause, diagnosis, prognosis, and prevention. Thus, documents may be related to one another in different ways depending on the facet considered. Powerful search tools have been developed to help users locate lists of individual documents that are most related to specific keywords. However, there is a lack of effective analysis tools that reveal the multifaceted relations of documents within or across document clusters. In this paper, we present FacetAtlas, a multifaceted visualization technique for visually analyzing rich text corpora. FacetAtlas combines search technology with advanced visual analytical tools to convey both global and local patterns simultaneously. We describe several unique aspects of FacetAtlas, including (1) node cliques and multifaceted edges, (2) an optimized density map, (3) automated opacity pattern enhancement for highlighting visual patterns, and (4) interactive context switching between facets. In addition, we demonstrate the power of FacetAtlas through a case study targeting patient education in the health care domain. Our evaluation shows the benefits of this work, especially in support of complex multifaceted data analysis.
  18. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.01
    
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I see neither the thesaurus as a description logic based nomenclature, nor its current state and conversion as already resulting in a "rich" OWL ontology. What does "rich" mean here? In my view there is a great quantity of concepts and links but a very poor description logic structure to enable inferences. And what does the following, said a few lines earlier, really mean: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
  19. Reimer, U.; Brockhausen, P.; Lau, T.; Reich, J.R.: Ontology-based knowledge management at work : the Swiss life case studies (2004) 0.01
    
    Abstract
    This chapter describes two case studies conducted by the Swiss Life insurance group with the objective of proving the practical applicability and superiority of ontology-based knowledge management over classical approaches based on text retrieval technologies. The first case study, in the domain of skills management, uses manually constructed ontologies about skills, job functions and education. The purpose of the system is to support finding employees with certain skills. The ontologies are used to ensure that the user description of skills and the machine-held index of skills and people use the same vocabulary. The use of a shared vocabulary increases the performance of such a system significantly. The second case study aims at improving content-oriented access to passages of a 1000-page document about the International Accounting Standard on the corporate intranet. To this end, an ontology was automatically extracted from the document. It can be used to reformulate queries that turned out not to deliver the intended results. Since the ontology was automatically built, it is of a rather simple structure, consisting of weighted semantic associations between the relevant concepts in the document. We therefore call it a 'lightweight ontology'. The two case studies cover quite different aspects of using ontologies in knowledge management applications. Whereas in the second case study an ontology was automatically derived from a search space to improve information retrieval, in the first, skills management case study the ontology itself introduces a structured search space. In one case study we gathered experience in building an ontology manually, while the challenge of the other case study was automatic ontology creation. A number of the novel Semantic Web-based tools described elsewhere in this book were used to build the two systems, and both case studies described have led to projects to deploy live systems within Swiss Life.
  20. Hannech, A.: Système de recherche d'information étendue basé sur une projection multi-espaces (2018) 0.01
    
    Abstract
    Since its appearance in the early 1990s, the World Wide Web (WWW, or Web) has provided universal access to knowledge, and the world of information has witnessed a major revolution (the digital revolution). The Web quickly became very popular, and the amount and diversity of data it contains make it the largest and most comprehensive database and knowledge base. However, the considerable increase and evolution of these data raise important problems for users, in particular for accessing the documents most relevant to their search queries. In order to cope with this exponential explosion of data volume and to facilitate access by users, information retrieval systems (IRSs) offer various models for the representation and retrieval of web documents. Traditional IRSs use simple keywords, not semantically linked, to index and retrieve these documents. This creates limitations in terms of the relevance and ease of exploration of results. To overcome these limitations, existing techniques enrich documents by integrating external keywords from different sources. However, these systems still suffer from limitations related to how these sources of enrichment are exploited. When the different sources are used in a way that the system cannot distinguish them, this limits the flexibility of the exploration models that can be applied to the results returned by the system. Users then feel lost among these results and find themselves forced to filter them manually to select the relevant information. If they want to go further, they must reformulate and narrow their search queries until they reach the documents that best meet their expectations. Thus, even if the systems manage to find more relevant results, their presentation remains problematic.
In order to target retrieval to more user-specific information needs and to improve the relevance and exploration of search results, advanced IRSs adopt various personalization techniques, which assume that a user's current search is directly related to their profile and/or previous browsing and search experiences.
