Search (252 results, page 1 of 13)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.26
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.24
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
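    Example (editorial sketch)
    A minimal Python sketch of the bag-of-entities idea described in the abstract: documents are represented by their entity annotations and ranked by entity overlap with the query. This is not the author's implementation; the entity names, toy annotations and overlap scoring are assumptions for illustration only.
      from collections import Counter

      # Toy entity annotations; in the thesis these come from automatic
      # entity linking against a knowledge base.
      DOCS = {
          "d1": ["Information_retrieval", "Bag-of-words_model", "Ranking"],
          "d2": ["Knowledge_base", "Entity_linking", "Information_retrieval"],
      }

      def bag_of_entities(entities):
          """Represent a text as a multiset of its linked entities."""
          return Counter(entities)

      def rank(query_entities, docs):
          """Score documents by entity overlap with the query (illustrative)."""
          q = bag_of_entities(query_entities)
          scores = {doc_id: sum((q & bag_of_entities(ents)).values())
                    for doc_id, ents in docs.items()}
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      print(rank(["Information_retrieval", "Knowledge_base"], DOCS))
      # -> [('d2', 2), ('d1', 1)]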
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.18
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, and not its representation). This leads to a very low usefulness of the retrieval results for a user's task at hand. In the last ten years ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query, makes it necessary to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of a user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the possibilities to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly are key issues for realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Si, L.; Zhou, J.: Ontology and linked data of Chinese great sites information resources from users' perspective (2022) 0.07
    Abstract
    Great Sites are closely related to residents' lives and to urban and rural development. In the process of rapid urbanization in China, the protection and utilization of Great Sites are facing unprecedented pressure. Effective knowledge organization of Great Sites with ontology and linked data is a prerequisite for their protection and utilization. In this paper, an interview study is conducted to understand users' awareness of Great Sites and to build a user-centered ontology. In designing the Great Site ontology, firstly, the scope of Great Sites is determined. Secondly, CIDOC-CRM and the OWL-Time Ontology are reused, combining the results of literature research and user interviews. Thirdly, the top-level structure and the specific instances are determined to extract knowledge concepts of Great Sites. Fourthly, these are transformed into classes, data properties and object properties of the Great Site ontology. Later, based on linked data technology and taking the Great Sites in the Xi'an area as an example, this paper uses D2RQ to publish the linked data set of Great Site knowledge and make it openly available for sharing. Semantic services such as semantic annotation, semantic retrieval and reasoning are provided based on the ontology.
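    Example (editorial sketch)
    The paper publishes its data with D2RQ, which maps a relational database to RDF through a declarative mapping file. The sketch below does not reproduce that setup; it uses the rdflib library instead to show what a published Great Site resource could look like as linked data. The namespace, class and property names are invented; the project's actual vocabulary reuses CIDOC-CRM and the OWL-Time Ontology.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS

      # Hypothetical namespace; the project's real URIs are not given here.
      GS = Namespace("http://example.org/great-sites/")

      g = Graph()
      g.bind("gs", GS)

      site = GS["DamingPalaceSite"]  # an example Great Site in the Xi'an area
      g.add((site, RDF.type, GS.GreatSite))
      g.add((site, RDFS.label, Literal("Daming Palace Site", lang="en")))
      g.add((site, GS.locatedIn, Literal("Xi'an")))

      print(g.serialize(format="turtle"))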
  5. Kiren, T.: ¬A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.05
    Abstract
    Indexing plays a vital role in information retrieval. With the availability of huge volumes of information, it has become necessary to index information in a way that makes it easier for end users to find what they want efficiently and accurately. Keyword-based indexing uses words as indexing terms; it is not capable of capturing the implicit relations among terms or the semantics of the words in the document. To eliminate this limitation, ontology-based indexing came into existence, which allows semantics-based indexing to solve complex and indirect user queries. Ontologies are used for document indexing, which allows semantics-based information retrieval. Either existing ontologies or ones constructed from scratch are used for indexing at present. Constructing ontologies from scratch is a labor-intensive task and requires extensive domain knowledge, whereas use of an existing ontology may leave some important concepts in documents un-annotated. Using multiple ontologies can overcome the problem of missing concepts to a great extent, but it is difficult to manage multiple ontologies (whose developers change them over time), and ontology heterogeneity also arises because the ontologies are constructed by different developers. One possible solution to the problems of managing multiple ontologies and building from scratch is to use modular ontologies for indexing.
    Modular ontologies are built in a modular manner by combining modules from multiple relevant ontologies. Ontology heterogeneity also arises during modular ontology construction, because multiple ontologies are being dealt with during this process. Ontologies need to be aligned before using them for modular ontology construction. The existing approaches for ontology alignment compare all the concepts of each ontology to be aligned and are hence not optimized in terms of time and search space utilization. A new indexing technique based on modular ontology is proposed. An efficient ontology alignment technique is proposed to solve the heterogeneity problem during the construction of the modular ontology. Results are satisfactory, as precision and recall are improved by 8% and 10%, respectively. The values of Pearson's correlation coefficient for degree of similarity, time, search space requirement, precision and recall are close to 1, which shows that the results are significant. Further research can be carried out on using the modular-ontology-based indexing technique for multimedia information retrieval and biomedical information retrieval.
    Date
    20. 1.2015 18:30:22
  6. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.04
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by Semantic Web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the great many existing classification efforts in the framework of the Semantic Web.
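    Example (editorial sketch)
    As a concrete illustration of what the guide describes, the sketch below builds a tiny two-concept scheme in SKOS with the rdflib library and serializes it as Turtle. The scheme URI, notations and labels are invented for the example and do not come from the guide.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/scheme/")  # hypothetical namespace

      g = Graph()
      g.bind("skos", SKOS)

      scheme = EX["myClassification"]
      g.add((scheme, RDF.type, SKOS.ConceptScheme))

      top = EX["600"]
      g.add((top, RDF.type, SKOS.Concept))
      g.add((top, SKOS.prefLabel, Literal("Technology", lang="en")))
      g.add((top, SKOS.notation, Literal("600")))
      g.add((top, SKOS.topConceptOf, scheme))

      child = EX["620"]
      g.add((child, RDF.type, SKOS.Concept))
      g.add((child, SKOS.prefLabel, Literal("Engineering", lang="en")))
      g.add((child, SKOS.broader, top))  # hierarchy expressed via skos:broader
      g.add((child, SKOS.inScheme, scheme))

      print(g.serialize(format="turtle"))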
  7. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.03
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science, making qualitative research more transparent, enhancing the sharing of coding schemas and supporting the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers by the interviewees, which need to be balanced; second, this approach takes more time due to interview planning and analysis. Practical implications The implication of the paper is, in the long run, to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design using mainly interviews to create an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  8. Wei, W.; Liu, Y.-P.; Wei, L-R.: Feature-level sentiment analysis based on rules and fine-grained domain ontology (2020) 0.03
    Abstract
    Mining product reviews and sentiment analysis are of great significance, whether for academic research purposes or for optimizing business strategies. We propose a feature-level sentiment analysis framework based on rules parsing and a fine-grained domain ontology for Chinese reviews. The fine-grained ontology is used to describe synonymous expressions of product features, which are reflected in word changes in online reviews. First, a semiautomatic construction method is developed by using Word2Vec for the fine-grained ontology. Then, feature-level sentiment analysis that combines rules parsing and the fine-grained domain ontology is conducted to extract explicit and implicit features from product reviews. Finally, a domain sentiment dictionary and a context sentiment dictionary are established to identify sentiment polarities for the extracted feature-sentiment combinations. An experiment is conducted on the basis of product reviews crawled from Chinese e-commerce websites. The results demonstrate the effectiveness of our approach.
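    Example (editorial sketch)
    A minimal sketch of the semiautomatic construction step described above, using gensim's Word2Vec to propose candidate synonymous feature expressions for curation. The toy English tokens stand in for the paper's Chinese review corpus, and all parameter values are illustrative.
      from gensim.models import Word2Vec

      # Tokenized review sentences; a real run would use a large crawled corpus.
      sentences = [
          ["screen", "display", "bright"],
          ["display", "resolution", "sharp"],
          ["battery", "lasts", "long"],
          ["screen", "resolution", "great"],
      ]

      # Train a small model (gensim >= 4.0 uses the `vector_size` argument).
      model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

      # Nearest neighbours suggest candidate synonymous feature expressions,
      # which a curator can confirm before adding them to the ontology.
      print(model.wv.most_similar("screen", topn=3))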
  9. Lim, S.C.J.; Liu, Y.; Lee, W.B.: ¬A methodology for building a semantically annotated multi-faceted ontology for product family modelling (2011) 0.02
    Abstract
    Product family design is one of the prevailing approaches to realizing mass customization. With the increasing number of product offerings targeted at different market segments, the issue of information management in product family design, i.e. the efficient and effective storage, sharing and timely retrieval of design information, has become more complicated and challenging. Product family modelling schemas reported in the literature generally stress the component aspects of a product family and its analysis, with a limited capability to model complex inter-relations between physical components and other required information in different semantic orientations, such as manufacturing, material and marketing. To tackle this problem, ontology-based representation has been identified as a promising solution for redesigning product platforms, especially in a semantically rich environment. However, ontology development in design engineering demands a great deal of time and human effort to process complex information. When a large variety of products is available, particularly in the consumer market, a more efficient method for building a product family ontology that incorporates multi-faceted semantic information is therefore highly desirable. In this study, we propose a methodology for building a semantically annotated multi-faceted ontology for product family modelling that is able to automatically suggest semantically related annotations based on the design and manufacturing repository. The six steps of building such an ontology: formation of the product family taxonomy; extraction of entities; faceted unit generation and concept identification; facet modelling and semantic annotation; formation of a semantically annotated multi-faceted product family ontology (MFPFO); and ontology validation and evaluation, are discussed in detail. Using a family of laptop computers as an illustrative example, we demonstrate how our methodology can be deployed step by step to create a semantically annotated MFPFO. Finally, we briefly discuss future research issues as well as interesting applications that can be further pursued based on the MFPFO developed.
  10. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.02
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor do I see that its current state and conversion already result in a "rich" OWL ontology. What does "rich" mean here? According to my view there is a great quantity of concepts and links but a very poor description logic structure which enables inferences. And what does the following really mean, which is said a few lines previously: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
    According to my findings several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of concepts which represent thesaurus main entries. In other words: the authors claim to already have a "description logic based nomenclature" where there is not yet one which deserves that title, by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: what decisions were made, and are they exemplary, can they be recommended as "the best way"? I raise strong doubts with respect to that, and I miss more profound discussions of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style more or less addressing students, but as I am myself a learner, especially in the field of medical knowledge representation, I do not speak "ex cathedra".
  11. Noy, N.F.: Knowledge representation for intelligent information retrieval in experimental sciences (1997) 0.02
    Abstract
    More and more information is available on-line every day. The greater the amount of on-line information, the greater the demand for tools that process and disseminate this information. Processing electronic information in the form of text and answering users' queries about that information intelligently is one of the great challenges in natural language processing and information retrieval. The research presented in this talk is centered on the latter of these two tasks: intelligent information retrieval. In order for information to be retrieved, it first needs to be formalized in a database or knowledge base. The ontology for this formalization and assumptions it is based on are crucial to successful intelligent information retrieval. We have concentrated our effort on developing an ontology for representing knowledge in the domains of experimental sciences, molecular biology in particular. We show that existing ontological models cannot be readily applied to represent this domain adequately. For example, the fundamental notion of ontology design that every "real" object is defined as an instance of a category seems incompatible with the universe where objects can change their category as a result of experimental procedures. Another important problem is representing complex structures such as DNA, mixtures, populations of molecules, etc., that are very common in molecular biology. We present extensions that need to be made to an ontology to cover these issues: the representation of transformations that change the structure and/or category of their participants, and the component relations and spatial structures of complex objects. We demonstrate examples of how the proposed representations can be used to improve the quality and completeness of answers to user queries; discuss techniques for evaluating ontologies and show a prototype of an Information Retrieval System that we developed.
  12. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.02
    Abstract
    Purpose - The purpose of this paper is to improve concept-based search by incorporating structural ontological information such as concepts and relations. Generally, semantics-based information retrieval aims to identify relevant information based on the meanings of the query terms or on the context of the terms, and the performance of semantic information retrieval is evaluated through the standard measures of precision and recall. Higher precision means that more of the retrieved documents are (meaningfully) relevant, while lower recall means less coverage of the concepts. Design/methodology/approach - In this paper, the authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index. The index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, in our approach we focus on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of ontology-based search. The aforementioned tasks make use of ontological concepts and the relations existing between those concepts so as to obtain semantically more relevant search results for a given query. Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of concepts that are populated in the index. Here, we introduce a new measure, called the index enhancement measure, to estimate the coverage of ontological concepts being indexed. We have evaluated the ontology-based search for the tourism domain with tourism documents and a tourism-specific ontology. The comparison of search results based on the use of the ontology with and without query expansion is examined to estimate the efficiency of the proposed query expansion task. The ranking is compared with the ORank system to evaluate the performance of our ontology-based search. From these analyses, the ontology-based search shows better recall when compared to other concept-based search systems. The mean average precision of the ontology-based search is found to be 0.79 and the recall 0.65; the ORank system has a mean average precision of 0.62 and a recall of 0.51, while the concept-based search has a mean average precision of 0.56 and a recall of 0.42. Practical implications - When a concept is not present in the domain-specific ontology, it cannot be indexed. When the given query term is not available in the ontology, term-based results are retrieved. Originality/value - In addition to super- and sub-concepts, we incorporate the concepts present at the same level (siblings) into the ontological index. The structural information from the ontology is used for the query expansion. The ranking of the documents depends on the type of the query (single-concept queries, multiple-concept queries and concept-with-relation queries) and the ontological relations that exist in the query and the documents. With this ontological structural information, the search results showed better coverage of concepts with respect to the query.
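    Example (editorial sketch)
    A small Python sketch of the index structure described above, using an invented toy tourism hierarchy: each concept's entry records its super- and sub-concepts, as in Kohler et al.'s index, plus the siblings that this paper adds.
      # Child -> parent relation for a toy tourism concept hierarchy.
      PARENT = {
          "Hotel": "Accommodation",
          "Hostel": "Accommodation",
          "Campsite": "Accommodation",
          "Museum": "Attraction",
      }

      def children(concept):
          return [c for c, p in PARENT.items() if p == concept]

      def index_entry(concept):
          parent = PARENT.get(concept)
          siblings = [c for c in children(parent) if c != concept] if parent else []
          return {
              "super": [parent] if parent else [],
              "sub": children(concept),
              "siblings": siblings,  # the addition proposed in the paper
          }

      print(index_entry("Hotel"))
      # -> {'super': ['Accommodation'], 'sub': [], 'siblings': ['Hostel', 'Campsite']}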
    Date
    20. 1.2015 18:30:22
  13. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: ¬A deductive data model for query expansion (1996) 0.02
    Abstract
    We present a deductive data model for concept-based query expansion. It is based on three abstraction levels: the conceptual, expression and occurrence levels. Concepts and relationships among them are represented at the conceptual level. The expression level represents natural language expressions for concepts. Each expression has one or more matching models at the occurrence level. Each model specifies the matching of the expression in database indices built in varying ways. The data model supports a concept-based query expansion and formulation tool, the ExpansionTool, for environments providing heterogeneous IR systems. Expansion is controlled by adjustable matching reliability.
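    Example (editorial sketch)
    A hedged Python sketch of the three abstraction levels, with a toy expansion routine that collects expressions whose matching models meet an adjustable reliability threshold. All class and field names are editorial assumptions, not the authors' schema.
      from dataclasses import dataclass, field

      @dataclass
      class MatchingModel:        # occurrence level
          index_type: str         # e.g. a stemmed or exact-match index
          reliability: float      # adjustable matching reliability

      @dataclass
      class Expression:           # expression level
          text: str               # natural-language expression for a concept
          models: list = field(default_factory=list)

      @dataclass
      class Concept:              # conceptual level
          name: str
          narrower: list = field(default_factory=list)
          expressions: list = field(default_factory=list)

      def expand(concept, min_reliability=0.5):
          """Collect expressions of a concept and its narrower concepts whose
          matching models meet the reliability threshold."""
          terms = [e.text for e in concept.expressions
                   if any(m.reliability >= min_reliability for m in e.models)]
          for sub in concept.narrower:
              terms.extend(expand(sub, min_reliability))
          return terms

      heart = Concept("heart disease",
                      expressions=[Expression("cardiac disease",
                                              [MatchingModel("stemmed", 0.9)])])
      print(expand(heart))  # -> ['cardiac disease']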
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al.
  14. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.01
    Abstract
    The purpose of this work is to develop an ontology-based framework for developing an information retrieval system to cater to specific queries of users. For creating such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility. This becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model concerning the domain of brain tumours. Our attempt has been to bridge library and information science and computer science, which itself involved an experimental approach. It was discovered that a faceted approach is really enduring, as it helps in the achievement of properties like navigation, exploration and faceted browsing. Computer-based brain tumour ontology supports the work of researchers towards gathering information on brain tumour research and allows users across the world to intelligently access new scientific information quickly and efficiently.
    Date
    12. 3.2016 13:21:22
  15. Biagetti, M.T.: Ontologies as knowledge organization systems (2021) 0.01
    Abstract
    This contribution presents the principal features of ontologies, drawing special attention to the comparison between ontologies and the different kinds of knowledge organization systems (KOS). The focus is on the semantic richness exhibited by ontologies, which allows the creation of a great number of relationships between terms. That establishes ontologies as the most evolved type of KOS. The concepts of "conceptualization" and "formalization" and the key components of ontologies are described and discussed, along with upper and domain ontologies and special typologies, such as bibliographical ontologies and biomedical ontologies. The use of ontologies in the digital libraries environment, where they have replaced thesauri for query expansion in searching, and the role they are playing in the Semantic Web, especially for semantic interoperability, are sketched.
  16. Deokattey, S.; Neelameghan, A.; Kumar, V.: ¬A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.01
    Abstract
    A method to develop a prototype domain ontology has been described. The domain selected for the study is Accelerator Driven Systems. This is a multidisciplinary and interdisciplinary subject comprising Nuclear Physics, Nuclear and Reactor Engineering, Reactor Fuels and Radioactive Waste Management. Since Accelerator Driven Systems is a vast topic, select areas in it were singled out for the study. Both qualitative and quantitative methods such as Content analysis, Facet analysis and Clustering were used, to develop the web-based model.
    Date
    22. 7.2010 19:41:16
  17. Madalli, D.P.; Balaji, B.P.; Sarangi, A.K.: Music domain analysis for building faceted ontological representation (2014) 0.01
    Abstract
    This paper describes how to construct faceted ontologies for domain modeling. Building upon the faceted theory of S.R. Ranganathan (1967), the paper addresses the faceted classification approach applied to building domain ontologies. As classificatory ontologies are employed to represent the relationships of entities and objects on the web, the faceted approach helps to analyze domain representation in an effective way for modeling. Based on this perspective, an ontology of the music domain has been analyzed that would serve as a case study.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  18. Eito-Brun, R.: Ontologies and the exchange of technical information : building a knowledge repository based on ECSS standards (2014) 0.01
    Abstract
    The development of complex projects in the aerospace industry is based on the collaboration of geographically distributed teams and companies. In this context, the need to share different types of data and information is a key factor in assuring the successful execution of projects. For European projects, the ECSS standards provide a normative framework that specifies, among other requirements, the different document types, information items, and artifacts that need to be generated. The specifications of these information items are usually incorporated as annexes to the different ECSS standards, and they state the intended purpose, scope, and structure of the documents and information items. In these standards, documents or deliverables should be considered not as independent items but as the result of packaging different information artifacts for delivery between the involved parties. Successful information integration and knowledge exchange cannot be based exclusively on the conceptual definition of information types; it also requires methods and techniques for serializing and exchanging these documents and artifacts. This area is not covered by the ECSS standards, and defining such data schemas would create opportunities to improve collaboration processes among companies.

    This paper describes the development of an OWL-based ontology to manage the different artifacts and information items requested in the European Space Agency (ESA) ECSS standards for software development. The ECSS set of standards is the main reference for aerospace projects in Europe; in addition to engineering and managerial requirements, it provides a set of DRDs (Document Requirements Documents) with the structure of the different documents and records necessary to manage projects and describe intermediate information products and final deliverables. Information integration is a must-have in aerospace projects, where different players need to collaborate and share data about requirements, design elements, problems, etc. during the product life cycle. The proposed ontology provides the basis for building advanced information systems in which information coming from different companies and institutions can be integrated into a coherent set of related data. It also provides a conceptual framework for developing interfaces and gateways between the different tools and information systems used by the players in aerospace projects.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
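    A minimal sketch of how an OWL ontology along these lines might be declared with the rdflib library is given below. The namespace, class, and property names are hypothetical illustrations, not taken from the paper or from the ECSS DRDs, and the structure only mirrors the abstract's description of documents as packages of information artifacts.

      # Hypothetical sketch of an OWL ontology for ECSS-style document items,
      # built with rdflib; all names are illustrative only.
      from rdflib import Graph, Namespace, RDF, RDFS
      from rdflib.namespace import OWL

      ECSS = Namespace("http://example.org/ecss#")  # hypothetical namespace
      g = Graph()
      g.bind("ecss", ECSS)

      # Documents are packages of information artifacts.
      for cls in ("InformationItem", "Document", "DRD"):
          g.add((ECSS[cls], RDF.type, OWL.Class))
      g.add((ECSS.Document, RDFS.subClassOf, ECSS.InformationItem))

      g.add((ECSS.packages, RDF.type, OWL.ObjectProperty))
      g.add((ECSS.packages, RDFS.domain, ECSS.Document))
      g.add((ECSS.packages, RDFS.range, ECSS.InformationItem))

      # Each document conforms to the DRD that prescribes its structure.
      g.add((ECSS.conformsTo, RDF.type, OWL.ObjectProperty))
      g.add((ECSS.conformsTo, RDFS.domain, ECSS.Document))
      g.add((ECSS.conformsTo, RDFS.range, ECSS.DRD))

      print(g.serialize(format="turtle"))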
  19. Hannech, A.: Système de recherche d'information étendue basé sur une projection multi-espaces (2018) 0.01
    0.01169531 = product of:
      0.040933583 = sum of:
        0.013570276 = weight(_text_:based in 4472) [ClassicSimilarity], result of:
          0.013570276 = score(doc=4472,freq=6.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.11531715 = fieldWeight in 4472, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.015625 = fieldNorm(doc=4472)
        0.027363306 = weight(_text_:great in 4472) [ClassicSimilarity], result of:
          0.027363306 = score(doc=4472,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.12442382 = fieldWeight in 4472, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.015625 = fieldNorm(doc=4472)
      0.2857143 = coord(2/7)
    
    Abstract
    Since its appearance in the early 1990s, the World Wide Web (WWW, or Web) has provided universal access to knowledge, and the world of information has witnessed a great revolution (the digital revolution). The Web quickly became very popular, and the amount and diversity of the data it contains have made it the largest and most comprehensive database and knowledge base. However, the considerable growth and evolution of these data raise important problems for users, in particular for accessing the documents most relevant to their search queries. To cope with this exponential explosion of data volume and to facilitate access by users, information retrieval systems (IRSs) offer various models for representing and retrieving web documents. Traditional IRSs index and retrieve these documents with simple keywords that are not semantically linked, which limits the relevance and the ease of exploration of the results. To overcome these limitations, existing techniques enrich documents with external keywords drawn from different sources. However, these systems still suffer from limitations related to how those enrichment sources are exploited: when the different sources are used in a way that the system cannot distinguish, the flexibility of the exploration models that can be applied to the returned results is limited. Users then feel lost among the results and are forced to filter them manually to select the relevant information. If they want to go further, they must reformulate and narrow their search queries until they reach the documents that best meet their expectations. Thus, even when the systems manage to find more relevant results, their presentation remains problematic. In order to target research toward more user-specific information needs and to improve the relevance and exploration of results, advanced IRSs adopt data personalization techniques that assume the user's current search is directly related to his profile and/or his previous browsing and search experiences.
    However, this assumption does not hold in all cases: the user's needs evolve over time and can move away from the previous interests stored in his profile. In other cases, the profile may be misused to extract or infer new information needs. This problem is much more pronounced with ambiguous queries. When multiple points of interest linked to a search query are identified in the user's profile, the system is unable to select the relevant data from that profile to answer the query, which directly affects the quality of the results provided to the user. To overcome some of these limitations, this thesis develops techniques aimed mainly at improving the relevance of the results of current IRSs and at facilitating the exploration of large document collections. To do this, we propose a solution based on a new concept and model of indexing and information retrieval called multi-space projection. This proposal exploits different categories of semantic and social information that enrich the representation of documents and search queries along several dimensions of interpretation. The originality of this representation lies in its ability to distinguish between the different interpretations used to describe and search for documents. This gives better visibility into the returned results and provides greater flexibility in search and exploration, giving the user the ability to navigate one or more views of the data that interest him most. In addition, the proposed multidimensional universes for document description and query interpretation help to improve the relevance of the results by offering a diversity of search and exploration paths that meet the varied needs of different users. The study also addresses personalized search and the problems caused by the evolution of the user's information needs: when the user's profile is used by our system, a technique is proposed to identify the interests most representative of his current needs, based on the combination of three influential factors: the contextual, frequency, and temporal factors of the data. Finally, the ability of users to interact, exchange ideas and opinions, and form social networks on the Web has led systems to take into account both the interactions between users and their social roles in the system. This social information is discussed and integrated into this research work, and its impact on and integration into the IR process are studied in order to improve the relevance of the results.
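    The abstract names three factors for selecting the most representative profile interests: contextual, frequency, and temporal. A minimal Python sketch of such a weighted combination follows; the weights, the exponential recency decay, and the sample data are hypothetical, not the author's actual formula.

      import math

      # Hypothetical sketch: rank profile interests by a weighted mix of
      # contextual similarity, usage frequency, and recency.
      W_CONTEXT, W_FREQ, W_TIME = 0.5, 0.3, 0.2  # illustrative weights
      HALF_LIFE_DAYS = 30.0                      # illustrative decay constant

      def interest_score(context_sim, freq, max_freq, days_since_use):
          # Recency decays exponentially with a 30-day half-life.
          recency = math.exp(-math.log(2) * days_since_use / HALF_LIFE_DAYS)
          return (W_CONTEXT * context_sim
                  + W_FREQ * (freq / max_freq)
                  + W_TIME * recency)

      interests = [  # (name, similarity to current query context, freq, days ago)
          ("information retrieval", 0.9, 4, 90),
          ("semantic web", 0.7, 12, 2),
          ("photography", 0.2, 30, 1),
      ]
      max_freq = max(f for _, _, f, _ in interests)
      ranked = sorted(interests,
                      key=lambda i: interest_score(i[1], i[2], max_freq, i[3]),
                      reverse=True)
      print([name for name, *_ in ranked])

    An interest that matches the current query context can outrank one that is merely frequent or recent, which is the behaviour the thesis argues for when profiles drift away from current needs.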
  20. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.01
    0.0116941 = product of:
      0.040929347 = sum of:
        0.02770021 = weight(_text_:based in 4607) [ClassicSimilarity], result of:
          0.02770021 = score(doc=4607,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23539014 = fieldWeight in 4607, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4607)
        0.013229139 = product of:
          0.026458278 = sum of:
            0.026458278 = weight(_text_:22 in 4607) [ClassicSimilarity], result of:
              0.026458278 = score(doc=4607,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.19345059 = fieldWeight in 4607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4607)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, because proper metadata annotation cannot be assigned fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss u.a
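    A minimal sketch of the mediator step the abstract describes is given below: imported items receive placeholder ("dummy") metadata so the environment can process them without a domain-model update. The class, field, and annotation names are hypothetical, not taken from the paper.

      # Hypothetical sketch of a mediator that assigns dummy metadata
      # annotations to imported content items; names are illustrative only.
      from dataclasses import dataclass, field

      @dataclass
      class ContentItem:
          item_id: str
          title: str
          metadata: dict = field(default_factory=dict)

      DUMMY_ANNOTATION = {
          "concept": "UnclassifiedConcept",  # placeholder node in the KB
          "annotated_by": "mediator",
          "needs_review": True,              # flag for later manual annotation
      }

      def mediate_import(items):
          """Attach dummy annotations to items that arrive without metadata."""
          for item in items:
              if not item.metadata:
                  item.metadata = dict(DUMMY_ANNOTATION)
          return items

      imported = mediate_import([ContentItem("ex1", "Imported lesson on graphs")])
      print(imported[0].metadata)

    The needs_review flag mirrors the paper's observation that such items lose some functionality until proper annotations are supplied.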

Languages

  • e 231
  • d 14
  • f 1
  • pt 1
  • sp 1

Types

  • a 193
  • el 67
  • x 14
  • m 12
  • s 5
  • n 4
  • p 2
  • A 1
  • EL 1
  • r 1

Subjects