Search (78 results, page 1 of 4)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.11
    0.10909095 = product of:
      0.27272737 = sum of:
        0.06818184 = product of:
          0.20454551 = sum of:
            0.20454551 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.20454551 = score(doc=400,freq=2.0), product of:
                0.36394832 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042928502 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.20454551 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.20454551 = score(doc=400,freq=2.0), product of:
            0.36394832 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042928502 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.4 = coord(2/5)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
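The score breakdowns shown for each hit follow Lucene's ClassicSimilarity (classic tf-idf) formula. As a minimal sketch, assuming the standard ClassicSimilarity formulation idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(freq), the numbers in the first explanation above can be reproduced:

```python
import math

# ClassicSimilarity building blocks (Lucene's classic tf-idf scoring)
def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    return math.sqrt(freq)

# Values taken from the explanation of hit 1 (the "3a" clause)
i = idf(24, 44218)                       # idf(docFreq=24, maxDocs=44218) ~ 8.478011
query_norm = 0.042928502
query_weight = i * query_norm            # ~ 0.36394832
field_weight = tf(2.0) * i * 0.046875    # tf * idf * fieldNorm ~ 0.56201804
score = query_weight * field_weight      # ~ 0.20454551
print(round(score, 6))
```

The remaining factors in the tree (coord(2/5), coord(1/3)) then scale the clause scores by the fraction of query clauses that matched.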
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.10
    0.095320776 = product of:
      0.23830193 = sum of:
        0.04545456 = product of:
          0.13636369 = sum of:
            0.13636369 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.13636369 = score(doc=5820,freq=2.0), product of:
                0.36394832 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042928502 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.19284737 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.19284737 = score(doc=5820,freq=4.0), product of:
            0.36394832 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042928502 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.4 = coord(2/5)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.07
    0.0727273 = product of:
      0.18181825 = sum of:
        0.04545456 = product of:
          0.13636369 = sum of:
            0.13636369 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.13636369 = score(doc=701,freq=2.0), product of:
                0.36394832 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042928502 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.13636369 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.13636369 = score(doc=701,freq=2.0), product of:
            0.36394832 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042928502 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Wei, W.; Liu, Y.-P.; Wei, L-R.: Feature-level sentiment analysis based on rules and fine-grained domain ontology (2020) 0.07
    0.06521659 = product of:
      0.16304147 = sum of:
        0.07281394 = weight(_text_:business in 5876) [ClassicSimilarity], result of:
          0.07281394 = score(doc=5876,freq=2.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.33532238 = fieldWeight in 5876, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.046875 = fieldNorm(doc=5876)
        0.09022752 = weight(_text_:great in 5876) [ClassicSimilarity], result of:
          0.09022752 = score(doc=5876,freq=2.0), product of:
            0.24172091 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042928502 = queryNorm
            0.37327147 = fieldWeight in 5876, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5876)
      0.4 = coord(2/5)
    
    Abstract
    Mining product reviews and sentiment analysis are of great significance, whether for academic research or for optimizing business strategies. We propose a feature-level sentiment analysis framework based on rule parsing and a fine-grained domain ontology for Chinese reviews. The fine-grained ontology is used to describe synonymous expressions of product features, which are reflected in word changes in online reviews. First, a semiautomatic construction method for the fine-grained ontology is developed using Word2Vec. Then, feature-level sentiment analysis that combines rule parsing and the fine-grained domain ontology is conducted to extract explicit and implicit features from product reviews. Finally, a domain sentiment dictionary and a context sentiment dictionary are established to identify sentiment polarities for the extracted feature-sentiment combinations. An experiment is conducted on product reviews crawled from Chinese e-commerce websites. The results demonstrate the effectiveness of our approach.
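As an illustration only (the feature ontology, sentiment dictionary, and example terms below are invented for this sketch, not the authors' actual resources), the final polarity-assignment step over extracted feature-sentiment pairs might look like:

```python
# Hypothetical domain sentiment dictionary: sentiment word -> polarity
SENTIMENT = {"sharp": 1, "blurry": -1, "fast": 1, "slow": -1}

# Hypothetical fine-grained ontology: surface form -> canonical product feature
FEATURE_ONTOLOGY = {"screen": "display", "display": "display", "shipping": "delivery"}

def feature_sentiments(pairs):
    """Map extracted (feature, sentiment word) pairs to (canonical feature, polarity)."""
    out = []
    for feature, word in pairs:
        canon = FEATURE_ONTOLOGY.get(feature, feature)  # normalize synonymous features
        out.append((canon, SENTIMENT.get(word, 0)))     # 0 = unknown/neutral polarity
    return out

print(feature_sentiments([("screen", "sharp"), ("shipping", "slow")]))
```

In the paper's pipeline, the ontology lookup would be built semiautomatically with Word2Vec rather than hand-written as here.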
  5. Si, L.; Zhou, J.: Ontology and linked data of Chinese great sites information resources from users' perspective (2022) 0.05
    0.049875136 = product of:
      0.24937569 = sum of:
        0.24937569 = weight(_text_:great in 1115) [ClassicSimilarity], result of:
          0.24937569 = score(doc=1115,freq=22.0), product of:
            0.24172091 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042928502 = queryNorm
            1.0316678 = fieldWeight in 1115, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1115)
      0.2 = coord(1/5)
    
    Abstract
    Great Sites are closely related to residents' lives and to urban and rural development. In the process of rapid urbanization in China, the protection and utilization of Great Sites face unprecedented pressure. Effective knowledge organization of Great Sites with ontology and linked data is a prerequisite for their protection and utilization. In this paper, interviews are conducted to understand users' awareness of Great Sites in order to build a user-centered ontology. To design the Great Site ontology, first, the scope of Great Sites is determined. Second, CIDOC-CRM and the OWL-Time Ontology are reused, combining the results of literature research and user interviews. Third, the top-level structure and specific instances are determined to extract knowledge concepts of Great Sites. Fourth, these are transformed into classes, data properties and object properties of the Great Site ontology. Then, based on linked data technology and taking the Great Sites in the Xi'an area as an example, this paper uses D2RQ to publish the linked data set of Great Site knowledge and make it open and shareable. Semantic services such as semantic annotation, semantic retrieval and reasoning are provided on the basis of the ontology.
  6. Baumer, C.; Reichenberger, K.: Business Semantics - Praxis und Perspektiven (2006) 0.04
    0.038834102 = product of:
      0.1941705 = sum of:
        0.1941705 = weight(_text_:business in 6020) [ClassicSimilarity], result of:
          0.1941705 = score(doc=6020,freq=8.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.894193 = fieldWeight in 6020, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.0625 = fieldNorm(doc=6020)
      0.2 = coord(1/5)
    
    Abstract
    The article introduces semantic technologies and offers insight into different directions of development. In particular, Business Semantics are presented and distinguished from the Semantic Web. The strengths of Business Semantics are illustrated with the practical examples of the Knowledge Portal and the "Knowledge Base" project of Wienerberger AG. In this way the requirements (what do enterprise applications need today?) and the capabilities of systems (what do Business Semantics offer?) are made concrete and set against each other.
  7. Kiren, T.: ¬A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.03
    0.028713647 = product of:
      0.07178412 = sum of:
        0.060151678 = weight(_text_:great in 4399) [ClassicSimilarity], result of:
          0.060151678 = score(doc=4399,freq=2.0), product of:
            0.24172091 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042928502 = queryNorm
            0.24884763 = fieldWeight in 4399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4399)
        0.011632439 = product of:
          0.023264877 = sum of:
            0.023264877 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
              0.023264877 = score(doc=4399,freq=2.0), product of:
                0.1503283 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042928502 = queryNorm
                0.15476047 = fieldWeight in 4399, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4399)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Indexing plays a vital role in information retrieval. With the availability of huge volumes of information, it has become necessary to index information in such a way as to make it easier for end users to find what they want efficiently and accurately. Keyword-based indexing uses words as indexing terms; it is not capable of capturing the implicit relations among terms or the semantics of the words in a document. To eliminate this limitation, ontology-based indexing came into existence, which allows semantics-based indexing and retrieval and can resolve complex and indirect user queries. Either existing ontologies or ones constructed from scratch are presently used for indexing. Constructing ontologies from scratch is a labor-intensive task that requires extensive domain knowledge, whereas use of a single existing ontology may leave some important concepts in documents un-annotated. Using multiple ontologies can overcome the problem of missing concepts to a great extent, but it is difficult to manage multiple ontologies (their developers change them over time), and ontology heterogeneity also arises when ontologies are built by different developers. One possible solution to managing multiple ontologies, while avoiding construction from scratch, is to use modular ontologies for indexing.
    Date
    20. 1.2015 18:30:22
  8. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.03
    0.028713647 = product of:
      0.07178412 = sum of:
        0.060151678 = weight(_text_:great in 179) [ClassicSimilarity], result of:
          0.060151678 = score(doc=179,freq=2.0), product of:
            0.24172091 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042928502 = queryNorm
            0.24884763 = fieldWeight in 179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=179)
        0.011632439 = product of:
          0.023264877 = sum of:
            0.023264877 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
              0.023264877 = score(doc=179,freq=2.0), product of:
                0.1503283 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042928502 = queryNorm
                0.15476047 = fieldWeight in 179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=179)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations, giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science: making qualitative research more transparent, enhancing the sharing of coding schemas, and supporting the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers from interviewees, which need to be balanced; second, the approach takes more time, due to interview planning and analysis. Practical implications In the long run, the implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design using mainly interviews to create an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
  9. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.02
    0.021053089 = product of:
      0.10526544 = sum of:
        0.10526544 = weight(_text_:great in 3061) [ClassicSimilarity], result of:
          0.10526544 = score(doc=3061,freq=2.0), product of:
            0.24172091 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042928502 = queryNorm
            0.43548337 = fieldWeight in 3061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3061)
      0.2 = coord(1/5)
    
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by semantic web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the great many existing classification efforts in the framework of the Semantic Web.
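A minimal sketch of what such a SKOS rendering might look like, using a hypothetical two-concept scheme (the namespace, notations, and labels are invented for illustration) and plain string templating to emit Turtle:

```python
# Hypothetical mini classification scheme: notation -> (label, broader notation or None)
SCHEME_URI = "http://example.org/scheme"  # placeholder namespace
CONCEPTS = {
    "100": ("Knowledge organization", None),
    "110": ("Ontologies", "100"),
}

def to_skos_turtle(scheme_uri, concepts):
    """Render a notation/label/broader table as SKOS in Turtle syntax."""
    lines = [
        "@prefix skos: <http://www.w3.org/2004/02/skos/core#> .",
        f"<{scheme_uri}> a skos:ConceptScheme .",
    ]
    for notation, (label, broader) in concepts.items():
        uri = f"{scheme_uri}/{notation}"
        lines.append(f"<{uri}> a skos:Concept ;")
        lines.append(f'    skos:notation "{notation}" ;')
        lines.append(f'    skos:prefLabel "{label}"@en ;')
        if broader:  # hierarchy becomes skos:broader links
            lines.append(f"    skos:broader <{scheme_uri}/{broader}> ;")
        lines.append(f"    skos:inScheme <{scheme_uri}> .")
    return "\n".join(lines)

print(to_skos_turtle(SCHEME_URI, CONCEPTS))
```

A real publication workflow would typically use an RDF library and add scheme-level metadata (title, creator, version) as the guide describes.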
  10. Biagetti, M.T.: Ontologies as knowledge organization systems (2021) 0.02
    0.021053089 = product of:
      0.10526544 = sum of:
        0.10526544 = weight(_text_:great in 439) [ClassicSimilarity], result of:
          0.10526544 = score(doc=439,freq=2.0), product of:
            0.24172091 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042928502 = queryNorm
            0.43548337 = fieldWeight in 439, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=439)
      0.2 = coord(1/5)
    
    Abstract
    This contribution presents the principal features of ontologies, drawing special attention to the comparison between ontologies and the different kinds of knowledge organization systems (KOS). The focus is on the semantic richness exhibited by ontologies, which allows the creation of a great number of relationships between terms. That establishes ontologies as the most evolved type of KOS. The concepts of "conceptualization" and "formalization" and the key components of ontologies are described and discussed, along with upper and domain ontologies and special typologies, such as bibliographical ontologies and biomedical ontologies. The use of ontologies in the digital libraries environment, where they have replaced thesauri for query expansion in searching, and the role they are playing in the Semantic Web, especially for semantic interoperability, are sketched.
  11. Semantic technologies in content management systems : trends, applications and evaluations (2012) 0.02
    0.01681566 = product of:
      0.084078304 = sum of:
        0.084078304 = weight(_text_:business in 4893) [ClassicSimilarity], result of:
          0.084078304 = score(doc=4893,freq=6.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.38719696 = fieldWeight in 4893, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.03125 = fieldNorm(doc=4893)
      0.2 = coord(1/5)
    
    Abstract
    Content Management Systems (CMSs) are used in almost every industry by millions of end-user organizations. In contrast to the 1990s, they are no longer used as isolated applications in a single organization; they now support critical core operations in business ecosystems. Content management today is more interactive and more integrative: interactive because end-users are increasingly content creators themselves, and integrative because content elements can be embedded into various other applications. The authors of this book investigate how Semantic Technologies can increase the interactivity and integration capabilities of CMSs and discuss their business value to millions of end-user organizations. The book therefore has the objective of reflecting on existing applications as well as discussing and presenting new applications for CMSs that use Semantic Technologies. An evaluation of 27 CMSs concludes the book and provides a basis for IT executives who plan to adopt or replace a CMS in the near future.
    Series
    Springer eBook Collection : Business and Economics
  12. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.02
    0.015037919 = product of:
      0.0751896 = sum of:
        0.0751896 = weight(_text_:great in 2861) [ClassicSimilarity], result of:
          0.0751896 = score(doc=2861,freq=2.0), product of:
            0.24172091 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042928502 = queryNorm
            0.31105953 = fieldWeight in 2861, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
      0.2 = coord(1/5)
    
    Abstract
    Today's conventional search engines hardly provide content that is truly relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. Hence the need for semantic web search (SWS), an emerging area of web search that combines Natural Language Processing and Artificial Intelligence. The objective of the work presented here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not a mere keyword search: it works one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained are accurate enough to satisfy the user's request, and the level of accuracy is enhanced because the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links. For ranking, an algorithm has been applied that fetches more apt results for the user query.
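The keyword-expansion step described above can be sketched as a simple synonym lookup (the synonym table and terms here are hypothetical stand-ins for SIEU's ontology relations, invented for illustration):

```python
# Hypothetical synonym table standing in for an ontology's equivalence relations
SYNONYMS = {
    "professor": ["faculty", "lecturer"],
    "course": ["subject", "module"],
}

def expand_query(query):
    """Return the original query terms plus any ontology-derived synonyms."""
    expanded = []
    for term in query.lower().split():
        expanded.append(term)
        expanded.extend(SYNONYMS.get(term, []))  # unknown terms pass through
    return expanded

print(expand_query("professor course"))
```

The expanded term list would then be submitted to the underlying search engine, and the results re-ranked as the abstract describes.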
  13. Sigel, A.: Organisation verteilten Wissens mit semantischen Wissensnetzen und der Aggregation semantischer Wissensdienste am Beispiel Digitale Bibliotheken/Kulturelles Erbe (2006) 0.01
    0.014562788 = product of:
      0.07281394 = sum of:
        0.07281394 = weight(_text_:business in 5890) [ClassicSimilarity], result of:
          0.07281394 = score(doc=5890,freq=2.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.33532238 = fieldWeight in 5890, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.046875 = fieldNorm(doc=5890)
      0.2 = coord(1/5)
    
    Abstract
    This contribution reports on basic ideas from the exploration phase of the project kPeer (Knowledge Peers). Its subject is the decentralized organization, integration and aggregation of knowledge with semantic knowledge technologies in distributed, heterogeneous environments. Knowledge workers who express and organize knowledge decentrally and independently of one another according to local schemas are to interact emergently, so that a useful shared knowledge organization emerges. In addition, digitized statements about the same subject are to be brought together virtually in order to enable new knowledge-intensive products and services. Examples of knowledge integration from the field of digital libraries and cultural heritage serve as a source of inspiration for intended applications in distributed knowledge management (DKM) and e-business.
  14. Khalifa, M.; Shen, K.N.: Applying semantic networks to hypertext design : effects on knowledge structure acquisition and problem solving (2010) 0.01
    0.014562788 = product of:
      0.07281394 = sum of:
        0.07281394 = weight(_text_:business in 3708) [ClassicSimilarity], result of:
          0.07281394 = score(doc=3708,freq=2.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.33532238 = fieldWeight in 3708, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.046875 = fieldNorm(doc=3708)
      0.2 = coord(1/5)
    
    Abstract
    One of the key objectives of knowledge management is to transfer knowledge quickly and efficiently from experts to novices, who are different in terms of the structural properties of domain knowledge or knowledge structure. This study applies experts' semantic networks to hypertext navigation design and examines the potential of the resulting design, i.e., semantic hypertext, in facilitating knowledge structure acquisition and problem solving. Moreover, we argue that the level of sophistication of the knowledge structure acquired by learners is an important mediator influencing the learning outcomes (in this case, problem solving). The research model was empirically tested with a situated experiment involving 80 business professionals. The results of the empirical study provided strong support for the effectiveness of semantic hypertext in transferring knowledge structure and reported a significant full mediating effect of knowledge structure sophistication. Both theoretical and practical implications of this research are discussed.
  15. Wunner, T.; Buitelaar, P.; O'Riain, S.: Semantic, terminological and linguistic interpretation of XBRL (2010) 0.01
    0.014562788 = product of:
      0.07281394 = sum of:
        0.07281394 = weight(_text_:business in 1122) [ClassicSimilarity], result of:
          0.07281394 = score(doc=1122,freq=2.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.33532238 = fieldWeight in 1122, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.046875 = fieldNorm(doc=1122)
      0.2 = coord(1/5)
    
    Abstract
    Standardization efforts in financial reporting have led to large numbers of machine-interpretable vocabularies that attempt to model complex accounting practices in XBRL (eXtensible Business Reporting Language). Because reporting agencies do not require fine-grained semantic and terminological representations, these vocabularies cannot be easily reused. Ontology-based information extraction, in particular, requires much greater semantic and terminological structure, and the introduction of a linguistic structure currently absent from XBRL. In order to facilitate such reuse, we propose a three-faceted methodology that analyzes and enriches the XBRL vocabulary: (1) transform the semantic structure by analyzing the semantic relationships between terms (e.g. taxonomic, meronymic); (2) enhance the terminological structure by using several domain-specific (XBRL), domain-related (SAPTerm, etc.) and domain-independent (GoogleDefine, Wikipedia, etc.) terminologies; and (3) add linguistic structure at the term level (e.g. part-of-speech, morphology, syntactic arguments). This paper outlines a first experiment towards implementing this methodology on the International Financial Reporting Standards XBRL vocabulary.
  16. Veltman, K.H.: Towards a Semantic Web for culture 0.01
    0.013729929 = product of:
      0.06864964 = sum of:
        0.06864964 = weight(_text_:business in 4040) [ClassicSimilarity], result of:
          0.06864964 = score(doc=4040,freq=4.0), product of:
            0.21714608 = queryWeight, product of:
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.042928502 = queryNorm
            0.31614497 = fieldWeight in 4040, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.0583196 = idf(docFreq=763, maxDocs=44218)
              0.03125 = fieldNorm(doc=4040)
      0.2 = coord(1/5)
    
    Abstract
    Today's semantic web deals with meaning in a very restricted sense and offers static solutions. This is adequate for many scientific and technical purposes and for business transactions requiring machine-to-machine communication, but it does not answer the needs of culture. Science, technology and business are concerned primarily with the latest findings, the state of the art, i.e. the paradigm or dominant world-view of the day. In this context, history is considered non-essential because it deals with things that are out of date. By contrast, culture faces a much larger challenge, namely, to re-present changes in ways of knowing; changing meanings in different places at a given time (synchronically) and over time (diachronically). Culture is about both objects and the commentaries on them; about a cumulative body of knowledge; about collective memory and heritage. Here, history plays a central role, and older does not mean less important or less relevant. Hence a Leonardo painting that is 400 years old, or a Greek statue that is 2500 years old, typically has richer commentaries and is often more valuable than its contemporary equivalents. In this context, the science of meaning (semantics) is necessarily much more complex than semantic primitives. A semantic web in the cultural domain must enable us to trace how meaning and knowledge organisation have evolved historically in different cultures. This paper examines five issues to address this challenge: 1) different world-views (i.e. a shift from substance to function and from ontology to multiple ontologies); 2) developments in definitions and meaning; 3) distinctions between words and concepts; 4) new classes of relations; and 5) dynamic models of knowledge organisation. These issues reveal that historical dimensions of cultural diversity in knowledge organisation are also central to the classification of biological diversity. New ways of visualizing knowledge are proposed, using a time/space horizon to distinguish between universals and particulars. It is suggested that new visualization methods make possible a history of questions as well as of answers, thus enabling dynamic access to cultural and historical dimensions of knowledge. Unlike earlier media, which were limited to recording factual dimensions of collective memory, digital media enable us to explore theories, ways of perceiving and ways of knowing; to enter into other mindsets and world-views; and thus to attain novel insights and new levels of tolerance. Some practical consequences are outlined.
  17. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.01
    Abstract
    Information management in corporate environments is a growing problem as companies' information assets expand, along with the need to use them in day-to-day operations. Several management models have been applied on the most diverse fronts, practices that together constitute so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, which are increasingly used in corporate environments, adding semantics and expanding their retrieval potential in the composition and reuse of dynamic documents on demand. The proposed model considers stable elements in the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors and business processes. The methodology takes the state of the art of Information Organization, Representation and Retrieval, Organizational Content Management and Semantic Web technologies in the scientific literature as the basis for an integrative conceptual model; the research is therefore qualitative and exploratory. The predicted steps of the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. As a result, in theoretical terms, the extended model allows heterogeneous and unstructured data to be processed according to the established cut-outs and through the processes listed above, allowing value creation in the composition of dynamic information objects with semantic aggregations to metadata.
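    The steps the abstract names (Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, Storage) can be pictured as a simple pipeline over an unstructured multimedia object. This is a minimal sketch only: the paper defines a conceptual model, not an implementation, and every function name, field name, and value below is a hypothetical illustration.

```python
# Hypothetical sketch of the model's predicted steps as a pipeline.
# All names (define_source, distill, enrich, store, the metadata fields)
# are illustrative assumptions, not part of the paper.

def define_source(obj):
    # Data Type and Source Definition: record where the object comes from.
    obj["source"] = obj.get("source", "corporate-repository")
    return obj

def distill(obj):
    # Data Distillation: extract basic structural facts from the raw object.
    obj["size_bytes"] = len(obj["raw"])
    return obj

def enrich(obj, business_process):
    # Metadata Enrichment: relate the object to actors and business processes.
    obj["metadata"] = {"process": business_process, "type": obj["type"]}
    return obj

def store(obj, repository):
    # Storage: persist the enriched object (here, an in-memory dict).
    repository[obj["id"]] = obj
    return repository

repo = {}
video = {"id": "v1", "type": "video", "raw": b"\x00\x01\x02"}
store(enrich(distill(define_source(video)), "onboarding"), repo)
```

    The point of the sketch is only the ordering of the steps: semantics is added after distillation, so retrieval can later exploit the process and actor links rather than the raw bytes.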
  18. McGuinness, D.L.: Ontologies come of age (2003) 0.01
    Abstract
    Ontologies have moved beyond the domains of library science, philosophy, and knowledge representation. They are now the concerns of marketing departments, CEOs, and mainstream business. Research analyst companies such as Forrester Research report on the critical roles of ontologies in support of browsing and search for e-commerce and in support of interoperability for facilitation of knowledge management and configuration. One now sees ontologies used as central controlled vocabularies that are integrated into catalogues, databases, web publications, knowledge management applications, etc. Large ontologies are essential components in many online applications including search (such as Yahoo and Lycos), e-commerce (such as Amazon and eBay), configuration (such as Dell and PC-Order), etc. One also sees ontologies that have long life spans, sometimes in multiple projects (such as UMLS, SIC codes, etc.). Such diverse usage generates many implications for ontology environments. In this paper, we will discuss ontologies and requirements in their current instantiations on the web today. We will describe some desirable properties of ontologies. We will also discuss how both simple and complex ontologies are being and may be used to support varied applications. We will conclude with a discussion of emerging trends in ontologies and their environments and briefly mention our evolving ontology evolution environment.
  19. Hepp, M.; Bruijn, J. de: GenTax : a generic methodology for deriving OWL and RDF-S ontologies from hierarchical classifications, thesauri, and inconsistent taxonomies (2007) 0.01
    Abstract
    Hierarchical classifications, thesauri, and informal taxonomies are likely the most valuable input for creating, at reasonable cost, non-toy ontologies in many domains. They contain, readily available, a wealth of category definitions plus a hierarchy, and they reflect some degree of community consensus. However, their transformation into useful ontologies is not as straightforward as it appears. In this paper, we show that (1) it often depends on the context of usage whether an informal hierarchical categorization schema is a classification, a thesaurus, or a taxonomy, and (2) present a novel methodology for automatically deriving consistent RDF-S and OWL ontologies from such schemas. Finally, we (3) demonstrate the usefulness of this approach by transforming the two e-business categorization standards eCl@ss and UNSPSC into ontologies that overcome the limitations of earlier prototypes. Our approach allows for the script-based creation of meaningful ontology classes for a particular context while preserving the original hierarchy, even if the latter is not a real subsumption hierarchy in this particular context. Human intervention in the transformation is limited to checking some conceptual properties and identifying frequent anomalies, and the only input required is an informal categorization plus a notion of the target context. In particular, the approach does not require instance data, as ontology learning approaches would usually do.
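    The core move described above, preserving an informal hierarchy as ontology classes with subsumption links, can be sketched in a few lines. This is not the GenTax methodology itself (which additionally derives context-specific classes and checks whether each link is a true subsumption in the target context); it only illustrates the baseline transformation. The namespace, function name, and toy taxonomy are assumptions for illustration.

```python
# Minimal sketch: turning an informal hierarchical classification into
# RDFS-style (subject, predicate, object) triples. In the spirit of, but
# much simpler than, the GenTax approach described above.

def hierarchy_to_triples(hierarchy, parent=None, ns="http://example.org/onto#"):
    """Walk a nested dict {category: {subcategory: ...}} and emit triples
    that declare each category as a class and preserve the hierarchy."""
    triples = []
    for name, children in hierarchy.items():
        cls = ns + name.replace(" ", "_")
        triples.append((cls, "rdf:type", "rdfs:Class"))
        if parent is not None:
            # Keep the original hierarchy as a subsumption link; GenTax
            # would first check whether this is a real subsumption in the
            # chosen context before asserting it.
            triples.append((cls, "rdfs:subClassOf", parent))
        triples.extend(hierarchy_to_triples(children, parent=cls, ns=ns))
    return triples

# Toy stand-in for a product classification such as eCl@ss or UNSPSC.
taxonomy = {"Products": {"Computers": {"Notebooks": {}}, "Printers": {}}}
triples = hierarchy_to_triples(taxonomy)
```

    A real transformation would emit these triples through an RDF library and, as the paper stresses, would not blindly assert `rdfs:subClassOf` for every parent-child pair.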
  20. Herre, H.: Formal ontology and the foundation of knowledge organization (2013) 0.01
    Abstract
    Research in ontology has, in recent years, become widespread in the field of information systems, in various areas of science, in business, in economics, and in industry. The importance of ontologies is increasingly recognized in fields as diverse as e-commerce, the semantic web, enterprise information integration, information science, qualitative modeling of physical systems, natural language processing, knowledge engineering, and databases. Ontologies provide formal specifications and harmonized definitions of concepts used to represent knowledge of specific domains. An ontology supplies a unifying framework for communication; it establishes a basis for knowledge organization and knowledge representation and contributes to theory formation and modeling of a specific domain. In the current paper, we present and discuss principles of the organization and representation of knowledge that grew out of the use of formal ontology. The core of the discussed ontological framework is a top-level ontology, called GFO (General Formal Ontology), which is being developed at the University of Leipzig. These principles make use of the onto-axiomatic method, of graduated conceptualizations, of levels of reality, and of top-level-supported methods for ontology development. We explore the interrelations between formal ontology and knowledge organization, and argue for a close interaction between both fields.
