Search (157 results, page 2 of 8)

  • × theme_ss:"Wissensrepräsentation"
  • × year_i:[2010 TO 2020}
  1. Manaf, N.A. Abdul; Bechhofer, S.; Stevens, R.: ¬The current state of SKOS vocabularies on the Web (2012) 0.03
    0.032260682 = product of:
      0.09678204 = sum of:
        0.064012155 = weight(_text_:web in 266) [ClassicSimilarity], result of:
          0.064012155 = score(doc=266,freq=12.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4416067 = fieldWeight in 266, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=266)
        0.03276989 = weight(_text_:computer in 266) [ClassicSimilarity], result of:
          0.03276989 = score(doc=266,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 266, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=266)
      0.33333334 = coord(2/6)
    
    Abstract
    We present a survey of the current state of Simple Knowledge Organization System (SKOS) vocabularies on the Web. Candidate vocabularies were gathered through collections and web crawling, with 478 identified as complying with a given definition of a SKOS vocabulary. Analyses were then conducted that included investigation of the use of SKOS constructs; the use of SKOS semantic relations and lexical labels; and the structure of vocabularies in terms of the hierarchical and associative relations, branching factors and the depth of the vocabularies. Even though SKOS concepts are considered to be the core of SKOS vocabularies, we found that not all published SKOS vocabularies explicitly declared SKOS concepts. Almost one-third of the SKOS vocabularies collected fall into the category of term lists, with no use of any SKOS semantic relations. As concept labelling is core to SKOS vocabularies, a surprising finding is that not all SKOS vocabularies use SKOS lexical labels, whether skos:prefLabel or skos:altLabel, for their concepts. The branching factors and maximum depth of the vocabularies have no direct relationship to the size of the vocabularies. We also observed some common modelling slips in SKOS vocabularies. The survey is useful when considering, for example, converting artefacts such as OWL ontologies into SKOS, where a definition of typicality of SKOS vocabularies could be used to guide the conversion. Moreover, the survey results can serve to provide a better understanding of the modelling styles of the SKOS vocabularies published on the Web, especially when considering the creation of applications that utilize these vocabularies.
    Series
    Lecture notes in computer science; 7295
    Source
    9th Extended Semantic Web Conference (ESWC), 2012-05-27/2012-05-31 in Hersonissos, Crete, Greece. Eds.: Elena Simperl et al
    Theme
    Semantic Web
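
    The indented figures above are Lucene's ClassicSimilarity (TF-IDF) explain output for this record's relevance score. As a minimal sketch, added here for orientation and not part of the catalogue record, the listed factors for the "web" term can be recombined in a few lines of Python; all constants are copied from the breakdown above.

      import math

      # Factors copied from the explain tree for the term "web" in doc 266 above.
      freq = 12.0                     # termFreq of "web" in the field
      idf = 3.2635105                 # 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 4598)
      query_norm = 0.044416238
      field_norm = 0.0390625

      tf = math.sqrt(freq)                        # 3.4641016
      query_weight = idf * query_norm             # 0.14495286
      field_weight = tf * idf * field_norm        # 0.4416067
      web_weight = query_weight * field_weight    # 0.064012155

      # Two of six query terms matched, hence coord(2/6); 0.03276989 is the
      # "computer" term weight taken from the same breakdown.
      score = (web_weight + 0.03276989) * (2 / 6)
      print(score)                                # ~0.0322607, as listed above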
  2. Rousset, M.-C.; Atencia, M.; David, J.; Jouanot, F.; Ulliana, F.; Palombi, O.: Datalog revisited for reasoning in linked data (2017) 0.03
    0.032260682 = product of:
      0.09678204 = sum of:
        0.064012155 = weight(_text_:web in 3936) [ClassicSimilarity], result of:
          0.064012155 = score(doc=3936,freq=12.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4416067 = fieldWeight in 3936, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3936)
        0.03276989 = weight(_text_:computer in 3936) [ClassicSimilarity], result of:
          0.03276989 = score(doc=3936,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 3936, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3936)
      0.33333334 = coord(2/6)
    
    Abstract
    Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links and properties on those entities. Equipping Linked Data with inference paves the way to making the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists of equipping RDF triplestores with Datalog inference rules. This rule language makes it possible to capture in a uniform manner OWL constraints that are useful in practice, such as property transitivity or symmetry, but also domain-specific rules with practical relevance for users in many domains of interest. The expressivity and the genericity of this framework are illustrated for modeling Linked Data applications and for developing inference algorithms. In particular, we show how it allows us to model the problem of data linkage in Linked Data as a reasoning problem on possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, whilst effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction.
    Series
    Lecture notes in computer science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
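
    As a rough, hypothetical illustration of the rule language described in the abstract above (a deductive RDF triplestore applying a Datalog rule such as property transitivity), the following Python sketch forward-chains a single transitivity rule over a handful of RDF-like triples; the predicate and resource names are invented for the example and are not taken from the paper.

      # Naive forward chaining of one Datalog-style rule over RDF-like triples:
      #   partOf(X, Z) :- partOf(X, Y), partOf(Y, Z).
      triples = {
          ("ex:finger", "ex:partOf", "ex:hand"),
          ("ex:hand", "ex:partOf", "ex:arm"),
          ("ex:arm", "ex:partOf", "ex:body"),
      }

      def saturate(facts):
          """Apply the transitivity rule until no new triples are derived."""
          changed = True
          while changed:
              changed = False
              derived = {
                  (x, "ex:partOf", z)
                  for (x, p1, y1) in facts if p1 == "ex:partOf"
                  for (y2, p2, z) in facts if p2 == "ex:partOf" and y2 == y1
              }
              if not derived <= facts:
                  facts |= derived
                  changed = True
          return facts

      for triple in sorted(saturate(set(triples))):
          print(triple)   # includes the inferred ("ex:finger", "ex:partOf", "ex:body")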
  3. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.03
    0.031876236 = product of:
      0.09562871 = sum of:
        0.06553978 = weight(_text_:computer in 4523) [ClassicSimilarity], result of:
          0.06553978 = score(doc=4523,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.40377006 = fieldWeight in 4523, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=4523)
        0.030088935 = product of:
          0.06017787 = sum of:
            0.06017787 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
              0.06017787 = score(doc=4523,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.38690117 = fieldWeight in 4523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4523)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
  4. Lange, C.: Ontologies and languages for representing mathematical knowledge on the Semantic Web (2011) 0.03
    0.031851403 = product of:
      0.09555421 = sum of:
        0.0693383 = weight(_text_:web in 135) [ClassicSimilarity], result of:
          0.0693383 = score(doc=135,freq=22.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.47835067 = fieldWeight in 135, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=135)
        0.02621591 = weight(_text_:computer in 135) [ClassicSimilarity], result of:
          0.02621591 = score(doc=135,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.16150802 = fieldWeight in 135, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=135)
      0.33333334 = coord(2/6)
    
    Abstract
    Mathematics is a ubiquitous foundation of science, technology, and engineering. Specific areas, such as numeric and symbolic computation or logics, enjoy considerable software support. Working mathematicians have recently started to adopt Web 2.0 environments, such as blogs and wikis, but these systems lack machine support for knowledge organization and reuse, and they are disconnected from tools such as computer algebra systems or interactive proof assistants. We argue that such scenarios will benefit from Semantic Web technology. Conversely, mathematics is still underrepresented on the Web of [Linked] Data. There are mathematics-related Linked Data, for example statistical government data or scientific publication databases, but their mathematical semantics has not yet been modeled. We argue that the services for the Web of Data will benefit from a deeper representation of mathematical knowledge. Mathematical knowledge comprises logical and functional structures - formulæ, statements, and theories -, a mixture of rigorous natural language and symbolic notation in documents, application-specific metadata, and discussions about conceptualizations, formalizations, proofs, and (counter-)examples. Our review of approaches to representing these structures covers ontologies for mathematical problems, proofs, interlinked scientific publications, scientific discourse, as well as mathematical metadata vocabularies and domain knowledge from pure and applied mathematics. Many fields of mathematics have not yet been implemented as proper Semantic Web ontologies; however, we show that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data. We conclude with a roadmap for getting the mathematical Web of Data started: what datasets to publish, how to interlink them, and how to take advantage of these new connections.
    Content
    Vgl.: http://www.semantic-web-journal.net/content/ontologies-and-languages-representing-mathematical-knowledge-semantic-web http://www.semantic-web-journal.net/sites/default/files/swj122_2.pdf.
    Source
    Semantic Web journal. 2(2012), no.x
  5. Stuart, D.: Practical ontologies for information professionals (2016) 0.03
    0.030522084 = product of:
      0.09156625 = sum of:
        0.033718713 = weight(_text_:wide in 5152) [ClassicSimilarity], result of:
          0.033718713 = score(doc=5152,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.171337 = fieldWeight in 5152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5152)
        0.05784754 = weight(_text_:web in 5152) [ClassicSimilarity], result of:
          0.05784754 = score(doc=5152,freq=20.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.39907828 = fieldWeight in 5152, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5152)
      0.33333334 = coord(2/6)
    
    Abstract
    Practical Ontologies for Information Professionals provides an accessible introduction and exploration of ontologies and demonstrates their value to information professionals. More data and information are being created than ever before. Ontologies, formal representations of knowledge with rich semantic relationships, have become increasingly important in the context of today's information overload and data deluge. The publishing and sharing of explicit explanations for a wide variety of conceptualizations, in a machine-readable format, has the power both to improve information retrieval and to discover new knowledge. Information professionals are key contributors to the development of new, and increasingly useful, ontologies. Practical Ontologies for Information Professionals provides an accessible introduction to the following: defining the concept of ontologies and why they are increasingly important to information professionals; ontologies and the semantic web; existing ontologies, such as RDF, RDFS, SKOS, and OWL2; adopting and building ontologies, showing how to avoid repetition of work and how to build a simple ontology; interrogating ontologies for reuse; and the future of ontologies and the role of the information professional in their development and use. This book will be useful reading for information professionals in libraries and other cultural heritage institutions who work with digitalization projects, cataloguing and classification, and information retrieval. It will also be useful to LIS students who are new to the field.
    Content
    CHAPTER 1 What is an ontology?; Introduction; The data deluge and information overload; Defining terms; Knowledge organization systems and ontologies; Ontologies, metadata and linked data; What can an ontology do?; Ontologies and information professionals; Alternatives to ontologies; The aims of this book; The structure of this book; CHAPTER 2 Ontologies and the semantic web; Introduction; The semantic web and linked data; Resource Description Framework (RDF); Classes, subclasses and properties; The semantic web stack; Embedded RDF; Alternative semantic visions; Libraries and the semantic web; Other cultural heritage institutions and the semantic web; Other organizations and the semantic web; Conclusion; CHAPTER 3 Existing ontologies; Introduction; Ontology documentation; Ontologies for representing ontologies; Ontologies for libraries; Upper ontologies; Cultural heritage data models; Ontologies for the web; Conclusion; CHAPTER 4 Adopting ontologies; Introduction; Reusing ontologies: application profiles and data models; Identifying ontologies; The ideal ontology discovery tool; Selection criteria; Conclusion; CHAPTER 5 Building ontologies; Introduction; Approaches to building an ontology; The twelve steps; Ontology development example: Bibliometric Metrics Ontology element set; Conclusion; CHAPTER 6 Interrogating ontologies; Introduction; Interrogating ontologies for reuse; Interrogating a knowledge base; Understanding ontology use; Conclusion; CHAPTER 7 The future of ontologies and the information professional; Introduction; The future of ontologies for knowledge discovery; The future role of library and information professionals; The practical development of ontologies
    RSWK
    Ontologie <Wissensverarbeitung> / Semantic Web
    Subject
    Ontologie <Wissensverarbeitung> / Semantic Web
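
    A minimal sketch of the kind of "simple ontology" the blurb above mentions, written with the rdflib library; the vocabulary (ex:Document, ex:Report, ex:hasAuthor) is invented for illustration and does not come from the book.

      from rdflib import Graph, Namespace, Literal
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/ontology#")
      g = Graph()
      g.bind("ex", EX)

      # Two classes and one property with domain and range.
      g.add((EX.Document, RDF.type, RDFS.Class))
      g.add((EX.Report, RDF.type, RDFS.Class))
      g.add((EX.Report, RDFS.subClassOf, EX.Document))
      g.add((EX.hasAuthor, RDF.type, RDF.Property))
      g.add((EX.hasAuthor, RDFS.domain, EX.Document))
      g.add((EX.hasAuthor, RDFS.range, RDFS.Literal))
      g.add((EX.Report, RDFS.label, Literal("Report", lang="en")))

      print(g.serialize(format="turtle"))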
  6. Ma, X.; Carranza, E.J.M.; Wu, C.; Meer, F.D. van der; Liu, G.: ¬A SKOS-based multilingual thesaurus of geological time scale for interoperability of online geological maps (2011) 0.03
    0.028897699 = product of:
      0.08669309 = sum of:
        0.029565949 = weight(_text_:web in 4800) [ClassicSimilarity], result of:
          0.029565949 = score(doc=4800,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.2039694 = fieldWeight in 4800, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4800)
        0.057127144 = product of:
          0.11425429 = sum of:
            0.11425429 = weight(_text_:programs in 4800) [ClassicSimilarity], result of:
              0.11425429 = score(doc=4800,freq=6.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.44373962 = fieldWeight in 4800, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4800)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The usefulness of online geological maps is hindered by linguistic barriers. Multilingual geoscience thesauri alleviate linguistic barriers of geological maps. However, the benefits of multilingual geoscience thesauri for online geological maps are less studied. In this regard, we developed a multilingual thesaurus of geological time scale (GTS) to alleviate linguistic barriers of GTS records among online geological maps. We extended the Simple Knowledge Organization System (SKOS) model to represent the ordinal hierarchical structure of GTS terms. We collected GTS terms in seven languages and encoded them into a thesaurus by using the extended SKOS model. We implemented methods of characteristic-oriented term retrieval in JavaScript programs for accessing Web Map Services (WMS), recognizing GTS terms, and making translations. With the developed thesaurus and programs, we set up a pilot system to test recognitions and translations of GTS terms in online geological maps. Results of this pilot system proved the accuracy of the developed thesaurus and the functionality of the developed programs. Therefore, with proper deployments, SKOS-based multilingual geoscience thesauri can be functional for alleviating linguistic barriers among online geological maps and, thus, improving their interoperability.
    Content
    Article Outline 1. Introduction 2. SKOS-based multilingual thesaurus of geological time scale 2.1. Addressing the insufficiency of SKOS in the context of the Semantic Web 2.2. Addressing semantics and syntax/lexicon in multilingual GTS terms 2.3. Extending SKOS model to capture GTS structure 2.4. Summary of building the SKOS-based MLTGTS 3. Recognizing and translating GTS terms retrieved from WMS 4. Pilot system, results, and evaluation 5. Discussion 6. Conclusions Vgl. unter: http://www.sciencedirect.com/science?_ob=MiamiImageURL&_cid=271720&_user=3865853&_pii=S0098300411000744&_check=y&_origin=&_coverDate=31-Oct-2011&view=c&wchp=dGLbVlt-zSkzS&_valck=1&md5=e2c1daf53df72d034d22278212578f42&ie=/sdarticle.pdf.
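
    A minimal sketch of the modelling idea described above: language-tagged skos:prefLabel values on a single geological time scale concept, with a small lookup acting as translation. It uses the rdflib library; the concept URIs are invented and only two of the seven languages are shown.

      from rdflib import Graph, Namespace, Literal
      from rdflib.namespace import RDF, SKOS

      GTS = Namespace("http://example.org/gts#")   # hypothetical namespace
      g = Graph()
      g.bind("skos", SKOS)

      g.add((GTS.Jurassic, RDF.type, SKOS.Concept))
      g.add((GTS.Jurassic, SKOS.prefLabel, Literal("Jurassic", lang="en")))
      g.add((GTS.Jurassic, SKOS.prefLabel, Literal("Jura", lang="de")))
      g.add((GTS.Jurassic, SKOS.broader, GTS.Mesozoic))

      def translate(graph, concept, lang):
          """Return the prefLabel of a concept in the requested language, if any."""
          for label in graph.objects(concept, SKOS.prefLabel):
              if label.language == lang:
                  return str(label)
          return None

      print(translate(g, GTS.Jurassic, "de"))   # -> "Jura"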
  7. Sánchez, D.; Batet, M.; Valls, A.; Gibert, K.: Ontology-driven web-based semantic similarity (2010) 0.03
    0.028345197 = product of:
      0.08503559 = sum of:
        0.052265707 = weight(_text_:web in 335) [ClassicSimilarity], result of:
          0.052265707 = score(doc=335,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.36057037 = fieldWeight in 335, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=335)
        0.03276989 = weight(_text_:computer in 335) [ClassicSimilarity], result of:
          0.03276989 = score(doc=335,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=335)
      0.33333334 = coord(2/6)
    
    Abstract
    Estimation of the degree of semantic similarity/distance between concepts is a very common problem in research areas such as natural language processing, knowledge acquisition, information retrieval or data mining. In the past, many similarity measures have been proposed, exploiting explicit knowledge - such as the structure of a taxonomy - or implicit knowledge - such as information distribution. In the former case, taxonomies and/or ontologies are used to introduce additional semantics; in the latter case, frequencies of term appearances in a corpus are considered. Classical measures based on those premises suffer from some problems: in the first case, their excessive dependency on the taxonomical/ontological structure; in the second case, the lack of semantics of a pure statistical analysis of occurrences and/or the ambiguity of estimating concept statistical distribution from term appearances. Measures based on Information Content (IC) of taxonomical concepts combine both approaches. However, they heavily depend on a properly pre-tagged and disambiguated corpus according to the ontological entities in order to compute accurate concept appearance probabilities. This limits the applicability of those measures to other ontologies - like specific domain ontologies - and massive corpora - like the Web. In this paper, several of the presented issues are analyzed. Modifications of classical similarity measures are also proposed. They are based on a contextualized and scalable version of IC computation in the Web by exploiting taxonomical knowledge. The goal is to avoid the measures' dependency on the corpus pre-processing to achieve reliable results and minimize language ambiguity. Our proposals are able to outperform classical approaches when using the Web for estimating concept probabilities.
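
    A hedged sketch of the general approach discussed above: Information Content estimated from web occurrence counts (IC(c) = -log p(c)) and concept similarity taken as the IC of the least common subsumer, in the spirit of Resnik's measure. The web_hits function is a stub with made-up counts; a real system would query a search service, and the least common subsumer would come from a taxonomy.

      import math

      def web_hits(term: str) -> int:
          """Stub: in a real system this would query a web search engine."""
          fake_counts = {"mammal": 5_000_000, "dog": 40_000_000, "cat": 60_000_000}
          return fake_counts.get(term, 1)

      TOTAL_PAGES = 1_000_000_000   # assumed size of the indexed web

      def information_content(term: str) -> float:
          # IC(c) = -log p(c), with p(c) estimated from web occurrence counts.
          return -math.log(web_hits(term) / TOTAL_PAGES)

      def resnik_similarity(lcs_term: str) -> float:
          # Resnik: similarity of two concepts = IC of their least common subsumer,
          # which must be supplied from a taxonomy (e.g. "mammal" for dog/cat).
          return information_content(lcs_term)

      print(resnik_similarity("mammal"))   # higher IC of the LCS = more similar concepts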
  8. Semantische Technologien : Grundlagen - Konzepte - Anwendungen (2012) 0.03
    0.028060317 = product of:
      0.08418095 = sum of:
        0.05174041 = weight(_text_:web in 167) [ClassicSimilarity], result of:
          0.05174041 = score(doc=167,freq=16.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35694647 = fieldWeight in 167, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=167)
        0.032440536 = weight(_text_:computer in 167) [ClassicSimilarity], result of:
          0.032440536 = score(doc=167,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.19985598 = fieldWeight in 167, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02734375 = fieldNorm(doc=167)
      0.33333334 = coord(2/6)
    
    Abstract
    This textbook offers a comprehensive introduction to the foundations, potential and applications of semantic technologies. It is aimed at students of computer science and related disciplines as well as at developers who want to use semantic technologies in the workplace or in distributed applications. With its presentation oriented towards practical examples, it also gives users and decision-makers in companies a broad overview of the benefits and possibilities of this technology. Semantic technologies enable computers not only to store and retrieve information, but to evaluate it according to its meaning, to connect it, to combine it into something new, and thus to provide useful services in a flexible and targeted way. The first part of the book introduces the techniques, languages and representation formalisms referred to as semantic technologies. These elements make it possible to describe the knowledge contained in information formally, and thus in a machine-processable way, to represent concepts and relationships, and finally to query and index content and make it accessible in networks. The second part describes how elementary functions and comprehensive services of information and knowledge processing can be realized with semantic technologies. These include the annotation and indexing of information, searching in the resulting structures, explaining semantic relationships, and integrating individual components into complex workflows and application solutions. The third part presents a wide range of application examples in different areas, illustrating the added value, potential and limits of semantic technologies. The systems presented range from tools for personal, individual information management and support functions for groups to new approaches in the Internet of Things and Services, including the integration of different media and applications from medicine to music.
    Content
    Inhalt: 1. Einleitung (A. Dengel, A. Bernardi) 2. Wissensrepräsentation (A. Dengel, A. Bernardi, L. van Elst) 3. Semantische Netze, Thesauri und Topic Maps (O. Rostanin, G. Weber) 4. Das Ressource Description Framework (T. Roth-Berghofer) 5. Ontologien und Ontologie-Abgleich in verteilten Informationssystemen (L. van Elst) 6. Anfragesprachen und Reasoning (M. Sintek) 7. Linked Open Data, Semantic Web Datensätze (G.A. Grimnes, O. Hartig, M. Kiesel, M. Liwicki) 8. Semantik in der Informationsextraktion (B. Adrian, B. Endres-Niggemeyer) 9. Semantische Suche (K. Schumacher, B. Forcher, T. Tran) 10. Erklärungsfähigkeit semantischer Systeme (B. Forcher, T. Roth-Berghofer, S. Agne) 11. Semantische Webservices zur Steuerung von Prooduktionsprozessen (M. Loskyll, J. Schlick, S. Hodeck, L. Ollinger, C. Maxeiner) 12. Wissensarbeit am Desktop (S. Schwarz, H. Maus, M. Kiesel, L. Sauermann) 13. Semantische Suche für medizinische Bilder (MEDICO) (M. Möller, M. Sintek) 14. Semantische Musikempfehlungen (S. Baumann, A. Passant) 15. Optimierung von Instandhaltungsprozessen durch Semantische Technologien (P. Stephan, M. Loskyll, C. Stahl, J. Schlick)
    RSWK
    Semantic Web / Information Extraction / Suche / Wissensbasiertes System / Aufsatzsammlung
    Semantic Web / Web Services / Semantische Modellierung / Ontologie <Wissensverarbeitung> / Suche / Navigieren / Anwendungsbereich / Aufsatzsammlung
    Subject
    Semantic Web / Information Extraction / Suche / Wissensbasiertes System / Aufsatzsammlung
    Semantic Web / Web Services / Semantische Modellierung / Ontologie <Wissensverarbeitung> / Suche / Navigieren / Anwendungsbereich / Aufsatzsammlung
    Theme
    Semantic Web
  9. Gödert, W.: Facets and typed relations as tools for reasoning processes in information retrieval (2014) 0.03
    0.027487947 = product of:
      0.08246384 = sum of:
        0.036585998 = weight(_text_:web in 1565) [ClassicSimilarity], result of:
          0.036585998 = score(doc=1565,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 1565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1565)
        0.04587784 = weight(_text_:computer in 1565) [ClassicSimilarity], result of:
          0.04587784 = score(doc=1565,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 1565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1565)
      0.33333334 = coord(2/6)
    
    Abstract
    Faceted arrangement of entities and typed relations for representing different associations between the entities are established tools in knowledge representation. In this paper, a proposal is discussed that combines both tools to draw inferences along relational paths. This approach may yield new benefits for information retrieval processes, especially when modeled for heterogeneous environments in the Semantic Web. Faceted arrangement can be used as a selection tool for the semantic knowledge modeled within the knowledge representation. Typed relations between the entities of different facets can be used as restrictions for selecting them across the facets.
    Series
    Communications in computer and information science; 478
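
    An invented toy sketch of the combination described above: entities are grouped into facets, and only relations of selected types are followed when traversing from a start entity, so the facet acts as a selection filter and the relation types as restrictions on the inference path.

      # Hypothetical toy data: facet membership and typed relations between entities.
      facets = {
          "Agents": {"Darwin"},
          "Works": {"Origin of Species"},
          "Topics": {"Evolution", "Natural selection"},
      }
      relations = [
          ("Darwin", "authorOf", "Origin of Species"),
          ("Origin of Species", "about", "Evolution"),
          ("Evolution", "narrower", "Natural selection"),
      ]

      def follow(start, allowed_types, target_facet):
          """Traverse only relations of allowed types; collect hits in the target facet."""
          frontier, seen, hits = {start}, {start}, set()
          while frontier:
              nxt = set()
              for s, rel_type, o in relations:
                  if s in frontier and rel_type in allowed_types and o not in seen:
                      nxt.add(o)
                      if o in facets[target_facet]:
                          hits.add(o)
              seen |= nxt
              frontier = nxt
          return hits

      print(follow("Darwin", {"authorOf", "about"}, "Topics"))  # {'Evolution'}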
  10. Atanassova, I.; Bertin, M.: Semantic facets for scientific information retrieval (2014) 0.03
    0.027487947 = product of:
      0.08246384 = sum of:
        0.036585998 = weight(_text_:web in 4471) [ClassicSimilarity], result of:
          0.036585998 = score(doc=4471,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 4471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4471)
        0.04587784 = weight(_text_:computer in 4471) [ClassicSimilarity], result of:
          0.04587784 = score(doc=4471,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 4471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4471)
      0.33333334 = coord(2/6)
    
    Series
    Communications in computer and information science; 475
    Source
    Semantic Web Evaluation Challenge. SemWebEval 2014 at ESWC 2014, Anissaras, Crete, Greece, May 25-29, 2014, Revised Selected Papers. Eds.: V. Presutti et al
  11. Pfeiffer, S.: Entwicklung einer Ontologie für die wissensbasierte Erschließung des ISDC-Repository und die Visualisierung kontextrelevanter semantischer Zusammenhänge (2010) 0.03
    0.026175743 = product of:
      0.07852723 = sum of:
        0.033718713 = weight(_text_:wide in 4658) [ClassicSimilarity], result of:
          0.033718713 = score(doc=4658,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.171337 = fieldWeight in 4658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4658)
        0.04480851 = weight(_text_:web in 4658) [ClassicSimilarity], result of:
          0.04480851 = score(doc=4658,freq=12.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3091247 = fieldWeight in 4658, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4658)
      0.33333334 = coord(2/6)
    
    Abstract
    Today, information of all kinds is accessible to a broad section of the population via the World Wide Web (WWW). However, it is difficult to prepare the existing documents in such a way that their content can be interpreted by machines. The Semantic Web, a further development of the WWW, aims to change this by offering web content in machine-understandable formats. This allows automated processes to be used for query optimization and for interlinking knowledge bases. The Web Ontology Language (OWL) is one possible language in which knowledge can be described and stored (see chapter 4, OWL). The software product Protégé supports the OWL standard, which is why a large part of the modelling work was carried out in Protégé. At present, when searching for information on the Internet, the user is in most cases supported only by the keyword indexing of document content carried out by search engine operators, i.e. documents can only be searched for a particular word or phrase. The resulting list of search results must then be reviewed and ranked by relevance by the user, which can be a very time- and labour-intensive process. This is exactly where the Semantic Web can make a substantial contribution to preparing information for the user, since the returned search results have already undergone semantic checking and linking. Irrelevant information sources are therefore excluded from the output from the outset, which speeds up finding the documents and information sought in a particular knowledge domain.
    Various approaches are being pursued to improve the interlinking of data, information and knowledge in the WWW. Besides the Semantic Web in its various forms, there are also other ideas and concepts that support the linking of knowledge. Forums, social networks and wikis are one way of exchanging knowledge. In wikis, knowledge is bundled in the form of articles in order to make it available to a broad audience. Information offered there should, however, be viewed critically, since in most cases the authors of the articles do not have to take responsibility for the published content. Another way of interlinking knowledge is the Web of Linked Data, in which structured data on the WWW are connected to each other by references to other data sources. In the course of a search, the user is thus referred to thematically related and linked data sources. In this thesis, the geoscientific metadata, with their contents and mutual relationships, stored at the GFZ in, among others, the Information System and Data Center (ISDC), are to be modelled as an ontology using the language constructs of OWL. This ontology is intended to decisively improve the representation and retrieval of ISDC-specific domain knowledge through the semantic interlinking of persistent ISDC metadata. The modelling options demonstrated in this thesis, first with the Extensible Markup Language (XML) and later with OWL, map the existing metadata holdings onto a semantic level (see figure 2). Through the defined use of the semantics available in OWL, machines can derive added value from the metadata and make it available to the user. Geoscientific information, data and knowledge can be placed in semantic contexts and represented comprehensibly. Supporting information, such as images of the instruments, platforms or persons recorded in the ISDC, can also easily be incorporated into the ontology. Queries about geoscientific phenomena can be posed and answered even without expert knowledge of the relevant relationships and terms. Information retrieval and preparation gain in quality and make full use of the existing resources.
  12. Allocca, C.; Aquin, M.d'; Motta, E.: Impact of using relationships between ontologies to enhance the ontology search results (2012) 0.03
    0.026011107 = product of:
      0.07803332 = sum of:
        0.045263432 = weight(_text_:web in 264) [ClassicSimilarity], result of:
          0.045263432 = score(doc=264,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3122631 = fieldWeight in 264, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=264)
        0.03276989 = weight(_text_:computer in 264) [ClassicSimilarity], result of:
          0.03276989 = score(doc=264,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=264)
      0.33333334 = coord(2/6)
    
    Abstract
    Using semantic web search engines, such as Watson, Swoogle or Sindice, to find ontologies is a complex exploratory activity. It generally requires formulating multiple queries, browsing pages of results, and assessing the returned ontologies against each other to obtain a relevant and adequate subset of ontologies for the intended use. Our hypothesis is that at least some of the difficulties related to searching ontologies stem from the lack of structure in the search results, where ontologies that are implicitly related to each other are presented as disconnected and shown on different result pages. In earlier publications we presented a software framework, Kannel, which is able to automatically detect and make explicit relationships between ontologies in large ontology repositories. In this paper, we present a study that compares the use of the Watson ontology search engine with an extension, Watson+Kannel, which provides information regarding the various relationships occurring between the result ontologies. We evaluate Watson+Kannel by demonstrating through various indicators that explicit relationships between ontologies improve users' efficiency in ontology search, thus validating our hypothesis.
    Series
    Lecture notes in computer science; 7295
    Source
    9th Extended Semantic Web Conference (ESWC), 2012-05-27/2012-05-31 in Hersonissos, Crete, Greece. Eds.: Elena Simperl et al
    Theme
    Semantic Web
  13. Boer, V. de; Wielemaker, J.; Gent, J. van; Hildebrand, M.; Isaac, A.; Ossenbruggen, J. van; Schreiber, G.: Supporting linked data production for cultural heritage institutes : the Amsterdam Museum case study (2012) 0.03
    0.026011107 = product of:
      0.07803332 = sum of:
        0.045263432 = weight(_text_:web in 265) [ClassicSimilarity], result of:
          0.045263432 = score(doc=265,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3122631 = fieldWeight in 265, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=265)
        0.03276989 = weight(_text_:computer in 265) [ClassicSimilarity], result of:
          0.03276989 = score(doc=265,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 265, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=265)
      0.33333334 = coord(2/6)
    
    Abstract
    Within the cultural heritage field, proprietary metadata and vocabularies are being transformed into public Linked Data. These efforts have mostly been at the level of large-scale aggregators such as Europeana where the original data is abstracted to a common format and schema. Although this approach ensures a level of consistency and interoperability, the richness of the original data is lost in the process. In this paper, we present a transparent and interactive methodology for ingesting, converting and linking cultural heritage metadata into Linked Data. The methodology is designed to maintain the richness and detail of the original metadata. We introduce the XMLRDF conversion tool and describe how it is integrated in the ClioPatria semantic web toolkit. The methodology and the tools have been validated by converting the Amsterdam Museum metadata to a Linked Data version. In this way, the Amsterdam Museum became the first 'small' cultural heritage institution with a node in the Linked Data cloud.
    Series
    Lecture notes in computer science; 7295
    Source
    9th Extended Semantic Web Conference (ESWC), 2012-05-27/2012-05-31 in Hersonissos, Crete, Greece. Eds.: Elena Simperl et al
    Theme
    Semantic Web
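
    The ingest-and-convert pipeline described above can be sketched, in a very reduced form, as an XML-to-RDF mapping; the XML layout, property names and URIs below are invented and do not reflect the actual Amsterdam Museum data, the XMLRDF tool or ClioPatria.

      import xml.etree.ElementTree as ET
      from rdflib import Graph, Namespace, Literal, URIRef
      from rdflib.namespace import RDF, RDFS

      xml_record = """<object id="12345">
        <title>Portrait of a Regent</title>
        <creator>Unknown painter</creator>
      </object>"""

      EX = Namespace("http://example.org/museum/")
      g = Graph()

      # Map one XML record to RDF triples, keeping the original field values.
      elem = ET.fromstring(xml_record)
      subject = URIRef(EX + "object/" + elem.get("id"))
      g.add((subject, RDF.type, EX.MuseumObject))
      g.add((subject, RDFS.label, Literal(elem.findtext("title"))))
      g.add((subject, EX.creator, Literal(elem.findtext("creator"))))

      print(g.serialize(format="turtle"))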
  14. Arp, R.; Smith, B.; Spear, A.D.: Building ontologies with basic formal ontology (2015) 0.02
    0.024428546 = product of:
      0.07328564 = sum of:
        0.036210746 = weight(_text_:web in 3444) [ClassicSimilarity], result of:
          0.036210746 = score(doc=3444,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.24981049 = fieldWeight in 3444, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3444)
        0.037074894 = weight(_text_:computer in 3444) [ClassicSimilarity], result of:
          0.037074894 = score(doc=3444,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.22840683 = fieldWeight in 3444, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=3444)
      0.33333334 = coord(2/6)
    
    Abstract
    In the era of "big data," science is increasingly information driven, and the potential for computers to store, manage, and integrate massive amounts of data has given rise to such new disciplinary fields as biomedical informatics. Applied ontology offers a strategy for the organization of scientific information in computer-tractable form, drawing on concepts not only from computer and information science but also from linguistics, logic, and philosophy. This book provides an introduction to the field of applied ontology that is of particular relevance to biomedicine, covering theoretical components of ontologies, best practices for ontology design, and examples of biomedical ontologies in use. After defining an ontology as a representation of the types of entities in a given domain, the book distinguishes between different kinds of ontologies and taxonomies, and shows how applied ontology draws on more traditional ideas from metaphysics. It presents the core features of the Basic Formal Ontology (BFO), now used by over one hundred ontology projects around the world, and offers examples of domain ontologies that utilize BFO. The book also describes Web Ontology Language (OWL), a common framework for Semantic Web technologies. Throughout, the book provides concrete recommendations for the design and construction of domain ontologies.
    Content
    What Is an Ontology? - Kinds of Ontologies and the Role of Taxonomies - Principles of Best Practice 1: Domain Ontology Design - Principles of Best Practice II: Terms, Definitions, and Classification - Introduction to Basic Formal Ontology I: Continuants - Introduction to Basic Formal Ontology II: Occurrents - The Ontology of Relations - Basic Formal Ontology at Work - Appendix on Implementation: Languages, Editors, Reasoners, Browsers, Tools for Reuse - Glossary - Web Links Mentioned in the Text Including Ontologies, Research Groups, Software, and Reasoning Tools
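
    A minimal, hypothetical sketch of an OWL domain ontology of the kind the book discusses, written with rdflib; the class and property names and the namespace are invented, and the actual BFO IRIs are deliberately not reproduced here.

      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF, RDFS, OWL

      EX = Namespace("http://example.org/biomed#")
      g = Graph()
      g.bind("owl", OWL)
      g.bind("ex", EX)

      # Two domain classes and an object property linking them.
      g.add((EX.CellDivision, RDF.type, OWL.Class))
      g.add((EX.Cell, RDF.type, OWL.Class))
      g.add((EX.hasParticipant, RDF.type, OWL.ObjectProperty))
      g.add((EX.hasParticipant, RDFS.domain, EX.CellDivision))
      g.add((EX.hasParticipant, RDFS.range, EX.Cell))

      # In a real BFO-based ontology these classes would additionally be declared
      # as subclasses of the appropriate BFO categories (process, material entity).
      print(g.serialize(format="turtle"))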
  15. Frické, M.: Logic and the organization of information (2012) 0.02
    0.02380526 = product of:
      0.07141578 = sum of:
        0.031684402 = weight(_text_:web in 1782) [ClassicSimilarity], result of:
          0.031684402 = score(doc=1782,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21858418 = fieldWeight in 1782, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
        0.039731376 = weight(_text_:computer in 1782) [ClassicSimilarity], result of:
          0.039731376 = score(doc=1782,freq=6.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24477258 = fieldWeight in 1782, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
      0.33333334 = coord(2/6)
    
    Abstract
    Logic and the Organization of Information closely examines the historical and contemporary methodologies used to catalogue information objects - books, ebooks, journals, articles, web pages, images, emails, podcasts and more - in the digital era. This book provides an in-depth technical background for digital librarianship, and covers a broad range of theoretical and practical topics including: classification theory, topic annotation, automatic clustering, generalized synonymy and concept indexing, distributed libraries, semantic web ontologies and Simple Knowledge Organization System (SKOS). It also analyzes the challenges facing today's information architects, and outlines a series of techniques for overcoming them. Logic and the Organization of Information is intended for practitioners and professionals working at a design level as a reference book for digital librarianship. Advanced-level students, researchers and academics studying information science, library science, digital libraries and computer science will also find this book invaluable.
    Footnote
    Rez. in: J. Doc. 70(2014) no.4: "Books on the organization of information and knowledge, aimed at a library/information audience, tend to fall into two clear categories. Most are practical and pragmatic, explaining the "how" as much or more than the "why". Some are theoretical, in part or in whole, showing how the practice of classification, indexing, resource description and the like relates to philosophy, logic, and other foundational bases; the books by Langridge (1992) and by Svenonius (2000) are well-known examples of this latter kind. To this category certainly belongs a recent book by Martin Frické (2012). The author takes the reader for an extended tour through a variety of aspects of information organization, including classification and taxonomy, alphabetical vocabularies and indexing, cataloguing and FRBR, and aspects of the semantic web. The emphasis throughout is on showing how practice is, or should be, underpinned by formal structures; there is a particular emphasis on first order predicate calculus. The advantages of a greater, and more explicit, use of symbolic logic is a recurring theme of the book. There is a particularly commendable historical dimension, often omitted in texts on this subject. It cannot be said that this book is entirely an easy read, although it is well written with a helpful index, and its arguments are generally well supported by clear and relevant examples. It is thorough and detailed, but thereby seems better geared to the needs of advanced students and researchers than to the practitioners who are suggested as a main market. For graduate students in library/information science and related disciplines, in particular, this will be a valuable resource. I would place it alongside Svenonius' book as the best insight into the theoretical "why" of information organization. It has evoked a good deal of interest, including a set of essay commentaries in Journal of Information Science (Gilchrist et al., 2013). Introducing these, Alan Gilchrist rightly says that Frické deserves a salute for making explicit the fundamental relationship between the ancient discipline of logic and modern information organization. If information science is to continue to develop, and make a contribution to the organization of the information environments of the future, then this book sets the groundwork for the kind of studies which will be needed." (D. Bawden)
    LCSH
    Computer science
    Subject
    Computer science
  16. Nguyen, P.H.P.; Kaneiwa, K.; Nguyen, M.-Q.: Ontology inferencing rules and operations in conceptual structure theory (2010) 0.02
    0.023561096 = product of:
      0.070683286 = sum of:
        0.031359423 = weight(_text_:web in 4421) [ClassicSimilarity], result of:
          0.031359423 = score(doc=4421,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 4421, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4421)
        0.039323866 = weight(_text_:computer in 4421) [ClassicSimilarity], result of:
          0.039323866 = score(doc=4421,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 4421, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4421)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper describes in detail the inferencing rules and operations concerning an ontology formalism previously proposed under Conceptual Structure Theory. The ontology consists of hierarchies of concept, relation and meta-relation types, and formal relationships between them, in particular between arguments of relation and meta-relation types. Inferencing rules are described as well as operations to maintain the ontology in a semantically consistent state at all times. The main aim of the paper is to provide a blueprint for the implementation of ontologies in the future Semantic Web.
    Footnote
    Preprint. To be published as Vol 122 in the Conferences in Research and Practice in Information Technology Series by the Australian Computer Society Inc. http://crpit.com/.
  17. Bold, N.; Kim, W.-J.; Yang, J.-D.: Converting object-based thesauri into XML Topic Maps (2010) 0.02
    0.023561096 = product of:
      0.070683286 = sum of:
        0.031359423 = weight(_text_:web in 4799) [ClassicSimilarity], result of:
          0.031359423 = score(doc=4799,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 4799, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4799)
        0.039323866 = weight(_text_:computer in 4799) [ClassicSimilarity], result of:
          0.039323866 = score(doc=4799,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 4799, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4799)
      0.33333334 = coord(2/6)
    
    Abstract
    Constructing an ontology is generally a very time-consuming process. Since a vast number of thesauri are currently available, exploiting them may be a feasible way to construct an ontology in a short period of time. This paper designs and implements an XTM (XML Topic Maps) code converter that generates an XTM-coded ontology from an object-based thesaurus. The latter is an extended thesaurus which enriches conventional thesauri with user-defined associations and a notion of instances and occurrences associated with them. The reason we adopt XTM is that it is a verified and practical methodology to semantically reorganize the conceptual structure of extant web applications with minimal effort. Moreover, since XTM is conceptually similar to our object-based thesauri, the recommendation and inference mechanisms already developed in our system could easily be applied to the generated XTM ontology. To show that the XTM ontology is correct, we also verify it with the Ontopia Omnigator and Vizigator, components of the Ontopia Knowledge Suite (OKS) tool.
    Source
    2010 2nd International Conference on Education Technology and Computer (ICETC)
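
    A highly simplified sketch of the conversion described above, emitting one thesaurus term as an XTM-style topic using Python's standard XML tooling; the element subset and the sample terms are illustrative only and do not follow the paper's converter or the full XTM schema.

      import xml.etree.ElementTree as ET

      def term_to_topic(term_id: str, label: str) -> ET.Element:
          """Build a minimal XTM-like <topic> element for one thesaurus term."""
          topic = ET.Element("topic", {"id": term_id})
          name = ET.SubElement(topic, "name")
          ET.SubElement(name, "value").text = label
          return topic

      topic_map = ET.Element("topicMap", {"version": "2.0"})
      topic_map.append(term_to_topic("t-ontology", "Ontology"))
      topic_map.append(term_to_topic("t-thesaurus", "Thesaurus"))

      print(ET.tostring(topic_map, encoding="unicode"))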
  18. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.02
    0.023561096 = product of:
      0.070683286 = sum of:
        0.031359423 = weight(_text_:web in 3829) [ClassicSimilarity], result of:
          0.031359423 = score(doc=3829,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 3829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.039323866 = weight(_text_:computer in 3829) [ClassicSimilarity], result of:
          0.039323866 = score(doc=3829,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 3829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
      0.33333334 = coord(2/6)
    
    Content
    Thesis submitted to the Graduate School of Natural and Applied Sciences of Middle East Technical University in partial fulfilment of the requirements for the degree of Master of science in Computer Engineering (XII, 57 S.)
    Theme
    Semantic Web
  19. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.02
    0.023242442 = product of:
      0.069727324 = sum of:
        0.036957435 = weight(_text_:web in 4705) [ClassicSimilarity], result of:
          0.036957435 = score(doc=4705,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25496176 = fieldWeight in 4705, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4705)
        0.03276989 = weight(_text_:computer in 4705) [ClassicSimilarity], result of:
          0.03276989 = score(doc=4705,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 4705, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4705)
      0.33333334 = coord(2/6)
    
    Series
    Lecture notes in computer science; 6496
    Source
    The Semantic Web - ISWC 2010. 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I. Eds.: Peter F. Patel-Schneider et al
  20. Marcondes, C.H.; Costa, L.C da.: ¬A model to represent and process scientific knowledge in biomedical articles with semantic Web technologies (2016) 0.02
    0.022436727 = product of:
      0.06731018 = sum of:
        0.052265707 = weight(_text_:web in 2829) [ClassicSimilarity], result of:
          0.052265707 = score(doc=2829,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.36057037 = fieldWeight in 2829, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2829)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 2829) [ClassicSimilarity], result of:
              0.030088935 = score(doc=2829,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 2829, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2829)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Knowledge organization faces the challenge of managing the amount of knowledge available on the Web. Published literature in biomedical sciences is a huge source of knowledge, which can only be managed efficiently through automatic methods. The conventional channel for reporting scientific results is Web electronic publishing. Despite its advances, scientific articles are still published in print formats such as the portable document format (PDF). Semantic Web and Linked Data technologies provide new opportunities for communicating, sharing, and integrating scientific knowledge that can overcome the limitations of the current print format. Here, a semantic model of scholarly electronic articles in biomedical sciences is proposed that can overcome the limitations of traditional flat record formats. Scientific knowledge consists of claims made throughout article texts, especially when semantic elements such as questions, hypotheses and conclusions are stated. These elements, although having different roles, express relationships between phenomena. Once such knowledge units are extracted and represented with technologies such as RDF (Resource Description Framework) and linked data, they may be integrated in reasoning chains. Thereby, the results of scientific research can be published and shared in structured formats, enabling crawling by software agents, semantic retrieval, knowledge reuse, validation of scientific results, and identification of traces of scientific discoveries.
    Date
    12. 3.2016 13:17:22
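
    A minimal sketch of the kind of machine-readable knowledge unit the model above envisages: one conclusion, expressed as a relationship between two phenomena, encoded as RDF with rdflib. The vocabulary (ex:Conclusion, ex:relation, the drug/condition resources) is invented for illustration and is not the authors' schema.

      from rdflib import Graph, Namespace, Literal, BNode
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/claims#")
      g = Graph()
      g.bind("ex", EX)

      # One claim extracted from an article's conclusion, kept as a small node
      # that links the two phenomena it relates and records its rhetorical role.
      claim = BNode()
      g.add((claim, RDF.type, EX.Conclusion))
      g.add((claim, EX.subjectPhenomenon, EX.DrugX))
      g.add((claim, EX.relation, EX.reducesRiskOf))
      g.add((claim, EX.objectPhenomenon, EX.ConditionY))
      g.add((claim, RDFS.comment, Literal("Stated in the article's conclusion section.")))

      print(g.serialize(format="turtle"))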

Languages

  • e 131
  • d 21
  • f 1
  • pt 1
  • sp 1

Types

  • a 120
  • el 33
  • m 15
  • x 11
  • s 6
  • r 2

Subjects