Search (113 results, page 1 of 6)

  • Filter: theme_ss:"Semantic Web"
  1. Weller, K.: Knowledge representation in the Social Semantic Web (2010) 0.06
    0.059326146 = product of:
      0.11865229 = sum of:
        0.10515491 = weight(_text_:representation in 4515) [ClassicSimilarity], result of:
          0.10515491 = score(doc=4515,freq=18.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.53375995 = fieldWeight in 4515, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4515)
        0.013497385 = product of:
          0.040492155 = sum of:
            0.040492155 = weight(_text_:theory in 4515) [ClassicSimilarity], result of:
              0.040492155 = score(doc=4515,freq=4.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.22741209 = fieldWeight in 4515, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4515)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
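
     The tree above is Lucene's "explain" output for its ClassicSimilarity (TF-IDF) ranking: each matching term contributes queryWeight × fieldWeight, where tf(freq) = sqrt(freq), idf(df) = 1 + ln(maxDocs / (df + 1)), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and coord factors scale for the fraction of query clauses matched. As a minimal sketch, assuming these standard ClassicSimilarity formulas and reading the constants off the tree above, the first result's score can be recomputed in Python:

     import math

     QUERY_NORM = 0.042818543   # queryNorm from the explain tree above
     FIELD_NORM = 0.02734375    # fieldNorm of doc 4515
     MAX_DOCS = 44218

     def idf(doc_freq):
         # Lucene ClassicSimilarity inverse document frequency.
         return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

     def term_score(freq, doc_freq):
         tf = math.sqrt(freq)                        # 4.2426405 for freq=18
         query_weight = idf(doc_freq) * QUERY_NORM   # 0.19700786 for 'representation'
         field_weight = tf * idf(doc_freq) * FIELD_NORM
         return query_weight * field_weight

     score = 0.5 * (                     # coord(2/4): 2 of 4 query clauses match
         term_score(18.0, 1206)          # 'representation' -> 0.10515491
         + term_score(4.0, 1878) / 3.0   # 'theory' with coord(1/3) -> 0.013497385
     )
     print(score)                        # -> 0.059326146...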
    
    Abstract
    The main purpose of this book is to sum up the vital and highly topical research issue of knowledge representation on the Web and to discuss novel solutions that combine the benefits of folksonomies and Web 2.0 approaches with ontologies and semantic technologies. The book gives an overview of knowledge representation approaches past, present and future, an introduction to ontologies and Web indexing, and above all the novel approaches to developing ontologies. It combines aspects of knowledge representation for both the Semantic Web (ontologies) and Web 2.0 (folksonomies); currently no other monograph provides a combined overview of these topics. The focus is on using knowledge representation methods for document indexing. For this purpose, considerations from classical librarian interests in knowledge representation (thesauri, classification schemes, etc.) are included that are absent from most other books, which have a stronger background in computer science.
    Footnote
    Review in: iwp 62(2011) H.4, S.205-206 (C. Carstens): "What kinds of knowledge representation exist on the Web, how pronounced are semantic structures in this context, and how can social activities in the sense of Web 2.0 contribute to structuring knowledge on the Web? Weller's book, entitled Knowledge Representation in the Social Semantic Web, is devoted to these questions. The term Social Semantic Web alludes on the one hand to the semantic structuring of data in the sense of the Semantic Web, and on the other hand points to the increasingly collaborative creation of content in the Social Web. Weller takes up the developments in these two areas and examines the opportunities and challenges that arise from combining the activities of the Semantic Web and the Social Web. The focus of the book lies primarily on the conceptual challenges that emerge in this context. The original vision of the Semantic Web aims at annotating all Web content with expressive, highly formalised ontologies. In the Social Web, by contrast, users create large amounts of data, which are frequently annotated with uncontrolled tags in folksonomies. Weller sees great potential for semantic indexing, an important precondition for retrieval on the Web, in such collaboratively created content and annotations. The main interest of the book is therefore to build a bridge between the knowledge representation methods of the Social Web and those of the Semantic Web. To pursue this question, the book is divided into three parts. . . .
    LCSH
    Knowledge representation (Information theory)
    Subject
    Knowledge representation (Information theory)
  2. Handbook on ontologies (2004) 0.05
    0.04904192 = 0.5 × (0.070815004 [representation, freq=4] + 1/3 × 0.08180651 [theory, freq=8])
    
    LCSH
    Knowledge representation (Information theory)
    Conceptual structures (Information theory)
    Subject
    Knowledge representation (Information theory)
    Conceptual structures (Information theory)
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.04
    0.042698573 = 0.5 × (1/3 × 0.13601439 [3a, freq=2] + 0.040059015 [representation, freq=2])
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning rather than its representation), which makes the results of a retrieval process of very limited use for the user's task at hand. Over the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query. This makes it necessary to include the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation is realised as the so-called Librarian Agent Query Refinement Process. To clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue in realising much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Padmavathi, T.; Krishnamurthy, M.: Semantic Web tools and techniques for knowledge organization : an overview (2017) 0.04
    0.041881282 = 0.5 × (0.07010327 [representation, freq=2] + 1/3 × 0.040977873 [29, freq=2])
    
    Abstract
    The enormous amount of information generated every day and spread across the Web is diverse in nature and far beyond human consumption. To overcome this difficulty, Tim Berners-Lee proposed transforming today's unstructured information into a structured form called the "Semantic Web", enabling computers to understand and interpret the information they store. The aim of the Semantic Web is the integration of heterogeneous and distributed data spread across the Web for knowledge discovery. The core Semantic Web technologies, including the knowledge representation languages RDF and OWL, ontology editors and reasoning tools, and ontology query languages such as SPARQL, are also discussed.
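    To make that stack concrete, here is a minimal, hypothetical sketch using Python's rdflib library (the resource, triple and query are invented for illustration): RDF stores a statement, SPARQL retrieves it.

     from rdflib import Graph

     # A tiny RDF graph in Turtle syntax (hypothetical book resource).
     g = Graph()
     g.parse(data="""
         @prefix dc: <http://purl.org/dc/elements/1.1/> .
         <http://example.org/book/1> dc:title "Semantic Web tools and techniques" .
     """, format="turtle")

     # Retrieve the stored statement with a SPARQL query.
     q = """
         PREFIX dc: <http://purl.org/dc/elements/1.1/>
         SELECT ?title WHERE { ?book dc:title ?title }
     """
     for row in g.query(q):
         print(row.title)   # -> Semantic Web tools and techniques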
    Date
    29. 9.2017 18:30:57
  5. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.04
    0.040285822 = 0.5 × (0.070815004 [representation, freq=4] + 1/3 × 0.029269911 [29, freq=2])
    
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
  6. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.04
    0.038224913 = 0.5 × (0.060088523 [representation, freq=2] + 1/3 × 0.049083903 [theory, freq=2])
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins as it were with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions; rather, it is simply to refrain from building those divisions explicitly into one's logic, leaving the user to introduce and enforce them axiomatically in an explicit metaphysical theory.
  7. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.04
    0.035898242 = 0.5 × (0.060088523 [representation, freq=2] + 1/3 × 0.035123892 [29, freq=2])
    
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch; rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting Wordnet, AAT and MeSH to RDF(S) and OWL.
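    A minimal sketch of such a conversion in Python with rdflib, emitting SKOS (the record below is an invented stand-in for a WordNet/AAT/MeSH-style source entry, and the paper's actual mapping rules to RDF(S) and OWL are considerably richer):

     from rdflib import Graph, Literal, Namespace
     from rdflib.namespace import RDF, SKOS

     EX = Namespace("http://example.org/thesaurus/")   # hypothetical namespace

     # A legacy thesaurus record: identifier, preferred term, broader terms.
     record = {"id": "T0421", "term": "Cognition", "broader": ["T0017"]}

     g = Graph()
     concept = EX[record["id"]]
     g.add((concept, RDF.type, SKOS.Concept))
     g.add((concept, SKOS.prefLabel, Literal(record["term"], lang="en")))
     for parent in record["broader"]:
         g.add((concept, SKOS.broader, EX[parent]))

     print(g.serialize(format="turtle"))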
    Date
    29. 7.2011 14:44:56
  8. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.04
    0.03584558 = 0.5 × (0.060088523 [representation, freq=2] + 1/3 × 0.034807928 [22, freq=2])
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
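    As a deliberately minimal, hypothetical sketch of the lexical first step such ontology mapping tools take (real matchers add structural and instance-based evidence on top of label overlap):

     def normalise(label):
         # Case-fold and collapse whitespace so trivially different labels align.
         return " ".join(label.lower().split())

     def lexical_matches(vocab_a, vocab_b):
         # Pair concept IDs from two vocabularies whose normalised labels coincide.
         index = {normalise(lbl): cid for cid, lbl in vocab_b.items()}
         return [(cid, index[normalise(lbl)])
                 for cid, lbl in vocab_a.items()
                 if normalise(lbl) in index]

     # Toy vocabularies standing in for the two collections' controlled terms.
     vocab_a = {"a:012": "Oil Paintings"}
     vocab_b = {"b:7741": "oil paintings"}
     print(lexical_matches(vocab_a, vocab_b))   # [('a:012', 'b:7741')]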
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  9. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.03
    0.032894447 = 0.5 × (0.035407502 [representation, freq=4] + 2/3 × (0.020451628 [theory, freq=2] + 0.025120461 [22, freq=6]))
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems, covering the concept of the semantic web and multimedia content analysis. A definition of fuzzy knowledge representation that can be used in multimedia content applications is provided, along with a comprehensive analysis. The second part of the book introduces multimedia content analysis approaches and applications, with examples of applicable methods. Multimedia content analysis is a very diverse field that touches many other research fields at once; this creates strong diversity issues, since everything from low-level features (e.g., colors, DCT coefficients, motion vectors) up to the very high, semantic level (e.g., objects, events, tracks) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences) and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are organized very logically. Because of the diversity of this research field, a few chapters of recent research results cannot cover the state of the art of multimedia; the editors should have written an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, surveying the state of the art of the field and introducing it to the reader.
  10. Legg, C.: Ontologies on the Semantic Web (2007) 0.03
    0.032224502 = 0.5 × (0.040059015 [representation, freq=2] + 1/3 × 0.07316997 [theory, freq=10])
    
    Abstract
    As an information technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The "Semantic Web" is touted by its developers as equally revolutionary, although it has not yet achieved anything like the Web's exponential uptake. It seeks to transcend a current limitation of the Web - that it largely requires indexing to be accomplished merely on specific character strings. Thus, a person searching for information about "turkey" (the bird) receives from current search engines many irrelevant pages about "Turkey" (the country) and nothing about the Spanish "pavo" even if he or she is a Spanish-speaker able to understand such pages. The Semantic Web vision is to develop technology to facilitate retrieval of information via meanings, not just spellings. For this to be possible, most commentators believe, Semantic Web applications will have to draw on some kind of shared, structured, machine-readable conceptual scheme. Thus, there has been a convergence between the Semantic Web research community and an older tradition with roots in classical Artificial Intelligence (AI) research (sometimes referred to as "knowledge representation") whose goal is to develop a formal ontology. A formal ontology is a machine-readable theory of the most fundamental concepts or "categories" required in order to understand information pertaining to any knowledge domain. A review of the attempts that have been made to realize this goal provides an opportunity to reflect in interestingly concrete ways on various research questions such as the following:
     - How explicit a machine-understandable theory of meaning is it possible or practical to construct?
     - How universal a machine-understandable theory of meaning is it possible or practical to construct?
     - How much (and what kind of) inference support is required to realize a machine-understandable theory of meaning?
     - What is it for a theory of meaning to be machine-understandable anyway?
  11. Kaminski, R.; Schaub, T.; Wanko, P.: ¬A tutorial on hybrid answer set solving with clingo (2017) 0.03
    0.031854097 = 0.5 × (0.050073773 [representation, freq=2] + 1/3 × 0.040903255 [theory, freq=2])
    
    Abstract
    Answer Set Programming (ASP) has become an established paradigm for Knowledge Representation and Reasoning, in particular when it comes to solving knowledge-intensive combinatorial (optimization) problems. ASP's unique pairing of a simple yet rich modeling language with highly performant solving technology has led to increasing interest in ASP in academia as well as industry. To further boost this development and make ASP fit for real-world applications, it is indispensable to equip it with means for easy integration into software environments and for adding complementary forms of reasoning. In this tutorial, we describe how both issues are addressed in the ASP system clingo. First, we outline features of clingo's application programming interface (API) that are essential for multi-shot ASP solving, a technique for dealing with continuously changing logic programs. This is illustrated by realizing two exemplary reasoning modes, namely branch-and-bound-based optimization and incremental ASP solving. We then switch to the design of the API for integrating complementary forms of reasoning and detail this in an extensive case study dealing with the integration of difference constraints. We show how the syntax of these constraints is added to the modeling language and seamlessly merged into the grounding process. We then develop in detail a corresponding theory propagator for difference constraints and present how it is integrated into clingo's solving process.
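    A minimal single-shot sketch of the API steps the tutorial builds on (add, ground, solve), using clingo's Python module; the toy graph-colouring program is invented, and the multi-shot and theory-propagator machinery described above goes well beyond this:

     import clingo   # the clingo Python module ships with the solver

     # Toy ASP program: colour a 3-node path so neighbours differ.
     program = """
         node(1..3). edge(1,2). edge(2,3).
         col(r;g;b).
         1 { color(N,C) : col(C) } 1 :- node(N).
         :- edge(X,Y), color(X,C), color(Y,C).
     """

     ctl = clingo.Control(["0"])      # "0" = enumerate all answer sets
     ctl.add("base", [], program)     # add the program to the 'base' part
     ctl.ground([("base", [])])       # ground it
     ctl.solve(on_model=lambda m: print("Answer set:", m))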
  12. McGuinness, D.L.: Ontologies come of age (2003) 0.03
    0.029915206 = 0.5 × (0.050073773 [representation, freq=2] + 1/3 × 0.029269911 [29, freq=2])
    
    Abstract
    Ontologies have moved beyond the domains of library science, philosophy, and knowledge representation. They are now the concerns of marketing departments, CEOs, and mainstream business. Research analyst companies such as Forrester Research report on the critical roles of ontologies in support of browsing and search for e-commerce and in support of interoperability for facilitation of knowledge management and configuration. One now sees ontologies used as central controlled vocabularies that are integrated into catalogues, databases, web publications, knowledge management applications, etc. Large ontologies are essential components in many online applications including search (such as Yahoo and Lycos), e-commerce (such as Amazon and eBay), configuration (such as Dell and PC-Order), etc. One also sees ontologies that have long life spans, sometimes in multiple projects (such as UMLS, SIC codes, etc.). Such diverse usage generates many implications for ontology environments. In this paper, we will discuss ontologies and requirements in their current instantiations on the web today. We will describe some desirable properties of ontologies. We will also discuss how both simple and complex ontologies are being and may be used to support varied applications. We will conclude with a discussion of emerging trends in ontologies and their environments and briefly mention our evolving ontology evolution environment.
    Date
    29. 3.1996 18:16:49
  13. Waltinger, U.; Mehler, A.; Lösch, M.; Horstmann, W.: Hierarchical classification of OAI metadata using the DDC taxonomy (2011) 0.03
    0.029915206 = 0.5 × (0.050073773 [representation, freq=2] + 1/3 × 0.029269911 [29, freq=2])
    
    Abstract
    In the area of digital library services, access to subject-specific metadata of scholarly publications is of utmost interest. One of the most prevalent approaches for metadata exchange is the XML-based Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). However, because of its loose requirements regarding metadata content, it specifies no strict standard for consistent subject indexing, which is nevertheless needed in the digital library domain. This contribution addresses the problem of automatically enhancing OAI metadata by means of one of the most widely used universal classification schemes in libraries, the Dewey Decimal Classification (DDC). More specifically, we automatically classify scientific documents according to the DDC taxonomy at three levels, using a machine learning-based classifier that relies solely on OAI metadata records as the document representation. The results show an asymmetric distribution of documents across the hierarchical structure of the DDC taxonomy and issues of data sparseness. Nevertheless, the performance of the classifier shows promising results on all three levels of the DDC.
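    A minimal flat sketch of the general approach in Python with scikit-learn - TF-IDF features over metadata text feeding a classifier per DDC level; the toy records and the concrete pipeline are illustrative assumptions, not the authors' setup:

     from sklearn.feature_extraction.text import TfidfVectorizer
     from sklearn.linear_model import LogisticRegression
     from sklearn.pipeline import make_pipeline

     # Toy OAI metadata texts with invented top-level DDC labels.
     records = [
         ("Deep learning for image recognition", "000"),
         ("Gothic cathedrals of France", "700"),
         ("Introduction to organic chemistry", "500"),
         ("Neural networks and pattern analysis", "000"),
         ("Impressionist painting techniques", "700"),
         ("Periodic table and chemical bonds", "500"),
     ]
     texts, labels = zip(*records)

     clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
     clf.fit(texts, labels)
     print(clf.predict(["Convolutional networks for vision"]))   # likely ['000']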
    Pages
    S.29-40
  14. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.03
    0.029871322 = 0.5 × (0.050073773 [representation, freq=2] + 1/3 × 0.02900661 [22, freq=2])
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  15. Engels, R.H.P.; Lech, T.Ch.: Generating ontologies for the Semantic Web : OntoBuilder (2004) 0.03
    0.025483275 = 0.5 × (0.040059015 [representation, freq=2] + 1/3 × 0.032722604 [theory, freq=2])
    
    Abstract
    Significant progress has been made in technologies for publishing and distributing knowledge and information on the web. However, much of the published information is not organized, and it is hard to find answers to questions that require more than a keyword search. In general, one can say that the web is organizing itself. Information is often published in relatively ad hoc fashion. Typically, concern about the presentation of content has been limited to purely layout issues. This, combined with the fact that the representation language used on the World Wide Web (HTML) is mainly format-oriented, makes publishing on the WWW easy, giving it an enormous expressiveness. People add private, educational or organizational content to the web that is of an immensely diverse nature. Content on the web is growing closer to a real universal knowledge base, with one problem relatively undefined: the problem of the interpretation of its contents. Although widely acknowledged for its general and universal advantages, the increasing popularity of the web also shows us some major drawbacks. The development of the information content on the web over the last years alone clearly indicates the need for some changes. Perhaps one of the most significant problems with the web as a distributed information system is the difficulty of finding and comparing information.
    Thus, there is a clear need for the web to become more semantic. The aim of introducing semantics into the web is to enhance the precision of search, but also to enable the use of logical reasoning on web contents in order to answer queries. The CORPORUM OntoBuilder toolset is developed specifically for this task. It consists of a set of applications that can fulfil a variety of tasks, either as stand-alone tools or augmenting each other. Important tasks that are dealt with by CORPORUM are related to document and information retrieval (finding relevant documents, or supporting the user in finding them), as well as information extraction (building a knowledge base from web documents to answer queries), information dissemination (summarizing strategies and information visualization), and automated document classification strategies. First versions of the toolset are encouraging in that they show large potential as a supportive technology for building up the Semantic Web. In this chapter, methods for transforming the current web into a semantic web are discussed, as well as a technical solution that can perform this task: the CORPORUM toolset. First, the toolset is introduced, followed by some pragmatic issues relating to the approach; then a short overview of the theory in relation to CognIT's vision is given; and finally, some of the applications that arose from the project are discussed.
  16. Chaudhury, S.; Mallik, A.; Ghosh, H.: Multimedia ontology : representation and applications (2016) 0.03
    0.025036886 = 0.25 × 0.100147545 [representation, freq=8]
    
    Abstract
    The book covers multimedia ontology in heritage preservation, with intellectual explorations of various themes of Indian cultural heritage. The result of more than 15 years of collective research, Multimedia Ontology: Representation and Applications provides a theoretical foundation for understanding the nature of media data and the principles involved in its interpretation. The book presents a unified approach to recent advances in multimedia and explains how a multimedia ontology can fill the semantic gap between concepts and the media world. It presents real-life examples of implementations in different domains to illustrate how this gap can be filled. The book contains information that helps with building semantic, content-based search and retrieval engines and also with developing vertical application-specific search applications. It guides you in designing multimedia tools that aid in the logical and conceptual organization of large amounts of multimedia data. As a practical demonstration, it showcases multimedia applications in cultural heritage preservation efforts and the creation of virtual museums. The book describes the limitations of existing ontology techniques in semantic multimedia data processing, as well as some open problems in the representation and applications of multimedia ontology. As an antidote, it introduces new ontology representation and reasoning schemes that overcome these limitations. The long-running, collective efforts reflected in Multimedia Ontology: Representation and Applications are a signpost for new achievements and developments in efficiency and accessibility in the field.
  17. Knowledge graphs : new directions for knowledge representation on the Semantic Web (2019) 0.02
    0.02168258 = 0.25 × 0.08673032 [representation, freq=6]
    
    Abstract
    The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence call for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped concept of "Knowledge Graphs" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provides a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises - while often inspired by it - limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: What are knowledge graphs? Which applications do we see emerge? Which open research questions still need to be addressed, and which technology gaps still need to be closed?
  18. Binding, C.; Gnoli, C.; Tudhope, D.: Migrating a complex classification scheme to the semantic web : expressing the Integrative Levels Classification using SKOS RDF (2021) 0.02
    0.02168258 = 0.25 × 0.08673032 [representation, freq=6]
    
    Abstract
    Purpose: The Integrative Levels Classification (ILC) is a comprehensive "freely faceted" knowledge organization system not previously expressed as SKOS (Simple Knowledge Organization System). This paper reports and reflects on work converting the ILC to SKOS representation.
     Design/methodology/approach: The design of the ILC representation and the various steps in the conversion to SKOS are described and located within the context of previous work considering the representation of complex classification schemes in SKOS. Various issues and trade-offs emerging from the conversion are discussed. The conversion implementation employed the STELETO transformation tool.
     Findings: The ILC conversion captures some of the ILC facet structure by a limited extension beyond the SKOS standard. SPARQL examples illustrate how this extension could be used to create faceted, compound descriptors when indexing or cataloguing. Basic query patterns are provided that might underpin search systems. Possible routes for reducing complexity are discussed.
     Originality/value: Complex classification schemes, such as the ILC, have features which are not straightforward to represent in SKOS and which extend beyond the functionality of the SKOS standard. The ILC's facet indicators are modelled as rdf:Property sub-hierarchies that accompany the SKOS RDF statements. The ILC's top-level fundamental facet relationships are modelled by extensions of the associative relationship - specialised sub-properties of skos:related. An approach for representing faceted compound descriptions in ILC and other faceted classification schemes is proposed.
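    A minimal rdflib sketch of the modelling pattern described in the findings, with invented URIs (the published ILC SKOS dataset defines its own):

     from rdflib import Graph, Namespace
     from rdflib.namespace import RDF, RDFS, SKOS

     ILC = Namespace("http://example.org/ilc/")   # hypothetical namespace

     g = Graph()
     # A facet indicator modelled as part of an rdf:Property sub-hierarchy...
     g.add((ILC.facet9, RDF.type, RDF.Property))
     g.add((ILC.facet9, RDFS.subPropertyOf, ILC.facet))
     # ...whose top-level facet relationship specialises skos:related.
     g.add((ILC.facet, RDFS.subPropertyOf, SKOS.related))
     # A faceted compound description linking two hypothetical ILC concepts.
     g.add((ILC.conceptA, ILC.facet9, ILC.conceptB))

     print(g.serialize(format="turtle"))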
  19. Rüther, M.; Fock, J.; Schultz-Krutisch, T.; Bandholtz, T.: Classification and reference vocabulary in linked environment data (2011) 0.02
    0.0212445 = 0.25 × 0.084978 [representation, freq=4]
    
    Abstract
    The Federal Environment Agency (UBA), Germany, has a long tradition in knowledge organization, using a library along with many Web-based information systems. The backbone of this information space is a classification system enhanced by a reference vocabulary which consists of a thesaurus, a gazetteer and a chronicle. Over the years, classification has increasingly been relegated to the background in favour of reference vocabulary indexing and full-text search. Bibliographic items are no longer classified directly but tagged with thesaurus terms, with those terms being classified. Since 2010 we have been developing a linked data representation of this knowledge base. While we are linking bibliographic and observation data with the controlled vocabulary in a Resource Description Framework (RDF) representation, the classification may be revisited as a powerful organization system by inference. This also raises questions about the quality and feasibility of an unambiguous classification of thesaurus terms.
  20. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.02
    0.020940641 = 0.5 × (0.035051636 [representation, freq=2] + 1/3 × 0.020488936 [29, freq=2])
    
    Abstract
    As the World Wide Web continues to expand, it becomes increasingly difficult for users to obtain information efficiently. Because most search engines read format languages such as HTML or SGML, search results reflect formatting tags more than actual page content, which is expressed in natural language. Spinning the Semantic Web describes an exciting new type of hierarchy and standardization that will replace the current "Web of links" with a "Web of meaning." Using a flexible set of languages and tools, the Semantic Web will make all available information - display elements, metadata, services, images, and especially content - accessible. The result will be an immense repository of information accessible for a wide range of new applications. This first handbook for the Semantic Web covers, among other topics, software agents that can negotiate and collect information, markup languages that can tag many more types of information in a document, and knowledge systems that enable machines to read Web pages and determine their reliability. The truly interdisciplinary Semantic Web combines aspects of artificial intelligence, markup languages, natural language processing, information retrieval, knowledge representation, intelligent agents, and databases.
    Date
    29. 3.1996 18:16:49

Languages

  • e 98
  • d 15

Types

  • a 68
  • el 26
  • m 26
  • s 14
  • n 2
  • r 1
  • x 1
