Search (107 results, page 1 of 6)

  • theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.06
    0.0581154 = product of:
      0.20340389 = sum of:
        0.019984502 = product of:
          0.07993801 = sum of:
            0.07993801 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.07993801 = score(doc=701,freq=2.0), product of:
                0.21335082 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.025165197 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.25 = coord(1/4)
        0.07993801 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.07993801 = score(doc=701,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.07993801 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.07993801 = score(doc=701,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.023543375 = weight(_text_:representation in 701) [ClassicSimilarity], result of:
          0.023543375 = score(doc=701,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.20333713 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.2857143 = coord(4/14)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, not just its representation), which makes retrieval results of very limited use for the user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user who is unfamiliar with the underlying repository and/or query syntax only approximates his information need in a query. The user therefore has to be involved more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from query evaluation into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation is realised as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies show that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is a key issue in realising much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
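    The score breakdown displayed above for this record can be recombined from the figures it shows. The following minimal Python sketch (variable names are ours; only the numeric values and the ClassicSimilarity structure come from the listing) multiplies each clause's queryWeight (idf x queryNorm) by its fieldWeight (tf x idf x fieldNorm), applies the inner coord(1/4) to the first clause, sums the four matching clauses and scales the sum by coord(4/14):
      import math

      # Figures taken from the explain tree for doc 701 above.
      idf_rare, idf_repr = 8.478011, 4.600994     # idf for the "3a"/"2f" terms and for "representation"
      query_norm = 0.025165197                    # queryNorm shown in every clause
      tf = math.sqrt(2.0)                         # tf(freq=2.0) = 1.4142135
      field_norm = 0.03125                        # fieldNorm(doc=701)

      def clause(idf):
          query_weight = idf * query_norm         # e.g. 8.478011 * 0.025165197 = 0.21335082
          field_weight = tf * idf * field_norm    # e.g. 1.4142135 * 8.478011 * 0.03125 = 0.3746787
          return query_weight * field_weight      # e.g. 0.07993801

      total = clause(idf_rare) * 0.25             # "_text_:3a" clause, scaled by coord(1/4)
      total += 2 * clause(idf_rare)               # the two "_text_:2f" clauses
      total += clause(idf_repr)                   # "_text_:representation" clause
      total *= 4 / 14                             # coord(4/14): 4 of 14 query clauses matched
      print(round(total, 7))                      # ~0.0581154, the score shown for this record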
  2. Padmavathi, T.; Krishnamurthy, M.: Semantic Web tools and techniques for knowledge organization : an overview (2017) 0.01
    0.0070326724 = product of:
      0.049228705 = sum of:
        0.041200902 = weight(_text_:representation in 3618) [ClassicSimilarity], result of:
          0.041200902 = score(doc=3618,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35583997 = fieldWeight in 3618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3618)
        0.008027801 = product of:
          0.024083402 = sum of:
            0.024083402 = weight(_text_:29 in 3618) [ClassicSimilarity], result of:
              0.024083402 = score(doc=3618,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.27205724 = fieldWeight in 3618, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3618)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    The enormous amount of information generated every day and spread across the web is too diverse in nature to be consumed by humans alone. To overcome this difficulty, Tim Berners-Lee proposed transforming today's unstructured information into a structured form, the "Semantic Web", so that computers can understand and interpret the information they store. The aim of the Semantic Web is the integration of heterogeneous and distributed data spread across the web for knowledge discovery. The core Semantic Web technologies, including the knowledge representation languages RDF and OWL, ontology editors and reasoning tools, and ontology query languages such as SPARQL, are also discussed.
    Date
    29. 9.2017 18:30:57
  3. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.01
    0.0067647635 = product of:
      0.047353342 = sum of:
        0.041619197 = weight(_text_:representation in 2192) [ClassicSimilarity], result of:
          0.041619197 = score(doc=2192,freq=4.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35945266 = fieldWeight in 2192, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2192)
        0.005734144 = product of:
          0.017202431 = sum of:
            0.017202431 = weight(_text_:29 in 2192) [ClassicSimilarity], result of:
              0.017202431 = score(doc=2192,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.19432661 = fieldWeight in 2192, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
  4. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.01
    0.006028005 = product of:
      0.04219603 = sum of:
        0.03531506 = weight(_text_:representation in 4640) [ClassicSimilarity], result of:
          0.03531506 = score(doc=4640,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.3050057 = fieldWeight in 4640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=4640)
        0.006880972 = product of:
          0.020642916 = sum of:
            0.020642916 = weight(_text_:29 in 4640) [ClassicSimilarity], result of:
              0.020642916 = score(doc=4640,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.23319192 = fieldWeight in 4640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4640)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting Wordnet, AAT and MeSH to RDF(S) and OWL.
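    As a rough illustration of the kind of porting the abstract describes, the sketch below (our own example, not the authors' code; it uses the Python rdflib library and SKOS for brevity rather than the RDF(S)/OWL schemas developed in the paper, and the namespace and identifiers are invented) turns a single thesaurus-style entry into Semantic Web triples:
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      MESH = Namespace("http://example.org/mesh/")   # hypothetical namespace, for illustration only

      g = Graph()
      g.bind("skos", SKOS)
      g.bind("mesh", MESH)

      # A thesaurus record (preferred term, synonym, broader term) becomes a skos:Concept.
      g.add((MESH.D009369, RDF.type, SKOS.Concept))
      g.add((MESH.D009369, SKOS.prefLabel, Literal("Neoplasms", lang="en")))
      g.add((MESH.D009369, SKOS.altLabel, Literal("Tumors", lang="en")))
      g.add((MESH.D009369, SKOS.broader, MESH.C04))

      print(g.serialize(format="turtle"))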
    Date
    29. 7.2011 14:44:56
  5. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.01
    0.006019162 = product of:
      0.042134132 = sum of:
        0.03531506 = weight(_text_:representation in 2418) [ClassicSimilarity], result of:
          0.03531506 = score(doc=2418,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.3050057 = fieldWeight in 2418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
        0.006819073 = product of:
          0.02045722 = sum of:
            0.02045722 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
              0.02045722 = score(doc=2418,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.23214069 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2418)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  6. McGuinness, D.L.: Ontologies come of age (2003) 0.01
    0.005023338 = product of:
      0.035163365 = sum of:
        0.02942922 = weight(_text_:representation in 3084) [ClassicSimilarity], result of:
          0.02942922 = score(doc=3084,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.25417143 = fieldWeight in 3084, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3084)
        0.005734144 = product of:
          0.017202431 = sum of:
            0.017202431 = weight(_text_:29 in 3084) [ClassicSimilarity], result of:
              0.017202431 = score(doc=3084,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.19432661 = fieldWeight in 3084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3084)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    Ontologies have moved beyond the domains of library science, philosophy, and knowledge representation. They are now the concerns of marketing departments, CEOs, and mainstream business. Research analyst companies such as Forrester Research report on the critical roles of ontologies in support of browsing and search for e-commerce and in support of interoperability for facilitation of knowledge management and configuration. One now sees ontologies used as central controlled vocabularies that are integrated into catalogues, databases, web publications, knowledge management applications, etc. Large ontologies are essential components in many online applications including search (such as Yahoo and Lycos), e-commerce (such as Amazon and eBay), configuration (such as Dell and PC-Order), etc. One also sees ontologies that have long life spans, sometimes in multiple projects (such as UMLS, SIC codes, etc.). Such diverse usage generates many implications for ontology environments. In this paper, we will discuss ontologies and requirements in their current instantiations on the web today. We will describe some desirable properties of ontologies. We will also discuss how both simple and complex ontologies are being and may be used to support varied applications. We will conclude with a discussion of emerging trends in ontologies and their environments and briefly mention our evolving ontology evolution environment.
    Date
    29. 3.1996 18:16:49
  7. Waltinger, U.; Mehler, A.; Lösch, M.; Horstmann, W.: Hierarchical classification of OAI metadata using the DDC taxonomy (2011) 0.01
    0.005023338 = product of:
      0.035163365 = sum of:
        0.02942922 = weight(_text_:representation in 4841) [ClassicSimilarity], result of:
          0.02942922 = score(doc=4841,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.25417143 = fieldWeight in 4841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4841)
        0.005734144 = product of:
          0.017202431 = sum of:
            0.017202431 = weight(_text_:29 in 4841) [ClassicSimilarity], result of:
              0.017202431 = score(doc=4841,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.19432661 = fieldWeight in 4841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4841)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    In the area of digital library services, access to subject-specific metadata of scholarly publications is of utmost interest. One of the most prevalent approaches for metadata exchange is the XML-based Open Archives Initiative (OAI) Protocol for Metadata Harvesting (OAI-PMH). However, because of its loose requirements regarding metadata content, it specifies no strict standard for consistent subject indexing, which the digital library domain nevertheless needs. This contribution addresses the problem of automatically enhancing OAI metadata by means of one of the most widely used universal classification schemes in libraries, the Dewey Decimal Classification (DDC). More specifically, we automatically classify scientific documents according to the DDC taxonomy down to three levels, using a machine learning-based classifier that relies solely on OAI metadata records as the document representation. The results show an asymmetric distribution of documents across the hierarchical structure of the DDC taxonomy and issues of data sparseness. Nevertheless, the classifier shows promising performance on all three levels of the DDC.
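    The abstract does not spell out the classifier setup, so the following is only a hedged sketch of the general idea (flat text features from harvested metadata, one classifier per DDC level), written with scikit-learn and invented toy records rather than the authors' actual pipeline or data:
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy OAI-style metadata records (title + abstract text) with invented top-level DDC labels.
      records = [
          "Ontology-based information retrieval for digital libraries",
          "Finite element methods for structural engineering analysis",
          "A history of medieval European trade routes",
      ]
      ddc_top_level = ["000", "600", "900"]

      # Baseline for one hierarchy level: TF-IDF features plus a linear classifier;
      # a hierarchical setup would train one such model per DDC level.
      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
      model.fit(records, ddc_top_level)
      print(model.predict(["Semantic web metadata harvesting with OAI-PMH"]))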
    Pages
    pp.29-40
  8. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.005015969 = product of:
      0.03511178 = sum of:
        0.02942922 = weight(_text_:representation in 4553) [ClassicSimilarity], result of:
          0.02942922 = score(doc=4553,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.25417143 = fieldWeight in 4553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.0056825615 = product of:
          0.017047685 = sum of:
            0.017047685 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.017047685 = score(doc=4553,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    Semantic Web knowledge representation standards, in particular RDF and OWL, often come endowed with a formal semantics that is considered to be of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on deductive methods and algorithms which can be proven to be sound, complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of available Semantic Web data and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  9. Weller, K.: Knowledge representation in the Social Semantic Web (2010) 0.00
    0.004414383 = product of:
      0.061801355 = sum of:
        0.061801355 = weight(_text_:representation in 4515) [ClassicSimilarity], result of:
          0.061801355 = score(doc=4515,freq=18.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.53375995 = fieldWeight in 4515, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4515)
      0.071428575 = coord(1/14)
    
    Abstract
    The main purpose of this book is to sum up the vital and highly topical research issue of knowledge representation on the Web and to discuss novel solutions that combine the benefits of folksonomies and Web 2.0 approaches with ontologies and semantic technologies. The book provides an overview of past, present and future knowledge representation approaches, an introduction to ontologies and Web indexing, and, above all, novel approaches to developing ontologies. It combines aspects of knowledge representation for both the Semantic Web (ontologies) and Web 2.0 (folksonomies); currently no other monograph provides a combined overview of these topics. The focus is on using knowledge representation methods for document indexing purposes. To this end, considerations from classical librarian interests in knowledge representation (thesauri, classification schemes, etc.) are included, which are not part of most other books, whose background is more strongly rooted in computer science.
    Footnote
    Rez. in: iwp 62(2011) H.4, S.205-206 (C. Carstens): "What kinds of knowledge representation exist on the Web, how pronounced are semantic structures in this context, and how can social activities in the sense of Web 2.0 contribute to structuring knowledge on the Web? Weller's book, Knowledge Representation in the Social Semantic Web, is devoted to these questions. The term Social Semantic Web alludes on the one hand to the semantic structuring of data in the sense of the Semantic Web, and on the other hand points to the increasingly collaborative creation of content in the Social Web. Weller takes up the developments in these two areas and examines the opportunities and challenges that arise from combining the activities of the Semantic Web and the Social Web. The focus of the book lies primarily on the conceptual challenges that emerge in this context. The original vision of the Semantic Web aims at annotating all Web content with expressive, highly formalised ontologies. In the Social Web, by contrast, large amounts of data are created by users and frequently annotated with uncontrolled tags in folksonomies. Weller sees great potential for semantic indexing, an important prerequisite for retrieval on the Web, in such collaboratively created content and annotations. The main interest of the book is therefore to build a bridge between the knowledge representation methods of the Social Web and of the Semantic Web. To pursue this question, the book is organised in three parts. . . .
    LCSH
    Knowledge representation (Information theory)
    Subject
    Knowledge representation (Information theory)
  10. Chaudhury, S.; Mallik, A.; Ghosh, H.: Multimedia ontology : representation and applications (2016) 0.00
    0.0042041745 = product of:
      0.05885844 = sum of:
        0.05885844 = weight(_text_:representation in 2801) [ClassicSimilarity], result of:
          0.05885844 = score(doc=2801,freq=8.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.50834286 = fieldWeight in 2801, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2801)
      0.071428575 = coord(1/14)
    
    Abstract
    The book covers multimedia ontology in heritage preservation with intellectual explorations of various themes of Indian cultural heritage. The result of more than 15 years of collective research, Multimedia Ontology: Representation and Applications provides a theoretical foundation for understanding the nature of media data and the principles involved in its interpretation. The book presents a unified approach to recent advances in multimedia and explains how a multimedia ontology can fill the semantic gap between concepts and the media world. It relays real-life examples of implementations in different domains to illustrate how this gap can be filled. The book contains information that helps with building semantic, content-based search and retrieval engines and also with developing vertical application-specific search applications. It guides you in designing multimedia tools that aid in logical and conceptual organization of large amounts of multimedia data. As a practical demonstration, it showcases multimedia applications in cultural heritage preservation efforts and the creation of virtual museums. The book describes the limitations of existing ontology techniques in semantic multimedia data processing, as well as some open problems in the representations and applications of multimedia ontology. As an antidote, it introduces new ontology representation and reasoning schemes that overcome these limitations. The long, compiled efforts reflected in Multimedia Ontology: Representation and Applications are a signpost for new achievements and developments in efficiency and accessibility in the field.
  11. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.00
    0.0036758345 = product of:
      0.02573084 = sum of:
        0.020809598 = weight(_text_:representation in 150) [ClassicSimilarity], result of:
          0.020809598 = score(doc=150,freq=4.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.17972633 = fieldWeight in 150, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.0049212426 = product of:
          0.014763728 = sum of:
            0.014763728 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
              0.014763728 = score(doc=150,freq=6.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.16753313 = fieldWeight in 150, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems, which introduces the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used in multimedia content applications is provided, together with a comprehensive analysis. The second part of the book introduces multimedia content analysis approaches and applications, and presents some examples of methods applicable to multimedia content analysis. Multimedia content analysis is a very diverse field that touches many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors) up to the very high, semantic level (e.g., objects, events, tracks) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences) and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, however, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
  12. Knowledge graphs : new directions for knowledge representation on the Semantic Web (2019) 0.00
    0.0036409218 = product of:
      0.0509729 = sum of:
        0.0509729 = weight(_text_:representation in 51) [ClassicSimilarity], result of:
          0.0509729 = score(doc=51,freq=6.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.44023782 = fieldWeight in 51, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=51)
      0.071428575 = coord(1/14)
    
    Abstract
    The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence call for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped concept of "Knowledge Graphs" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provides a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises - while often inspired by - limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: what are knowledge graphs? Which applications do we see to emerge? Which open research questions still need be addressed and which technology gaps still need to be closed?
  13. Binding, C.; Gnoli, C.; Tudhope, D.: Migrating a complex classification scheme to the semantic web : expressing the Integrative Levels Classification using SKOS RDF (2021) 0.00
    0.0036409218 = product of:
      0.0509729 = sum of:
        0.0509729 = weight(_text_:representation in 600) [ClassicSimilarity], result of:
          0.0509729 = score(doc=600,freq=6.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.44023782 = fieldWeight in 600, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=600)
      0.071428575 = coord(1/14)
    
    Abstract
    Purpose - The Integrative Levels Classification (ILC) is a comprehensive "freely faceted" knowledge organization system not previously expressed as SKOS (Simple Knowledge Organization System). This paper reports and reflects on work converting the ILC to a SKOS representation. Design/methodology/approach - The design of the ILC representation and the various steps in the conversion to SKOS are described and located within the context of previous work on representing complex classification schemes in SKOS. Various issues and trade-offs emerging from the conversion are discussed. The conversion implementation employed the STELETO transformation tool. Findings - The ILC conversion captures some of the ILC facet structure by a limited extension beyond the SKOS standard. SPARQL examples illustrate how this extension could be used to create faceted, compound descriptors when indexing or cataloguing. Basic query patterns are provided that might underpin search systems. Possible routes for reducing complexity are discussed. Originality/value - Complex classification schemes, such as the ILC, have features which are not straightforward to represent in SKOS and which extend beyond the functionality of the SKOS standard. The ILC's facet indicators are modelled as rdf:Property sub-hierarchies that accompany the SKOS RDF statements. The ILC's top-level fundamental facet relationships are modelled by extensions of the associative relationship - specialised sub-properties of skos:related. An approach for representing faceted compound descriptions in ILC and other faceted classification schemes is proposed.
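    A minimal sketch of the modelling pattern described above (facet indicators as property sub-hierarchies under skos:related), using Python and rdflib with an invented namespace, invented concept identifiers and an invented facet name rather than the actual ILC data:
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS, SKOS

      ILC = Namespace("http://example.org/ilc/")     # hypothetical namespace, for illustration only
      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ilc", ILC)

      # A facet indicator modelled as a property specialising skos:related.
      g.add((ILC.facetAgent, RDF.type, RDF.Property))
      g.add((ILC.facetAgent, RDFS.subPropertyOf, SKOS.related))

      # Two concepts linked through that facet indicator form a compound description.
      g.add((ILC.c1, RDF.type, SKOS.Concept))
      g.add((ILC.c1, SKOS.prefLabel, Literal("crops", lang="en")))
      g.add((ILC.c2, RDF.type, SKOS.Concept))
      g.add((ILC.c2, SKOS.prefLabel, Literal("irrigation", lang="en")))
      g.add((ILC.c1, ILC.facetAgent, ILC.c2))

      # A basic query pattern: concept pairs linked by any sub-property of skos:related.
      q = """
          SELECT ?a ?p ?b WHERE {
              ?p rdfs:subPropertyOf skos:related .
              ?a ?p ?b .
          }
      """
      for row in g.query(q, initNs={"rdfs": RDFS, "skos": SKOS}):
          print(row.a, row.p, row.b)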
  14. Rüther, M.; Fock, J.; Schultz-Krutisch, T.; Bandholtz, T.: Classification and reference vocabulary in linked environment data (2011) 0.00
    0.0035673599 = product of:
      0.049943037 = sum of:
        0.049943037 = weight(_text_:representation in 4816) [ClassicSimilarity], result of:
          0.049943037 = score(doc=4816,freq=4.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.4313432 = fieldWeight in 4816, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=4816)
      0.071428575 = coord(1/14)
    
    Abstract
    The Federal Environment Agency (UBA), Germany, has a long tradition in knowledge organization, using a library along with many Web-based information systems. The backbone of this information space is a classification system enhanced by a reference vocabulary which consists of a thesaurus, a gazetteer and a chronicle. Over the years, classification has increasingly been relegated to the background compared with the reference vocabulary indexing and full-text search. Bibliographic items are no longer classified directly but tagged with thesaurus terms, with those terms being classified. Since 2010 we have been developing a linked data representation of this knowledge base. While we are linking bibliographic and observation data with the controlled vocabulary in a Resource Description Framework (RDF) representation, the classification may be revisited as a powerful organization system by inference. This also raises questions about the quality and feasibility of an unambiguous classification of thesaurus terms.
  15. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.00
    0.0035163362 = product of:
      0.024614353 = sum of:
        0.020600451 = weight(_text_:representation in 1981) [ClassicSimilarity], result of:
          0.020600451 = score(doc=1981,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.17791998 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
        0.0040139006 = product of:
          0.012041701 = sum of:
            0.012041701 = weight(_text_:29 in 1981) [ClassicSimilarity], result of:
              0.012041701 = score(doc=1981,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.13602862 = fieldWeight in 1981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1981)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    As the World Wide Web continues to expand, it becomes increasingly difficult for users to obtain information efficiently. Because most search engines read format languages such as HTML or SGML, search results reflect formatting tags more than actual page content, which is expressed in natural language. Spinning the Semantic Web describes an exciting new type of hierarchy and standardization that will replace the current "Web of links" with a "Web of meaning." Using a flexible set of languages and tools, the Semantic Web will make all available information - display elements, metadata, services, images, and especially content - accessible. The result will be an immense repository of information accessible for a wide range of new applications. This first handbook for the Semantic Web covers, among other topics, software agents that can negotiate and collect information, markup languages that can tag many more types of information in a document, and knowledge systems that enable machines to read Web pages and determine their reliability. The truly interdisciplinary Semantic Web combines aspects of artificial intelligence, markup languages, natural language processing, information retrieval, knowledge representation, intelligent agents, and databases.
    Date
    29. 3.1996 18:16:49
  16. Prasad, A.R.D.; Madalli, D.P.: Faceted infrastructure for semantic digital libraries (2008) 0.00
    0.0029727998 = product of:
      0.041619197 = sum of:
        0.041619197 = weight(_text_:representation in 1905) [ClassicSimilarity], result of:
          0.041619197 = score(doc=1905,freq=4.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35945266 = fieldWeight in 1905, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1905)
      0.071428575 = coord(1/14)
    
    Abstract
    Purpose - The paper aims to argue that digital library retrieval should be based on semantic representations and proposes a semantic infrastructure for digital libraries. Design/methodology/approach - The approach taken is a formal model based on subject representation for digital libraries. Findings - Search engines and search techniques have fallen short of user expectations as they do not give context-based retrieval. Deploying semantic web technologies would lead to more efficient and more precise representation of digital library content and hence better retrieval. Though digital libraries often have metadata of information resources which can be accessed through OAI-PMH, much remains to be accomplished in making digital libraries semantic web compliant. This paper presents a semantic infrastructure for digital libraries that will go a long way towards providing them, and web-based information services, with products highly customised to users' needs. Research limitations/implications - Only a model for semantic infrastructure is proposed here. This model is proposed after studying current user-centric, top-down models adopted in digital library service architectures. Originality/value - This paper gives a generic model for building semantic infrastructure for digital libraries. Faceted ontologies for digital libraries are just one approach, but the same model may be adopted by groups working with different approaches to building ontologies to realise efficient retrieval in digital libraries.
  17. Handbook on ontologies (2004) 0.00
    0.0029727998 = product of:
      0.041619197 = sum of:
        0.041619197 = weight(_text_:representation in 1952) [ClassicSimilarity], result of:
          0.041619197 = score(doc=1952,freq=4.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35945266 = fieldWeight in 1952, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1952)
      0.071428575 = coord(1/14)
    
    LCSH
    Knowledge representation (Information theory)
    Subject
    Knowledge representation (Information theory)
  18. Carbonaro, A.; Santandrea, L.: A general Semantic Web approach for data analysis on graduates statistics 0.00
    0.0029727998 = product of:
      0.041619197 = sum of:
        0.041619197 = weight(_text_:representation in 5309) [ClassicSimilarity], result of:
          0.041619197 = score(doc=5309,freq=4.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35945266 = fieldWeight in 5309, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5309)
      0.071428575 = coord(1/14)
    
    Abstract
    Currently, several datasets released in Linked Open Data format are available at national and international level, but the lack of shared strategies for defining concepts in the statistical publishing community makes it difficult to compare facts drawn from different data sources. In order to guarantee a shared representation framework for the dissemination of statistical concepts about graduates, we developed SW4AL, an ontology-based system for the graduates' surveys domain. The system transforms low-level data into an enriched information model and is based on the AlmaLaurea surveys, which cover more than 90% of Italian graduates. SW4AL: i) semantically describes the different characteristics of the graduates; ii) promotes the structured definition of the AlmaLaurea data and their subsequent publication as Linked Open Data; iii) provides for their reuse in the open data context; iv) enables logical reasoning over the knowledge representation. SW4AL establishes a common semantics for the graduates' surveys domain by proposing a SPARQL endpoint and a Web-based interface for querying and visualising the structured data.
  19. Slimani, T.: Semantic annotation : the mainstay of Semantic Web (2013) 0.00
    0.0029429218 = product of:
      0.041200902 = sum of:
        0.041200902 = weight(_text_:representation in 4307) [ClassicSimilarity], result of:
          0.041200902 = score(doc=4307,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35583997 = fieldWeight in 4307, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4307)
      0.071428575 = coord(1/14)
    
    Abstract
    Given that realizing the Semantic Web depends on a critical mass of accessible metadata and on representing data with formal knowledge, it requires the generation of metadata that is specific, easy to understand and well-defined. Semantic annotation of web documents is the key way to make the Semantic Web vision a reality. This paper introduces the Semantic Web and its vision (stack layers), together with some concept definitions that help in understanding semantic annotation. Additionally, it introduces semantic annotation categories, tools, domains and models.
  20. Gibbins, N.; Shadbolt, N.: Resource Description Framework (RDF) (2009) 0.00
    0.0029429218 = product of:
      0.041200902 = sum of:
        0.041200902 = weight(_text_:representation in 4695) [ClassicSimilarity], result of:
          0.041200902 = score(doc=4695,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.35583997 = fieldWeight in 4695, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4695)
      0.071428575 = coord(1/14)
    
    Abstract
    The Resource Description Framework (RDF) is the standard knowledge representation language for the Semantic Web, an evolution of the World Wide Web that aims to provide a well-founded infrastructure for publishing, sharing and querying structured data. This entry provides an introduction to RDF and its related vocabulary definition language RDF Schema, and explains its relationship with the OWL Web Ontology Language. Finally, it provides an overview of the historical development of RDF and related languages for Web metadata.
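    For readers unfamiliar with the two languages this entry mentions, the short sketch below (our own toy vocabulary, not taken from the entry, written with the Python rdflib library) shows RDF Schema declaring a small class hierarchy and a property, and RDF describing one resource with that vocabulary:
      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/vocab/")    # invented vocabulary, for illustration only
      g = Graph()
      g.bind("ex", EX)

      # RDF Schema: a class hierarchy and a property with domain and range.
      g.add((EX.Document, RDF.type, RDFS.Class))
      g.add((EX.Thesis, RDF.type, RDFS.Class))
      g.add((EX.Thesis, RDFS.subClassOf, EX.Document))
      g.add((EX.author, RDF.type, RDF.Property))
      g.add((EX.author, RDFS.domain, EX.Document))
      g.add((EX.author, RDFS.range, RDFS.Literal))

      # RDF: one resource described with that vocabulary.
      doc = URIRef("http://example.org/doc/1")
      g.add((doc, RDF.type, EX.Thesis))
      g.add((doc, EX.author, Literal("Stojanovic, N.")))

      print(g.serialize(format="turtle"))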

Languages

  • e 92
  • d 15

Types

  • a 64
  • el 26
  • m 24
  • s 13
  • n 2
  • r 1
  • x 1
