Search (274 results, page 14 of 14)

  • theme_ss:"Semantic Web"
  1. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.00
    
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions that contain heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for their success is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can answer a query for objects described using C by returning all the objects that were indexed against D. We thus gain access to other collections using a single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed that provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced a lot of these alignment tools. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. Throughout our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using either the description vocabulary of the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the case for using unified representations of the vocabularies' semantic and lexical information. In addition to easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing applications that are more generic, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
    Content
    Presentation given at the 'UDC Seminar: Information Access for the Global Community, The Hague, 4-5 June 2007'
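    The abstract above describes how a mapping between a concept C in vocabulary V and a concept D in vocabulary W lets a search engine answer a C-query with objects indexed against D. The following is only a minimal, hypothetical sketch of that idea; the concept identifiers, mapping table and indexes are invented for illustration and are not taken from the STITCH browser.

```python
# Hypothetical cross-vocabulary retrieval: objects indexed against concepts
# from two vocabularies, plus an alignment mapping between the vocabularies.

# Objects indexed against concepts of vocabulary W (Iconclass-style codes, invented).
index_w = {
    "iconclass:71H7131": ["manuscript-017", "manuscript-102"],
}

# Objects indexed against concepts of vocabulary V (Mandragore-style terms, invented).
index_v = {
    "mandragore:bethsabee": ["manuscript-033"],
}

# Alignment: concepts of V judged semantically equivalent to concepts of W.
equivalences = {
    "mandragore:bethsabee": ["iconclass:71H7131"],
}

def search(concept_v):
    """Return objects indexed against concept_v or against any concept mapped to it."""
    results = list(index_v.get(concept_v, []))
    for concept_w in equivalences.get(concept_v, []):
        results.extend(index_w.get(concept_w, []))
    return results

print(search("mandragore:bethsabee"))
# ['manuscript-033', 'manuscript-017', 'manuscript-102']
```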
  2. Heery, R.; Wagner, H.: ¬A metadata registry for the Semantic Web (2002) 0.00
    
    Abstract
    * Agencies maintaining directories of data elements in a domain area in accordance with ISO/IEC 11179. (This standard specifies good practice for data element definition as well as the registration process. Example implementations are the National Health Information Knowledgebase hosted by the Australian Institute of Health and Welfare and the Environmental Data Registry hosted by the US Environmental Protection Agency.)
    * The xml.org directory of Extensible Markup Language (XML) document specifications, facilitating re-use of Document Type Definitions (DTDs), hosted by the Organization for the Advancement of Structured Information Standards (OASIS);
    * The MetaForm database of Dublin Core usage and mappings maintained at the State and University Library in Goettingen;
    * The Semantic Web Agreement Group Dictionary, a database of terms for the Semantic Web that can be referred to by humans and software agents;
    * LEXML, a multi-lingual and multi-jurisdictional RDF Dictionary for the legal world;
    * The SCHEMAS registry maintained by the European Commission funded SCHEMAS project, which indexes several metadata element sets as well as a large number of activity reports describing metadata related activities and initiatives.
    Metadata registries essentially provide an index of terms. Given the distributed nature of the Web, there are a number of ways this can be accomplished. For example, the registry could link to terms and definitions in schemas published by implementers and stored locally by the schema maintainer. Alternatively, the registry might harvest various metadata schemas from their maintainers. Registries provide 'added value' to users by indexing schemas relevant to a particular 'domain' or 'community of use' and by simplifying the navigation of terms by enabling multiple schemas to be accessed from one view. An important benefit of this approach is an increase in the reuse of existing terms, rather than users having to reinvent them. Merging schemas into one view leads to harmonization between applications and helps avoid duplication of effort. Additionally, the establishment of registries to index terms actively being used in local implementations facilitates the metadata standards activity by providing implementation experience transferable to the standards-making process.
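    A minimal sketch of the registry idea described in the abstract above: terms harvested from several schemas are merged into one searchable view. All schema names, terms and definitions below are invented for illustration; this is not the registry software the article describes.

```python
# Hypothetical metadata-term registry: merge terms from several schemas
# into one index so that a single lookup spans all registered schemas.

registry = {}  # term label -> list of (schema, definition) entries

def register_schema(schema_name, terms):
    """Add every term of a harvested schema to the merged registry view."""
    for label, definition in terms.items():
        registry.setdefault(label, []).append((schema_name, definition))

register_schema("dc-like", {"creator": "An entity primarily responsible for making the resource."})
register_schema("marc-like", {"creator": "Person or body chiefly responsible for the intellectual content."})

def lookup(label):
    """Return all registered definitions of a term, across schemas."""
    return registry.get(label, [])

for schema, definition in lookup("creator"):
    print(f"{schema}: {definition}")
```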
  3. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.00
    
    Footnote
    Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploitation of this machine-processable metadata. To fulfill this, it provides some meta languages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts will still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes the brief description of the underpinning technologies, including metadata, ontology, logic, and agents. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The next chapter introduces the Resource Description Framework (RDF) and RDF Schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. RDF Schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e. RQL, is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, was thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes it much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and non-monotonic rules, respectively. To get the most out of the chapter, readers should gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which the Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give us a real feeling for the Semantic Web.
    Series
    Cooperative information systems
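    Since the review above centres on RDF as a standard data model for machine-processable semantics, here is a small, hedged illustration using the Python rdflib library (assuming it is installed). The tiny graph and book resource are invented for the example, and SPARQL is used here rather than the RQL language mentioned in the review.

```python
# A tiny RDF graph built and queried with rdflib (pip install rdflib).
from rdflib import Graph

turtle_data = """
@prefix ex: <http://example.org/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .

ex:primer dc:title   "A Semantic Web Primer" ;
          dc:creator "G. Antoniou" , "F. van Harmelen" .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# SPARQL query: list the creators of every resource that has a title.
query = """
PREFIX dc: <http://purl.org/dc/elements/1.1/>
SELECT ?title ?creator
WHERE { ?book dc:title ?title ; dc:creator ?creator . }
"""

for title, creator in g.query(query):
    print(f"{title} - {creator}")
```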
  4. Veltman, K.H.: Syntactic and semantic interoperability : new approaches to knowledge and the Semantic Web (2001) 0.00
    
    Source
    New review of information networking. 7(2001) no.xx, S.xx-xx
  5. Isaac, A.; Schlobach, S.; Matthezing, H.; Zinn, C.: Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies (2008) 0.00
    
    Content
    This paper is based on a talk given at "Information Access for the Global Community, An International Seminar on the Universal Decimal Classification" held on 4-5 June 2007 in The Hague, The Netherlands. An abstract of this talk will be published in Extensions and Corrections to the UDC, an annual publication of the UDC consortium. Contribution to a thematic issue "Digital libraries and the semantic web: context, applications and research".
  6. Veltman, K.H.: Towards a Semantic Web for culture (2004) 0.00
    
    Source
    Journal of digital information. 4(2004), no.4
  7. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I. (2010) 0.00
    
    Abstract
    The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the Semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human-computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.
  8. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.00
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  9. Bizer, C.; Heath, T.: Linked Data : evolving the web into a global data space (2011) 0.00
    
    Abstract
    The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical introduction to Linked Data. We begin by outlining the basic principles of Linked Data, including coverage of relevant aspects of Web architecture. The remainder of the text is based around two main themes - the publication and consumption of Linked Data. Drawing on a practical Linked Data scenario, we provide guidance and best practices on: architectural approaches to publishing Linked Data; choosing URIs and vocabularies to identify and describe resources; deciding what data to return in a description of a resource on the Web; methods and frameworks for automated linking of data sets; and testing and debugging approaches for Linked Data deployments. We give an overview of existing Linked Data applications and then examine the architectures that are used to consume Linked Data from the Web, alongside existing tools and frameworks that enable these. Readers can expect to gain a rich technical understanding of Linked Data fundamentals, as the basis for application development, research or further study.
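    As a small illustration of the Linked Data consumption described in the abstract above, the sketch below dereferences a resource URI with content negotiation to request an RDF serialization. It assumes the requests and rdflib packages are installed and that the example DBpedia URI still serves Turtle; both are assumptions made for the example, not claims from the book.

```python
# Dereference a Linked Data URI, asking for an RDF serialization,
# then count the triples returned (URI and server behaviour are assumptions).
import requests
from rdflib import Graph

uri = "http://dbpedia.org/resource/Semantic_Web"
response = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=30)
response.raise_for_status()

g = Graph()
g.parse(data=response.text, format="turtle")
print(f"{uri} described by {len(g)} triples")
```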
  10. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.00
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
  11. Semantic search over the Web (2012) 0.00
    
    Abstract
    The Web has become the world's largest database, with search being the main tool that allows organizations and individuals to exploit its huge amount of information. Search on the Web has been traditionally based on textual and structural similarities, ignoring to a large degree the semantic dimension, i.e., understanding the meaning of the query and of the document content. Combining search and semantics gives birth to the idea of semantic search. Traditional search engines have already advertised some semantic dimensions. Some of them, for instance, can enhance their generated result sets with documents that are semantically related to the query terms even though they may not include these terms. Nevertheless, the exploitation of semantic search has not yet reached its full potential. In this book, Roberto De Virgilio, Francesco Guerra and Yannis Velegrakis present an extensive overview of the work done in Semantic Search and other related areas. They explore different technologies and solutions in depth, making their collection a valuable and stimulating reading for both academic and industrial researchers. The book is divided into three parts. The first introduces the readers to the basic notions of the Web of Data. It describes the different kinds of data that exist, their topology, and their storing and indexing techniques. The second part is dedicated to Web Search. It presents different types of search, like the exploratory or the path-oriented, alongside methods for their efficient and effective implementation. Other related topics included in this part are the use of uncertainty in query answering, the exploitation of ontologies, and the use of semantics in mashup design and operation. The focus of the third part is on linked data, and more specifically, on applying ideas originating in recommender systems to linked data management, and on techniques for efficient query answering over linked data.
  12. Bizer, C.; Mendes, P.N.; Jentzsch, A.: Topology of the Web of Data (2012) 0.00
    
    Abstract
    The degree of structure of Web content is the determining factor for the types of functionality that search engines can provide. The more well structured the Web content is, the easier it is for search engines to understand Web content and provide advanced functionality, such as faceted filtering or the aggregation of content from multiple Web sites, based on this understanding. Today, most Web sites are generated from structured data that is stored in relational databases. Thus, it does not require too much extra effort for Web sites to publish this structured data directly on the Web in addition to HTML pages, and thus help search engines to understand Web content and provide improved functionality. An early approach to realize this idea and help search engines to understand Web content is Microformats, a technique for marking up structured data about specific types of entities - such as tags, blog posts, people, or reviews - within HTML pages. As Microformats are focused on a few entity types, the World Wide Web Consortium (W3C) started in 2004 to standardize RDFa as an alternative, more generic language for embedding any type of data into HTML pages. Today, major search engines such as Google, Yahoo, and Bing extract Microformat and RDFa data describing products, reviews, persons, events, and recipes from Web pages and use the extracted data to improve the user's search experience. The search engines have started to aggregate structured data from different Web sites and augment their search results with these aggregated information units in the form of rich snippets which combine, for instance, data from several sites. This chapter gives an overview of the topology of the Web of Data that has been created by publishing data on the Web using the Microformats, RDFa, Microdata and Linked Data publishing techniques.
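    The abstract above describes how structured data embedded in HTML pages via Microformats, RDFa and Microdata is extracted by search engines. Below is a minimal, hypothetical sketch of the extraction step using only Python's standard html.parser and an invented product snippet; real extractors handle far more of the respective specifications.

```python
# Extract (itemprop, value) pairs from a tiny Microdata-annotated HTML
# snippet using the standard library; the snippet itself is invented.
from html.parser import HTMLParser

snippet = """
<div itemscope itemtype="https://schema.org/Product">
  <span itemprop="name">Example Camera</span>
  <meta itemprop="price" content="199.00">
</div>
"""

class MicrodataExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._pending = None   # itemprop waiting for element text
        self.properties = []   # collected (itemprop, value) pairs

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        prop = attrs.get("itemprop")
        if prop and "content" in attrs:   # value carried in the content attribute
            self.properties.append((prop, attrs["content"]))
        elif prop:                        # value is the element's text content
            self._pending = prop

    def handle_data(self, data):
        if self._pending and data.strip():
            self.properties.append((self._pending, data.strip()))
            self._pending = None

extractor = MicrodataExtractor()
extractor.feed(snippet)
print(extractor.properties)
# [('name', 'Example Camera'), ('price', '199.00')]
```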
  13. Bianchini, D.; Antonellis, V. De: Linked data services and semantics-enabled mashup (2012) 0.00
    
    Abstract
    The Web of Linked Data can be seen as a global database, where resources are identified through URIs, are self-described (by means of the URI dereferencing mechanism), and are globally connected through RDF links. According to the Linked Data perspective, research attention is progressively shifting from data organization and representation to linkage and composition of the huge amount of data available on the Web. For example, at the time of this writing, the DBpedia knowledge base describes more than 3.5 million things, conceptualized through 672 million RDF triples, with 6.5 million external links into other RDF datasets. Useful applications have been provided for enabling people to browse this wealth of data, like Tabulator. Other systems have been implemented to collect, index, and provide advanced searching facilities over the Web of Linked Data, such as Watson and Sindice. Besides these applications, domain-specific systems to gather and mash up Linked Data have been proposed, like DBpedia Mobile and Revyu.com. DBpedia Mobile is a location-aware client for the Semantic Web that can be used on an iPhone and other mobile devices. Based on the current GPS position of a mobile device, DBpedia Mobile renders a map indicating nearby locations from the DBpedia dataset. Starting from this map, the user can explore background information about his or her surroundings. Revyu.com is a Web site where you can review and rate anything that can be identified (through a URI) on the Web. Nevertheless, the potential advantages implicit in the Web of Linked Data are far from being fully exploited. Current applications hardly go beyond presenting together data gathered from different sources. Recently, research on the Web of Linked Data has been devoted to the study of models and languages to add functionalities to the Web of Linked Data by means of Linked Data services.
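    The abstract above treats the Web of Linked Data as a global database of URI-identified, RDF-linked resources, with DBpedia as the running example. As a hedged sketch of programmatic access to such data, the following queries a public SPARQL endpoint over HTTP; the endpoint URL, the query, and the JSON result handling follow the standard SPARQL protocol and are assumptions made for the example, not details from the chapter.

```python
# Ask a public SPARQL endpoint how many RDF triples describe one resource
# (endpoint and query are assumptions; availability may vary).
import requests

endpoint = "https://dbpedia.org/sparql"
query = """
SELECT (COUNT(*) AS ?triples)
WHERE { <http://dbpedia.org/resource/Linked_data> ?p ?o . }
"""

response = requests.get(
    endpoint,
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()
bindings = response.json()["results"]["bindings"]
print("Triples about Linked_data:", bindings[0]["triples"]["value"])
```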
  14. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005) 0.00
    
    Content
    This type of effort is common in the digital library community, where a group of experts will interact with a user community to create a thesaurus for a specific domain (e.g. the Art & Architecture Thesaurus, AAT) or an overarching classification scheme (e.g. the Dewey Decimal Classification). A similar type of activity is being undertaken more recently in a less centralised manner by web communities, producing for example the DMOZ web directory, or the Topic Exchange for weblog topics. The web, including the semantic web, provides a medium within which communities can interact and collaboratively build and use vocabularies of concepts. A simple language is required that allows these communities to express the structure and content of their vocabularies in a machine-understandable way, enabling exchange and reuse. The Resource Description Framework (RDF) is an ideal language for making statements about web resources and publishing metadata. However, RDF provides only the low-level semantics required to form metadata statements. RDF vocabularies must be built on top of RDF to support the expression of more specific types of information within metadata. Ontology languages such as OWL add a layer of expressive power to RDF, and provide powerful tools for defining complex conceptual structures, which can be used to generate rich metadata. However, the class-oriented, logically precise modelling required to construct useful web ontologies is demanding in terms of expertise, effort, and therefore cost. In many cases this type of modelling may be superfluous or unsuited to requirements. Therefore there is a need for a language for expressing vocabularies of concepts for use in semantically rich metadata that is powerful enough to support semantically enhanced search, but simple enough to be undemanding in terms of the cost and expertise required to use it.
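    To make the SKOS idea sketched above concrete, here is a small, hypothetical concept scheme expressed in SKOS and queried with rdflib (assuming rdflib is installed); the vocabulary itself is invented for illustration and is not taken from the article.

```python
# A tiny invented SKOS concept scheme, parsed and queried with rdflib.
from rdflib import Graph
from rdflib.namespace import SKOS

turtle_data = """
@prefix ex:   <http://example.org/vocab/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

ex:animals  a skos:ConceptScheme ; skos:prefLabel "Animals"@en .
ex:mammal   a skos:Concept ; skos:prefLabel "Mammal"@en ;
            skos:inScheme ex:animals .
ex:cat      a skos:Concept ; skos:prefLabel "Cat"@en ;
            skos:broader ex:mammal ; skos:inScheme ex:animals .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# List each concept that has a broader concept, with both preferred labels.
for concept in g.subjects(SKOS.broader, None):
    label = g.value(concept, SKOS.prefLabel)
    broader = g.value(concept, SKOS.broader)
    print(f"{label} has broader concept {g.value(broader, SKOS.prefLabel)}")
```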

Languages

  • e 187
  • d 84
  • f 1

Types

  • a 169
  • el 62
  • m 51
  • s 22
  • x 14
  • n 6
  • r 3
