Search (255 results, page 2 of 13)

  • language_ss:"e"
  • theme_ss:"Semantic Web"
  1. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a large ontology from Wikipedia and WordNet (2008) 0.06
    0.06473547 = product of:
      0.0971032 = sum of:
        0.06606405 = weight(_text_:wide in 3404) [ClassicSimilarity], result of:
          0.06606405 = score(doc=3404,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.29372054 = fieldWeight in 3404, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3404)
        0.031039147 = product of:
          0.062078293 = sum of:
            0.062078293 = weight(_text_:web in 3404) [ClassicSimilarity], result of:
              0.062078293 = score(doc=3404,freq=6.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.37471575 = fieldWeight in 3404, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3404)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
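     The indented breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) ranking: each term weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm with tf = sqrt(termFreq) and idf = 1 + ln(maxDocs / (docFreq + 1)), and coord() scales sums over partially matched clause sets. A minimal Python sketch reproducing the numbers for this record, with the constants copied from the tree above:

       import math

       def idf(doc_freq, max_docs):
           # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
           return 1.0 + math.log(max_docs / (doc_freq + 1))

       def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
           # weight = queryWeight * fieldWeight
           i = idf(doc_freq, max_docs)
           query_weight = i * query_norm                      # e.g. 4.4307585 * 0.050763648
           field_weight = math.sqrt(freq) * i * field_norm    # tf(freq) = sqrt(freq)
           return query_weight * field_weight

       QUERY_NORM, MAX_DOCS, FIELD_NORM = 0.050763648, 44218, 0.046875

       wide = term_weight(freq=2, doc_freq=1430, max_docs=MAX_DOCS,
                          query_norm=QUERY_NORM, field_norm=FIELD_NORM)
       web = term_weight(freq=6, doc_freq=4597, max_docs=MAX_DOCS,
                         query_norm=QUERY_NORM, field_norm=FIELD_NORM)

       # "web" sits in a nested clause that matched 1 of 2 sub-queries: coord(1/2);
       # the outer sum matched 2 of its 3 clauses: coord(2/3).
       score = (wide + web * (1 / 2)) * (2 / 3)
       print(wide, web, score)
       # roughly 0.066064, 0.062078, 0.064735 (Lucene stores 32-bit floats,
       # so the last digits can differ from the explain output)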
    
    Source
    Web semantics: science, services and agents on the World Wide Web. 6(2008) no.3, S.203-217
    Theme
    Semantic Web
  2. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.06
    0.06473547 = product of:
      0.0971032 = sum of:
        0.06606405 = weight(_text_:wide in 761) [ClassicSimilarity], result of:
          0.06606405 = score(doc=761,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.29372054 = fieldWeight in 761, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=761)
        0.031039147 = product of:
          0.062078293 = sum of:
            0.062078293 = weight(_text_:web in 761) [ClassicSimilarity], result of:
              0.062078293 = score(doc=761,freq=6.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.37471575 = fieldWeight in 761, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=761)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins as it were with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
    Theme
    Semantic Web
  3. Towards the Semantic Web : ontology-driven knowledge management (2004) 0.06
    0.063485466 = product of:
      0.095228195 = sum of:
        0.05721316 = weight(_text_:wide in 4401) [ClassicSimilarity], result of:
          0.05721316 = score(doc=4401,freq=6.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.2543695 = fieldWeight in 4401, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
        0.038015034 = product of:
          0.07603007 = sum of:
            0.07603007 = weight(_text_:web in 4401) [ClassicSimilarity], result of:
              0.07603007 = score(doc=4401,freq=36.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.45893115 = fieldWeight in 4401, product of:
                  6.0 = tf(freq=36.0), with freq of:
                    36.0 = termFreq=36.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4401)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     With the current changes driven by the expansion of the World Wide Web, this book uses a different approach from other books on the market: it applies ontologies to electronically available information to improve the quality of knowledge management in large and distributed organizations. Ontologies are formal theories supporting knowledge sharing and reuse. They can be used to explicitly represent the semantics of semi-structured information, enabling sophisticated automatic support for acquiring, maintaining and accessing information. Methodology and tools are developed for intelligent access to large volumes of semi-structured and textual information sources in intranet-, extranet- and internet-based environments, to employ the full power of ontologies in supporting knowledge management from both the information client and the information provider perspective. The book aims to support efficient and effective knowledge management, focusing on weakly structured online information sources. It is aimed primarily at researchers in the area of knowledge management and information retrieval and will also be a useful reference for students in computer science at the postgraduate level and for business managers who are aiming to improve their corporations' information infrastructure. The Semantic Web is a very important initiative affecting the future of the WWW that is currently generating huge interest. The book covers several highly significant contributions to the Semantic Web research effort, including a new language for defining ontologies, several novel software tools and a coherent methodology for the application of the tools for business advantage. It also provides three case studies which give examples of the real benefits to be derived from the adoption of Semantic Web-based ontologies in "real world" situations. As such, the book is an excellent mixture of theory, tools and applications in an important area of WWW research. * Provides guidelines for introducing knowledge management concepts and tools into enterprises, to help knowledge providers present their knowledge efficiently and effectively. * Introduces an intelligent search tool that supports users in accessing information and a tool environment for maintenance, conversion and acquisition of information sources. * Discusses three large case studies which will help to develop the technology according to the actual needs of large and/or virtual organisations and will provide a testbed for evaluating tools and methods. The book is aimed at people with at least a good understanding of existing WWW technology and some level of technical understanding of the underpinning technologies (XML/RDF). It will be of interest to graduate students, academic and industrial researchers in the field, and the many industrial personnel who are tracking WWW technology developments in order to understand the business implications. It could also be used to support undergraduate courses in the area but is not itself an introductory text.
    Content
     Contents: OIL and DAML+OIL: Ontology Languages for the Semantic Web (pages 11-31) / Dieter Fensel, Frank van Harmelen and Ian Horrocks
     A Methodology for Ontology-Based Knowledge Management (pages 33-46) / York Sure and Rudi Studer
     Ontology Management: Storing, Aligning and Maintaining Ontologies (pages 47-69) / Michel Klein, Ying Ding, Dieter Fensel and Borys Omelayenko
     Sesame: A Generic Architecture for Storing and Querying RDF and RDF Schema (pages 71-89) / Jeen Broekstra, Arjohn Kampman and Frank van Harmelen
     Generating Ontologies for the Semantic Web: OntoBuilder (pages 91-115) / R. H. P. Engels and T. Ch. Lech
     OntoEdit: Collaborative Engineering of Ontologies (pages 117-132) / York Sure, Michael Erdmann and Rudi Studer
     QuizRDF: Search Technology for the Semantic Web (pages 133-144) / John Davies, Richard Weeks and Uwe Krohn
     Spectacle (pages 145-159) / Christiaan Fluit, Herko ter Horst, Jos van der Meer, Marta Sabou and Peter Mika
     OntoShare: Evolving Ontologies in a Knowledge Sharing System (pages 161-177) / John Davies, Alistair Duke and Audrius Stonkus
     Ontology Middleware and Reasoning (pages 179-196) / Atanas Kiryakov, Kiril Simov and Damyan Ognyanov
     Ontology-Based Knowledge Management at Work: The Swiss Life Case Studies (pages 197-218) / Ulrich Reimer, Peter Brockhausen, Thorsten Lau and Jacqueline R. Reich
     Field Experimenting with Semantic Web Tools in a Virtual Organization (pages 219-244) / Victor Iosif, Peter Mika, Rikard Larsson and Hans Akkermans
     A Future Perspective: Exploiting Peer-To-Peer and the Semantic Web for Knowledge Management (pages 245-264) / Dieter Fensel, Steffen Staab, Rudi Studer, Frank van Harmelen and John Davies
     Conclusions: Ontology-driven Knowledge Management - Towards the Semantic Web? (pages 265-266) / John Davies, Dieter Fensel and Frank van Harmelen
    LCSH
    Semantic web
    RSWK
    Semantic Web / Wissensmanagement / Wissenserwerb
    Wissensmanagement / World Wide web (BVB)
    Subject
    Semantic Web / Wissensmanagement / Wissenserwerb
    Wissensmanagement / World Wide web (BVB)
    Semantic web
    Theme
    Semantic Web
  4. Engels, R.H.P.; Lech, T.Ch.: Generating ontologies for the Semantic Web : OntoBuilder (2004) 0.06
    0.06315294 = product of:
      0.09472941 = sum of:
        0.044042703 = weight(_text_:wide in 4404) [ClassicSimilarity], result of:
          0.044042703 = score(doc=4404,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.1958137 = fieldWeight in 4404, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4404)
        0.05068671 = product of:
          0.10137342 = sum of:
            0.10137342 = weight(_text_:web in 4404) [ClassicSimilarity], result of:
              0.10137342 = score(doc=4404,freq=36.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.6119082 = fieldWeight in 4404, product of:
                  6.0 = tf(freq=36.0), with freq of:
                    36.0 = termFreq=36.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4404)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Significant progress has been made in technologies for publishing and distributing knowledge and information on the web. However, much of the published information is not organized, and it is hard to find answers to questions that require more than a keyword search. In general, one can say that the web is organizing itself. Information is often published in a relatively ad hoc fashion. Typically, concern about the presentation of content has been limited to purely layout issues. This, combined with the fact that the representation language used on the World Wide Web (HTML) is mainly format-oriented, makes publishing on the WWW easy, giving it an enormous expressiveness. People add private, educational or organizational content to the web that is of an immensely diverse nature. Content on the web is growing closer to a real universal knowledge base, with one problem remaining largely unsolved: the interpretation of its contents. Although widely acknowledged for its general and universal advantages, the increasing popularity of the web also shows us some major drawbacks. The development of the information content on the web in recent years alone clearly indicates the need for some changes. Perhaps one of the most significant problems with the web as a distributed information system is the difficulty of finding and comparing information.
     Thus, there is a clear need for the web to become more semantic. The aim of introducing semantics into the web is to enhance the precision of search, but also to enable the use of logical reasoning on web contents in order to answer queries. The CORPORUM OntoBuilder toolset is developed specifically for this task. It consists of a set of applications that can fulfil a variety of tasks, either as stand-alone tools or by augmenting each other. Important tasks dealt with by CORPORUM are related to document and information retrieval (finding relevant documents, or supporting the user in finding them), as well as information extraction (building a knowledge base from web documents to answer queries), information dissemination (summarizing strategies and information visualization), and automated document classification strategies. First versions of the toolset are encouraging in that they show great potential as a supportive technology for building up the Semantic Web. In this chapter, methods for transforming the current web into a semantic web are discussed, as well as a technical solution that can perform this task: the CORPORUM toolset. First, the toolset is introduced, followed by some pragmatic issues relating to the approach; then there is a short overview of the theory in relation to CognIT's vision; and finally, a discussion of some of the applications that arose from the project. A small sketch of the extract-store-query idea follows below.
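     The pipeline this abstract describes (extract statements from web documents, store them as a knowledge base, and answer queries over that base rather than by keyword match) can be sketched roughly as below. CORPORUM itself is not publicly documented here, so the facts, names and namespace are invented; the Python rdflib library is assumed to be installed:

       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, RDFS

       EX = Namespace("http://example.org/")      # hypothetical vocabulary
       g = Graph()
       g.bind("ex", EX)

       # Statements an extraction step might have pulled out of web documents
       g.add((EX.OntoBuilder, RDF.type, EX.Tool))
       g.add((EX.OntoBuilder, EX.partOf, EX.CORPORUM))
       g.add((EX.OntoBuilder, RDFS.label, Literal("OntoBuilder")))
       g.add((EX.CORPORUM, EX.developedBy, EX.CognIT))

       # Answer a question against the knowledge base instead of keyword-matching pages
       q = """
           PREFIX ex:   <http://example.org/>
           PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
           SELECT ?label WHERE {
             ?tool a ex:Tool ;
                   ex:partOf ?suite ;
                   rdfs:label ?label .
             ?suite ex:developedBy ex:CognIT .
           }
       """
       for row in g.query(q):
           print(row.label)    # -> OntoBuilder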
    Source
     Towards the semantic Web: ontology-driven knowledge management. Eds.: J. Davies, et al.
    Theme
    Semantic Web
  5. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.06
    0.06315294 = product of:
      0.09472941 = sum of:
        0.044042703 = weight(_text_:wide in 79) [ClassicSimilarity], result of:
          0.044042703 = score(doc=79,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.1958137 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.05068671 = product of:
          0.10137342 = sum of:
            0.10137342 = weight(_text_:web in 79) [ClassicSimilarity], result of:
              0.10137342 = score(doc=79,freq=36.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.6119082 = fieldWeight in 79, product of:
                  6.0 = tf(freq=36.0), with freq of:
                    36.0 = termFreq=36.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=79)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     With the tremendous growth of data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources and is hence more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way to become a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of data visualization, making the two an appropriate combination. The objective of this chapter is to provide fundamental insights concerning Semantic Web technologies and to elucidate the issues as well as the solutions regarding the Semantic Web. The chapter highlights the Semantic Web architecture in detail while also comparing it with the traditional search system. It classifies the Semantic Web architecture into three major pillars, i.e. RDF, Ontology, and XML. Moreover, it describes different Semantic Web tools used in the framework and technology. It attempts to illustrate different approaches of Semantic Web search engines. Besides stating numerous challenges faced by the Semantic Web, it also illustrates the solutions.
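     As a rough illustration of the chapter's pairing of Semantic Web data with visualization (not any specific tool it surveys), the sketch below builds a toy RDF graph with rdflib and draws it as a node-link diagram with networkx and matplotlib; all three libraries are assumed to be installed and the triples are invented:

       import matplotlib.pyplot as plt
       import networkx as nx
       from rdflib import Graph, Namespace
       from rdflib.namespace import RDF, RDFS

       EX = Namespace("http://example.org/")      # hypothetical data
       rdf_graph = Graph()
       rdf_graph.bind("ex", EX)
       rdf_graph.add((EX.SemanticWeb, RDFS.subClassOf, EX.WebTechnology))
       rdf_graph.add((EX.RDF, RDF.type, EX.SemanticWeb))
       rdf_graph.add((EX.Ontology, RDF.type, EX.SemanticWeb))
       rdf_graph.add((EX.XML, RDF.type, EX.SemanticWeb))

       # Turn the triples into a labelled directed graph for drawing
       nxg = nx.DiGraph()
       nm = rdf_graph.namespace_manager
       for s, p, o in rdf_graph:
           nxg.add_edge(s.n3(nm), o.n3(nm), label=p.n3(nm))

       pos = nx.spring_layout(nxg, seed=42)
       nx.draw(nxg, pos, with_labels=True, node_color="lightsteelblue", font_size=8)
       nx.draw_networkx_edge_labels(nxg, pos,
                                    edge_labels=nx.get_edge_attributes(nxg, "label"))
       plt.show()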
    Theme
    Semantic Web
  6. Bizer, C.; Mendes, P.N.; Jentzsch, A.: Topology of the Web of Data (2012) 0.06
    0.062200885 = product of:
      0.093301326 = sum of:
        0.044042703 = weight(_text_:wide in 425) [ClassicSimilarity], result of:
          0.044042703 = score(doc=425,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.1958137 = fieldWeight in 425, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=425)
        0.049258627 = product of:
          0.098517254 = sum of:
            0.098517254 = weight(_text_:web in 425) [ClassicSimilarity], result of:
              0.098517254 = score(doc=425,freq=34.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.59466785 = fieldWeight in 425, product of:
                  5.8309517 = tf(freq=34.0), with freq of:
                    34.0 = termFreq=34.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=425)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     The degree of structure of Web content is the determining factor for the types of functionality that search engines can provide. The better structured the Web content is, the easier it is for search engines to understand it and to provide advanced functionality, such as faceted filtering or the aggregation of content from multiple Web sites, based on this understanding. Today, most Web sites are generated from structured data that is stored in relational databases. Thus, it does not require too much extra effort for Web sites to publish this structured data directly on the Web in addition to HTML pages, and thus help search engines to understand Web content and provide improved functionality. An early approach to realize this idea and help search engines to understand Web content is Microformats, a technique for marking up structured data about specific types of entities - such as tags, blog posts, people, or reviews - within HTML pages. As Microformats are focused on a few entity types, the World Wide Web Consortium (W3C) started in 2004 to standardize RDFa as an alternative, more generic language for embedding any type of data into HTML pages. Today, major search engines such as Google, Yahoo, and Bing extract Microformat and RDFa data describing products, reviews, persons, events, and recipes from Web pages and use the extracted data to improve the user's search experience. The search engines have started to aggregate structured data from different Web sites and augment their search results with these aggregated information units in the form of rich snippets which combine, for instance, data from multiple sites. This chapter gives an overview of the topology of the Web of Data that has been created by publishing data on the Web using the Microformats, RDFa, Microdata and Linked Data publishing techniques.
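     A hedged sketch of the kind of embedded structured data this chapter surveys: the comment shows simplified, invented RDFa markup, and the triples a conforming RDFa processor would derive from it are built by hand with rdflib (version 6 or later assumed) so that the example does not depend on a particular RDFa parser:

       from rdflib import Graph, Literal, Namespace, URIRef
       from rdflib.namespace import RDF

       # RDFa-annotated HTML that a shop page might publish (simplified, invented):
       #   <div vocab="http://schema.org/" typeof="Product" about="#camera">
       #     <span property="name">Example Camera</span>
       #     <span property="review">Great value</span>
       #   </div>
       SCHEMA = Namespace("http://schema.org/")
       page = URIRef("http://shop.example.org/page#camera")

       g = Graph()
       g.bind("schema", SCHEMA)
       # Triples an RDFa processor would extract from the markup above
       g.add((page, RDF.type, SCHEMA.Product))
       g.add((page, SCHEMA.name, Literal("Example Camera")))
       g.add((page, SCHEMA.review, Literal("Great value")))

       print(g.serialize(format="turtle"))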
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
    Theme
    Semantic Web
  7. Antoniou, G.; Harmelen, F. van: A semantic Web primer (2004) 0.06
    0.0620645 = product of:
      0.09309675 = sum of:
        0.04767763 = weight(_text_:wide in 468) [ClassicSimilarity], result of:
          0.04767763 = score(doc=468,freq=6.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.21197456 = fieldWeight in 468, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.045419123 = product of:
          0.090838246 = sum of:
            0.090838246 = weight(_text_:web in 468) [ClassicSimilarity], result of:
              0.090838246 = score(doc=468,freq=74.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.548316 = fieldWeight in 468, product of:
                  8.602325 = tf(freq=74.0), with freq of:
                    74.0 = termFreq=74.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     The development of the Semantic Web, with machine-readable content, has the potential to revolutionise the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this emerging field, describing its key ideas, languages and technologies. Suitable for use as a textbook or for self-study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own. It includes exercises, project descriptions and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL and rules) and technologies (explicit metadata, ontologies, logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processable semantics; OWL, the W3C-approved standard for a Web ontology language more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.
    Footnote
     Review in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploitation of these machine-processable metadata. To fulfill this, it provides some metalanguages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts will remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. First, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes a brief description of the underpinning technologies, including metadata, ontologies, logic, and agents. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
     The next chapter introduces the Resource Description Framework (RDF) and RDF Schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. RDF Schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e. RQL, is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, is thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes it much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and non-monotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give a real feel for the Semantic Web.
     The chapter on ontology engineering describes the development of ontology-based systems for the Web using manual and semiautomatic methods. Ontology is a concept similar to taxonomy. As stated in the introduction, ontology engineering deals with some of the methodological issues that arise when building ontologies, in particular constructing ontologies manually, reusing existing ontologies, and using semiautomatic methods. A medium-scale project is included at the end of the chapter. Overall the book is a nice introduction to the key components of the Semantic Web. The reading is quite pleasant, in part due to the concise layout that allows just enough content per page to facilitate readers' comprehension. Furthermore, the book provides a large number of examples, code snippets, exercises, and annotated online materials. Thus, it is very suitable for use as a textbook for undergraduates and early graduate students, as the authors say in the preface. However, I believe that not only students but also professionals in both academia and industry will benefit from the book. The authors also built an accompanying Web site for the book at http://www.semanticwebprimer.org. On the main page, there are eight tabs, one for each of the eight chapters. For each tab, the following sections are included: overview, example, presentations, problems and quizzes, errata, and links. These contents will greatly help readers: for example, readers can open the listed links to further their reading. The empty errata sections also attest to the quality of the book."
    LCSH
    Semantic Web
    Subject
    Semantic Web
    Theme
    Semantic Web
  8. Michon, J.: Biomedicine and the Semantic Web : a knowledge model for visual phenotype (2006) 0.06
    0.061088912 = product of:
      0.091633365 = sum of:
        0.055053383 = weight(_text_:wide in 246) [ClassicSimilarity], result of:
          0.055053383 = score(doc=246,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 246, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=246)
        0.03657998 = product of:
          0.07315996 = sum of:
            0.07315996 = weight(_text_:web in 246) [ClassicSimilarity], result of:
              0.07315996 = score(doc=246,freq=12.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.4416067 = fieldWeight in 246, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=246)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Semantic Web tools provide new and significant opportunities for organizing and improving the utility of biomedical information. As librarians become more involved with biomedical information, it is important for them, particularly catalogers, to be part of research teams that are employing these techniques and developing a high level interoperable biomedical infrastructure. To illustrate these principles, we used Semantic Web tools to create a knowledge model for human visual phenotypes (observable characteristics). This is an important foundation for generating associations between genomics and clinical medicine. In turn this can allow customized medical therapies and provide insights into the molecular basis of disease. The knowledge model incorporates a wide variety of clinical and genomic data including examination findings, demographics, laboratory tests, imaging and variations in DNA sequence. Information organization, storage and retrieval are facilitated through the use of metadata and the ability to make computable statements in the visual science domain. This paper presents our work, discusses the value of Semantic Web technologies in biomedicine, and identifies several important roles that library and information scientists can play in developing a more powerful biomedical information infrastructure.
    Footnote
    Simultaneously published as Knitting the Semantic Web
    Theme
    Semantic Web
  9. Fensel, D.; Harmelen, F. van; Horrocks, I.: OIL and DAML+OIL : ontology languages for the Semantic Web (2004) 0.06
    0.061088912 = product of:
      0.091633365 = sum of:
        0.055053383 = weight(_text_:wide in 3244) [ClassicSimilarity], result of:
          0.055053383 = score(doc=3244,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 3244, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3244)
        0.03657998 = product of:
          0.07315996 = sum of:
            0.07315996 = weight(_text_:web in 3244) [ClassicSimilarity], result of:
              0.07315996 = score(doc=3244,freq=12.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.4416067 = fieldWeight in 3244, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3244)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     This chapter discusses OIL and DAML+OIL, currently the most prominent ontology languages for the Semantic Web. The chapter starts by discussing the pyramid of languages that underlie the architecture of the Semantic Web (XML, RDF, RDFS). In section 2.2, we briefly describe XML, RDF and RDFS. We then discuss in more detail OIL and DAML+OIL, the first proposals for languages at the ontology layer of the semantic pyramid. For OIL (and to some extent DAML+OIL) we discuss the general design motivations (Section 2.3), describe the constructions in the language (Section 2.4), and the various syntactic forms of these languages (Section 2.5). Section 2.6 discusses the layered architecture of the language, section 2.7 briefly mentions the formal semantics, section 2.8 discusses the transition from OIL to DAML+OIL, and section 2.9 concludes with our experience with the language to date and future development in the context of the World Wide Web Consortium (W3C). This chapter is not intended to give full and formal definitions of either the syntax or the semantics of OIL or DAML+OIL. Such definitions are already available elsewhere: http://www.ontoknowledge.org/oil/ for OIL and http://www.w3.org/submission/2001/12/ for DAML+OIL.
    Source
     Towards the semantic Web: ontology-driven knowledge management. Eds.: J. Davies, et al.
    Theme
    Semantic Web
  10. OWL Web Ontology Language Guide (2004) 0.06
    0.061088912 = product of:
      0.091633365 = sum of:
        0.055053383 = weight(_text_:wide in 4687) [ClassicSimilarity], result of:
          0.055053383 = score(doc=4687,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 4687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4687)
        0.03657998 = product of:
          0.07315996 = sum of:
            0.07315996 = weight(_text_:web in 4687) [ClassicSimilarity], result of:
              0.07315996 = score(doc=4687,freq=12.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.4416067 = fieldWeight in 4687, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4687)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the documents and capabilities available is based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web accessible resources. These descriptions must be in addition to the human-readable versions of that information. The OWL Web Ontology Language is intended to provide a language that can be used to describe the classes and relations between them that are inherent in Web documents and applications. This document demonstrates the use of the OWL language to (1) formalize a domain by defining classes and properties of those classes, (2) define individuals and assert properties about them, and (3) reason about these classes and individuals to the degree permitted by the formal semantics of the OWL language. The sections are organized to present an incremental definition of a set of classes, properties and individuals, beginning with the fundamentals and proceeding to more complex language components.
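     A minimal sketch of the three activities the guide lists (defining classes and properties, defining individuals and asserting properties about them, and laying the ground for reasoning), written as OWL triples with rdflib; the class and individual names loosely echo the guide's wine example but are invented here, and actually reasoning over them would additionally require an OWL reasoner, which is not shown:

       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import OWL, RDF, RDFS

       EX = Namespace("http://example.org/wine#")     # hypothetical ontology
       g = Graph()
       g.bind("ex", EX)

       # (1) Formalize a domain: two classes and an object property between them
       g.add((EX.Wine, RDF.type, OWL.Class))
       g.add((EX.Winery, RDF.type, OWL.Class))
       g.add((EX.hasMaker, RDF.type, OWL.ObjectProperty))
       g.add((EX.hasMaker, RDFS.domain, EX.Wine))
       g.add((EX.hasMaker, RDFS.range, EX.Winery))

       # (2) Define individuals and assert properties about them
       g.add((EX.ExampleCellars, RDF.type, EX.Winery))
       g.add((EX.HouseRed, RDF.type, EX.Wine))
       g.add((EX.HouseRed, EX.hasMaker, EX.ExampleCellars))
       g.add((EX.HouseRed, RDFS.label, Literal("House red", lang="en")))

       # (3) Reasoning over these axioms (e.g. classifying HouseRed) is left to an
       #     external OWL reasoner such as HermiT or Pellet.
       print(g.serialize(format="turtle"))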
    Theme
    Semantic Web
  11. SKOS Simple Knowledge Organization System Primer (2009) 0.06
    0.06093827 = product of:
      0.0914074 = sum of:
        0.06606405 = weight(_text_:wide in 4795) [ClassicSimilarity], result of:
          0.06606405 = score(doc=4795,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.29372054 = fieldWeight in 4795, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4795)
        0.025343355 = product of:
          0.05068671 = sum of:
            0.05068671 = weight(_text_:web in 4795) [ClassicSimilarity], result of:
              0.05068671 = score(doc=4795,freq=4.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.3059541 = fieldWeight in 4795, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4795)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     SKOS (Simple Knowledge Organisation System) provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, and other types of controlled vocabulary. As an application of the Resource Description Framework (RDF), SKOS allows concepts to be documented, linked and merged with other data, while still being composed, integrated and published on the World Wide Web. This document is an implementors' guide for those who would like to represent their concept scheme using SKOS. In basic SKOS, conceptual resources (concepts) can be identified using URIs, labelled with strings in one or more natural languages, documented with various types of notes, semantically related to each other in informal hierarchies and association networks, and aggregated into distinct concept schemes. In advanced SKOS, conceptual resources can be mapped to conceptual resources in other schemes and grouped into labelled or ordered collections. Concept labels can also be related to each other. Finally, the SKOS vocabulary itself can be extended to suit the needs of particular communities of practice.
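     The basic-SKOS features listed above (URIs for concepts, multilingual labels, documentation notes, informal hierarchies, concept schemes) correspond directly to a handful of triples. A minimal sketch with rdflib, using an invented vocabulary:

       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, SKOS

       EX = Namespace("http://example.org/vocab/")    # hypothetical concept scheme
       g = Graph()
       g.bind("skos", SKOS)

       g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))

       g.add((EX.knowledgeOrganization, RDF.type, SKOS.Concept))
       g.add((EX.knowledgeOrganization, SKOS.inScheme, EX.scheme))
       g.add((EX.knowledgeOrganization, SKOS.prefLabel,
              Literal("knowledge organization", lang="en")))
       g.add((EX.knowledgeOrganization, SKOS.prefLabel,
              Literal("Wissensorganisation", lang="de")))
       g.add((EX.knowledgeOrganization, SKOS.scopeNote,
              Literal("Ordering and description of recorded knowledge", lang="en")))

       g.add((EX.thesaurus, RDF.type, SKOS.Concept))
       g.add((EX.thesaurus, SKOS.inScheme, EX.scheme))
       g.add((EX.thesaurus, SKOS.prefLabel, Literal("thesaurus", lang="en")))
       g.add((EX.thesaurus, SKOS.broader, EX.knowledgeOrganization))

       print(g.serialize(format="turtle"))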
    Theme
    Semantic Web
  12. Bizer, C.; Heath, T.: Linked Data : evolving the web into a global data space (2011) 0.06
    0.060208753 = product of:
      0.09031313 = sum of:
        0.044042703 = weight(_text_:wide in 4725) [ClassicSimilarity], result of:
          0.044042703 = score(doc=4725,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.1958137 = fieldWeight in 4725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4725)
        0.046270426 = product of:
          0.09254085 = sum of:
            0.09254085 = weight(_text_:web in 4725) [ClassicSimilarity], result of:
              0.09254085 = score(doc=4725,freq=30.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.5585932 = fieldWeight in 4725, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4725)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical introduction to Linked Data. We begin by outlining the basic principles of Linked Data, including coverage of relevant aspects of Web architecture. The remainder of the text is based around two main themes - the publication and consumption of Linked Data. Drawing on a practical Linked Data scenario, we provide guidance and best practices on: architectural approaches to publishing Linked Data; choosing URIs and vocabularies to identify and describe resources; deciding what data to return in a description of a resource on the Web; methods and frameworks for automated linking of data sets; and testing and debugging approaches for Linked Data deployments. We give an overview of existing Linked Data applications and then examine the architectures that are used to consume Linked Data from the Web, alongside existing tools and frameworks that enable these. Readers can expect to gain a rich technical understanding of Linked Data fundamentals, as the basis for application development, research or further study.
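     The consumption side described above (dereference an HTTP URI, receive RDF back, follow links onward into the Web of Data) in a minimal, hedged sketch; DBpedia is used only as a familiar public Linked Data source, rdflib is assumed, and network access is required:

       from rdflib import Graph, URIRef
       from rdflib.namespace import RDFS

       # A Linked Data URI identifies a thing, not just a document about it
       resource = URIRef("http://dbpedia.org/resource/Semantic_Web")

       g = Graph()
       # Dereference the URI; the server returns RDF via content negotiation
       g.parse("http://dbpedia.org/resource/Semantic_Web")

       print(len(g), "triples retrieved")
       for label in g.objects(resource, RDFS.label):
           print("label:", label)
       # Outgoing links to other dereferenceable resources extend the data space
       for obj in g.objects(resource, None):
           if isinstance(obj, URIRef):
               print("linked resource:", obj)
               break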
    Content
     Contents: Introduction - Principles of Linked Data - The Web of Data - Linked Data Design Considerations - Consuming Linked Data - Summary and Outlook. Cf.: http://linkeddatabook.com/book.
    RSWK
    Semantic Web / Forschungsergebnis / Forschung / Daten / Hyperlink
    Series
    Synthesis lectures on the semantic web: theory and technology ; 1
    Subject
    Semantic Web / Forschungsergebnis / Forschung / Daten / Hyperlink
    Theme
    Semantic Web
  13. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.06
    0.059972547 = product of:
      0.17991763 = sum of:
        0.17991763 = sum of:
          0.08362881 = weight(_text_:web in 4643) [ClassicSimilarity], result of:
            0.08362881 = score(doc=4643,freq=2.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.50479853 = fieldWeight in 4643, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.109375 = fieldNorm(doc=4643)
          0.09628883 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
            0.09628883 = score(doc=4643,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.5416616 = fieldWeight in 4643, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=4643)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2007 15:41:14
    Theme
    Semantic Web
  14. Fernández, M.; Cantador, I.; López, V.; Vallet, D.; Castells, P.; Motta, E.: Semantically enhanced Information Retrieval : an ontology-based approach (2011) 0.06
    0.059333354 = product of:
      0.08900003 = sum of:
        0.062285792 = weight(_text_:wide in 230) [ClassicSimilarity], result of:
          0.062285792 = score(doc=230,freq=4.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.2769224 = fieldWeight in 230, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=230)
        0.026714243 = product of:
          0.053428486 = sum of:
            0.053428486 = weight(_text_:web in 230) [ClassicSimilarity], result of:
              0.053428486 = score(doc=230,freq=10.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.32250395 = fieldWeight in 230, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=230)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Currently, techniques for content description and query processing in Information Retrieval (IR) are based on keywords, and therefore provide limited capabilities to capture the conceptualizations associated with user needs and contents. Aiming to solve the limitations of keyword-based models, the idea of conceptual search, understood as searching by meanings rather than literal strings, has been the focus of a wide body of research in the IR field. More recently, it has been used as a prototypical scenario (or even envisioned as a potential "killer app") in the Semantic Web (SW) vision, since its emergence in the late nineties. However, current approaches to semantic search developed in the SW area have not yet taken full advantage of the acquired knowledge, accumulated experience, and technological sophistication achieved through several decades of work in the IR field. Starting from this position, this work investigates the definition of an ontology-based IR model, oriented to the exploitation of domain Knowledge Bases to support semantic search capabilities in large document repositories, stressing on the one hand the use of fully fledged ontologies in the semantic-based perspective, and on the other hand the consideration of unstructured content as the target search space. The major contribution of this work is an innovative, comprehensive semantic search model, which extends the classic IR model, addresses the challenges of the massive and heterogeneous Web environment, and integrates the benefits of both keyword and semantic-based search. Additional contributions include: an innovative rank fusion technique that minimizes the undesired effects of knowledge sparseness on the yet juvenile SW, and the creation of a large-scale evaluation benchmark, based on TREC IR evaluation standards, which allows a rigorous comparison between IR and SW approaches. Conducted experiments show that our semantic search model obtained comparable and better performance results (in terms of MAP and P@10 values) than the best TREC automatic system.
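     The rank fusion idea mentioned above can be illustrated with a generic linear combination of a keyword-based and a semantic ranking. This is not the paper's own fusion technique (which additionally compensates for sparse semantic annotations); it is only a sketch with invented scores:

       def fuse_rankings(keyword_scores, semantic_scores, alpha=0.5):
           """Linearly combine two per-document score dictionaries into one ranking."""
           docs = set(keyword_scores) | set(semantic_scores)
           fused = {
               d: alpha * semantic_scores.get(d, 0.0)
                  + (1 - alpha) * keyword_scores.get(d, 0.0)
               for d in docs
           }
           return sorted(fused.items(), key=lambda item: item[1], reverse=True)

       # Invented scores; doc2 is missing from the sparse semantic index
       keyword_scores = {"doc1": 0.42, "doc2": 0.61, "doc3": 0.10}
       semantic_scores = {"doc1": 0.90, "doc3": 0.35}
       print(fuse_rankings(keyword_scores, semantic_scores))
       # -> doc1, then doc2, then doc3 (approx. 0.66, 0.305, 0.225)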
    Source
    Web semantics: science, services and agents on the World Wide Web. 9(2011) no.4, S.434-452
    Theme
    Semantic Web
  15. McGuinness, D.L.: Ontologies come of age (2003) 0.06
    0.058964126 = product of:
      0.088446185 = sum of:
        0.055053383 = weight(_text_:wide in 3084) [ClassicSimilarity], result of:
          0.055053383 = score(doc=3084,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 3084, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3084)
        0.033392806 = product of:
          0.06678561 = sum of:
            0.06678561 = weight(_text_:web in 3084) [ClassicSimilarity], result of:
              0.06678561 = score(doc=3084,freq=10.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.40312994 = fieldWeight in 3084, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3084)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontologies have moved beyond the domains of library science, philosophy, and knowledge representation. They are now the concerns of marketing departments, CEOs, and mainstream business. Research analyst companies such as Forrester Research report on the critical roles of ontologies in support of browsing and search for e-commerce and in support of interoperability for facilitation of knowledge management and configuration. One now sees ontologies used as central controlled vocabularies that are integrated into catalogues, databases, web publications, knowledge management applications, etc. Large ontologies are essential components in many online applications including search (such as Yahoo and Lycos), e-commerce (such as Amazon and eBay), configuration (such as Dell and PC-Order), etc. One also sees ontologies that have long life spans, sometimes in multiple projects (such as UMLS, SIC codes, etc.). Such diverse usage generates many implications for ontology environments. In this paper, we will discuss ontologies and requirements in their current instantiations on the web today. We will describe some desirable properties of ontologies. We will also discuss how both simple and complex ontologies are being and may be used to support varied applications. We will conclude with a discussion of emerging trends in ontologies and their environments and briefly mention our evolving ontology evolution environment.
    Source
    Spinning the Semantic Web: bringing the World Wide Web to its full potential. Eds.: D. Fensel u.a
    Theme
    Semantic Web
  16. Broekstra, J.; Kampman, A.; Harmelen, F. van: Sesame: a generic architecture for storing and querying RDF and RDF schema (2004) 0.06
    0.058964126 = product of:
      0.088446185 = sum of:
        0.055053383 = weight(_text_:wide in 4403) [ClassicSimilarity], result of:
          0.055053383 = score(doc=4403,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 4403, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4403)
        0.033392806 = product of:
          0.06678561 = sum of:
            0.06678561 = weight(_text_:web in 4403) [ClassicSimilarity], result of:
              0.06678561 = score(doc=4403,freq=10.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.40312994 = fieldWeight in 4403, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4403)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The resource description framework (RDF) is a W3C recommendation for the formulation of meta-data on the World Wide Web. RDF Schema (RDFS) extends this standard with the means to specify domain vocabulary and object structures. These techniques will enable the enrichment of the Web with machine-processable semantics, thus giving rise to what has been dubbed the Semantic Web. We have developed Sesame, an architecture for storage and querying of RDF and RDFS information. Sesame allows persistent storage of RDF data and schema information, and provides access methods to that information through export and querying modules. It features ways of caching information and offers support for concurrency control. This chapter is organized as follows: In Section 5.2 we discuss why a query language specifically tailored to RDF and RDFS is needed, over and above existing query languages such as XQuery. In Section 5.3 we look at Sesame's modular architecture in some detail. In Section 5.4 we give an overview of the SAIL API and a brief comparison to other RDF API approaches. Section 5.5 discusses our experiences with Sesame to date, and Section 5.6 looks into possible future developments. Finally, we provide our conclusions in Section 5.7.
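     Sesame itself is a Java framework (its line of development continues today in RDF4J), so the sketch below only mirrors the store/export/query workflow the abstract describes, using Python and rdflib; the data is invented and SPARQL stands in for the RQL query language discussed in the chapter:

       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, RDFS

       EX = Namespace("http://example.org/")          # invented schema and data
       g = Graph()
       g.add((EX.Report, RDFS.subClassOf, EX.Document))
       g.add((EX.report42, RDF.type, EX.Report))
       g.add((EX.report42, RDFS.label, Literal("Quarterly report")))

       # "Export module": write the repository contents to disk ...
       g.serialize(destination="repository.ttl", format="turtle")

       # ... "query module": reload and ask a schema-aware question. Plain SPARQL does
       # not apply RDFS entailment, so the subclass step is spelled out as a property
       # path; an inferencing store (as in Sesame) would derive it automatically.
       g2 = Graph().parse("repository.ttl", format="turtle")
       q = """
           PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
           PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
           SELECT ?doc WHERE { ?doc rdf:type/rdfs:subClassOf* <http://example.org/Document> . }
       """
       for row in g2.query(q):
           print(row.doc)    # -> http://example.org/report42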
    Source
     Towards the semantic Web: ontology-driven knowledge management. Eds.: J. Davies, et al.
    Theme
    Semantic Web
  17. OWL Web Ontology Language Test Cases (2004) 0.05
    0.053959724 = product of:
      0.16187917 = sum of:
        0.16187917 = sum of:
          0.10685697 = weight(_text_:web in 4685) [ClassicSimilarity], result of:
            0.10685697 = score(doc=4685,freq=10.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.6450079 = fieldWeight in 4685, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0625 = fieldNorm(doc=4685)
          0.05502219 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
            0.05502219 = score(doc=4685,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.30952093 = fieldWeight in 4685, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4685)
      0.33333334 = coord(1/3)
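    The nested figures above are Lucene ClassicSimilarity "explain" output, and the entry's score can be recomputed directly from them. The short sketch below is an illustrative recomputation for this entry (doc 4685), using the standard ClassicSimilarity decomposition: per-term weight = queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = sqrt(freq) x idf x fieldNorm, summed over the matching terms and scaled by the coordination factor reported above.
    ```python
    import math

    # Figures copied from the explain tree above (entry 17, doc 4685).
    QUERY_NORM = 0.050763648

    def term_weight(freq, idf, field_norm):
        # ClassicSimilarity: tf = sqrt(freq); queryWeight = idf * queryNorm;
        # fieldWeight = tf * idf * fieldNorm; term weight = queryWeight * fieldWeight.
        tf = math.sqrt(freq)
        return (idf * QUERY_NORM) * (tf * idf * field_norm)

    w_web = term_weight(freq=10.0, idf=3.2635105, field_norm=0.0625)  # ~0.10685697
    w_22  = term_weight(freq=2.0,  idf=3.5018296, field_norm=0.0625)  # ~0.05502219

    score = (w_web + w_22) * (1.0 / 3.0)  # coord(1/3) as reported in the explain tree
    print(score)  # ~0.053959724, the value shown at the top of this entry
    ```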
    
    Abstract
    This document contains and presents test cases for the Web Ontology Language (OWL) approved by the Web Ontology Working Group. Many of the test cases illustrate the correct usage of the Web Ontology Language (OWL), and the formal meaning of its constructs. Other test cases illustrate the resolution of issues considered by the Working Group. Conformance for OWL documents and OWL document checkers is specified.
    Date
    14. 8.2011 13:33:22
    Theme
    Semantic Web
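    To make the notion of a test case concrete: among the approved test types are positive entailment tests, which pair a premise document with a conclusion document that a conformant reasoner must derive from it. The sketch below only imitates that shape; the premise/conclusion pair is invented, and Python's rdflib together with the owlrl rule engine stands in for a full OWL reasoner and for the Working Group's actual test harness.
    ```python
    from rdflib import Graph
    from owlrl import DeductiveClosure, OWLRL_Semantics

    # Invented premise: a subclass axiom plus a class assertion (Turtle syntax).
    premise = Graph().parse(format="turtle", data="""
        @prefix ex:   <http://example.org/> .
        @prefix owl:  <http://www.w3.org/2002/07/owl#> .
        @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
        ex:Dog a owl:Class ; rdfs:subClassOf ex:Animal .
        ex:fido a ex:Dog .
    """)

    # Invented conclusion that should follow from the premise.
    conclusion = Graph().parse(format="turtle", data="""
        @prefix ex: <http://example.org/> .
        ex:fido a ex:Animal .
    """)

    # Expand the premise under OWL 2 RL semantics, then check that every
    # conclusion triple is present -- the shape of a positive entailment test.
    DeductiveClosure(OWLRL_Semantics).expand(premise)
    print("entailment holds:", all(t in premise for t in conclusion))
    ```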
  18. Piscitelli, F.A.: Library linked data models : library data in the Semantic Web (2019) 0.05
    0.053946227 = product of:
      0.08091934 = sum of:
        0.055053383 = weight(_text_:wide in 5478) [ClassicSimilarity], result of:
          0.055053383 = score(doc=5478,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 5478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5478)
        0.025865955 = product of:
          0.05173191 = sum of:
            0.05173191 = weight(_text_:web in 5478) [ClassicSimilarity], result of:
              0.05173191 = score(doc=5478,freq=6.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.3122631 = fieldWeight in 5478, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5478)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This exploratory study examined Linked Data (LD) schemas/ontologies and data models proposed or in use by libraries around the world using MAchine Readable Cataloging (MARC) as a basis for comparison of the scope and extensibility of these potential new standards. The researchers selected 14 libraries from national libraries, academic libraries, government libraries, public libraries, multi-national libraries, and cultural heritage centers currently developing Library Linked Data (LLD) schemas. The choices of models, schemas, and elements used in each library's LD can create interoperability issues for LD services because of substantial differences between schemas and data models evolving via local decisions. The researchers observed that a wide variety of vocabularies and ontologies were used for LLD including common web schemas such as Dublin Core (DC)/DCTerms, Schema.org and Resource Description Framework (RDF), as well as deprecated schemas such as MarcOnt and rdagroup1elements. A sharp divide existed as well between LLD schemas using variations of the Functional Requirements for Bibliographic Records (FRBR) data model and those with different data models or even with no listed data model. Libraries worldwide are not using the same elements or even the same ontologies, schemas and data models to describe the same materials using the same general concepts.
    Theme
    Semantic Web
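    The interoperability problem described above is easy to reproduce in miniature: as soon as two libraries describe the same resource with different vocabularies, their graphs share an identifier but no predicates. The sketch below uses Python's rdflib; the book URI and the pairing of DCTerms with Schema.org are invented for the illustration, not taken from any of the fourteen libraries in the study.
    ```python
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    SCHEMA = Namespace("https://schema.org/")
    book = URIRef("http://example.org/book/moby-dick")  # invented identifier

    # Library A describes the book with Dublin Core Terms.
    a = Graph()
    a.add((book, DCTERMS.title, Literal("Moby Dick")))
    a.add((book, DCTERMS.creator, Literal("Herman Melville")))

    # Library B describes the same book with Schema.org.
    b = Graph()
    b.add((book, RDF.type, SCHEMA.Book))
    b.add((book, SCHEMA.name, Literal("Moby Dick")))
    b.add((book, SCHEMA.author, Literal("Herman Melville")))

    # Same resource, same information, zero shared triples:
    # an aggregator has to supply an explicit mapping between the vocabularies.
    print(len(set(a) & set(b)))  # 0
    ```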
  19. Legg, C.: Ontologies on the Semantic Web (2007) 0.05
    0.051889233 = product of:
      0.077833846 = sum of:
        0.044042703 = weight(_text_:wide in 1979) [ClassicSimilarity], result of:
          0.044042703 = score(doc=1979,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.1958137 = fieldWeight in 1979, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1979)
        0.033791143 = product of:
          0.06758229 = sum of:
            0.06758229 = weight(_text_:web in 1979) [ClassicSimilarity], result of:
              0.06758229 = score(doc=1979,freq=16.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.4079388 = fieldWeight in 1979, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1979)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    As an informational technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The "Semantic Web" is touted by its developers as equally revolutionary, although it has not yet achieved anything like the Web's exponential uptake. It seeks to transcend a current limitation of the Web - that it largely requires indexing to be accomplished merely on specific character strings. Thus, a person searching for information about "turkey" (the bird) receives from current search engines many irrelevant pages about "Turkey" (the country) and nothing about the Spanish "pavo" even if he or she is a Spanish-speaker able to understand such pages. The Semantic Web vision is to develop technology to facilitate retrieval of information via meanings, not just spellings. For this to be possible, most commentators believe, Semantic Web applications will have to draw on some kind of shared, structured, machine-readable conceptual scheme. Thus, there has been a convergence between the Semantic Web research community and an older tradition with roots in classical Artificial Intelligence (AI) research (sometimes referred to as "knowledge representation") whose goal is to develop a formal ontology. A formal ontology is a machine-readable theory of the most fundamental concepts or "categories" required in order to understand information pertaining to any knowledge domain. A review of the attempts that have been made to realize this goal provides an opportunity to reflect in interestingly concrete ways on various research questions such as the following: - How explicit a machine-understandable theory of meaning is it possible or practical to construct? - How universal a machine-understandable theory of meaning is it possible or practical to construct? - How much (and what kind of) inference support is required to realize a machine-understandable theory of meaning? - What is it for a theory of meaning to be machine-understandable anyway?
    Theme
    Semantic Web
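    The turkey/Turkey example above can be restated in a few lines: once resources are typed against an ontology, retrieval can ask for the bird by class rather than by spelling, and a Spanish label is found as readily as an English one. The sketch below uses Python's rdflib with invented URIs and labels; it illustrates the idea only, not any particular Semantic Web application.
    ```python
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/")  # invented ontology namespace
    g = Graph()

    # Two resources that share a spelling but not a meaning.
    g.add((EX.turkey_bird, RDF.type, EX.Bird))
    g.add((EX.turkey_bird, RDFS.label, Literal("turkey", lang="en")))
    g.add((EX.turkey_bird, RDFS.label, Literal("pavo", lang="es")))

    g.add((EX.Turkey_country, RDF.type, EX.Country))
    g.add((EX.Turkey_country, RDFS.label, Literal("Turkey", lang="en")))

    # Retrieval by meaning: ask for birds, whatever they happen to be called.
    q = """
        PREFIX ex:   <http://example.org/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?r ?label WHERE { ?r a ex:Bird ; rdfs:label ?label . }
    """
    for r, label in g.query(q):
        print(r, label)  # only the bird, with both its English and Spanish labels
    ```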
  20. Kiryakov, A.; Popov, B.; Terziev, I.; Manov, D.; Ognyanoff, D.: Semantic annotation, indexing, and retrieval (2004) 0.05
    0.051889233 = product of:
      0.077833846 = sum of:
        0.044042703 = weight(_text_:wide in 700) [ClassicSimilarity], result of:
          0.044042703 = score(doc=700,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.1958137 = fieldWeight in 700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=700)
        0.033791143 = product of:
          0.06758229 = sum of:
            0.06758229 = weight(_text_:web in 700) [ClassicSimilarity], result of:
              0.06758229 = score(doc=700,freq=16.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.4079388 = fieldWeight in 700, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=700)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The realization of the Semantic Web depends on the availability of a critical mass of metadata for the web content, associated with the respective formal knowledge about the world. We claim that the Semantic Web, at its current stage of development, is in critical need of metadata generation and usage schemata that are specific, well-defined and easy to understand. This paper introduces our vision for a holistic architecture for semantic annotation, indexing, and retrieval of documents with regard to extensive semantic repositories. A system (called KIM) implementing this concept is presented in brief and used for evaluation and demonstration. A particular schema for semantic annotation with respect to real-world entities is proposed. The underlying philosophy is that a practical semantic annotation is impossible without some particular knowledge modelling commitments. Our understanding is that a system for such semantic annotation should be based upon a simple model of real-world entity classes, complemented with extensive instance knowledge. To ensure the efficiency, ease of sharing, and reusability of the metadata, we introduce an upper-level ontology (of about 250 classes and 100 properties), which starts with some basic philosophical distinctions and then goes down to the most common entity types (people, companies, cities, etc.). Thus it encodes many of the domain-independent commonsense concepts and allows straightforward domain-specific extensions. On the basis of the ontology, a large-scale knowledge base of entity descriptions is bootstrapped, and further extended and maintained. Currently, such knowledge bases usually scale to between 10^5 and 10^6 descriptions. Finally, this paper presents a semantically enhanced information extraction system, which provides automatic semantic annotation with references to classes in the ontology and to instances. The system has been running over a continuously growing document collection (currently about 0.5 million news articles), so it has been under constant testing and evaluation for some time now. On the basis of these semantic annotations, we perform semantics-based indexing and retrieval where users can mix traditional information retrieval (IR) queries and ontology-based ones. We argue that such large-scale, fully automatic methods are essential for the transformation of the current largely textual web into a Semantic Web.
    Source
    Web semantics: science, services and agents on the World Wide Web. 2(2004) no.1, S.49-79
    Theme
    Semantic Web
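    The annotation-and-retrieval pipeline described above can be sketched without KIM itself. In the toy version below, annotations tie character spans in documents to a class from a small entity ontology and to an instance in a knowledge base, and retrieval mixes a plain keyword condition with an ontology-based one. The documents, class names and instance identifiers are all invented for the illustration.
    ```python
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        doc_id: str
        span: tuple        # (start, end) character offsets of the mention
        entity_class: str  # class from an upper-level ontology, e.g. "City"
        instance: str      # identifier of the entity in the knowledge base

    docs = {
        "d1": "The mayor of Sofia opened the new library.",
        "d2": "Sofia Coppola directed the film.",
    }

    # Output of a (hypothetical) information-extraction step: each mention is
    # linked to an ontology class and a knowledge-base instance, not just a string.
    annotations = [
        Annotation("d1", (13, 18), "City",   "kb:Sofia_city"),
        Annotation("d2", (0, 13),  "Person", "kb:Sofia_Coppola"),
    ]

    def retrieve(keyword: str, entity_class: str):
        """Mix a traditional IR condition (keyword) with an ontology-based one."""
        return [
            (ann.doc_id, ann.instance)
            for ann in annotations
            if ann.entity_class == entity_class
            and keyword.lower() in docs[ann.doc_id].lower()
        ]

    # The string "sofia" occurs in both documents; asking for the City
    # returns only the document about the place.
    print(retrieve("sofia", "City"))  # [('d1', 'kb:Sofia_city')]
    ```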
