Search (62 results, page 1 of 4)

  • theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.06
    0.061810058 = 0.4 (coord 2/5) * (0.038631286 + 0.115893856), where
      0.038631286 = 0.33333334 (coord 1/3) * weight(_text_:3a in 701) and
      0.115893856 = weight(_text_:2f in 701) [ClassicSimilarity]; each term weight is
        queryWeight * fieldWeight, with
        queryWeight = 0.3093153 = 8.478011 (idf, docFreq=24, maxDocs=44218) * 0.036484417 (queryNorm)
        fieldWeight = 0.3746787 = 1.4142135 (tf, freq=2.0) * 8.478011 (idf) * 0.03125 (fieldNorm, doc=701)
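    The breakdown above is Lucene's ClassicSimilarity explain output; the relevance scores shown beside every result on this page follow the same tf-idf arithmetic. A minimal Python sketch (assuming ClassicSimilarity's standard formulas, with the constants copied from the breakdown above) reproduces it:

      import math

      # Constants from the explain output for term "2f" in doc 701.
      freq, idf, query_norm, field_norm = 2.0, 8.478011, 0.036484417, 0.03125

      tf = math.sqrt(freq)                  # 1.4142135
      query_weight = idf * query_norm       # 0.3093153
      field_weight = tf * idf * field_norm  # 0.3746787
      term_score = query_weight * field_weight
      print(term_score)                     # ~0.115893856 (Lucene rounds in 32-bit floats)

      # Document score: term weights summed, scaled by coord (matched/total query clauses).
      doc_score = 0.4 * (0.33333334 * term_score + term_score)
      print(doc_score)                      # ~0.061810058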
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  2. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.03
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary covers the concepts of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications is provided, with a comprehensive analysis. The second part of the book introduces multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high semantic level (e.g., objects, events, tracks, etc.) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences) and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors should add an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, surveying the state of the art of the field and thereby introducing it to the reader.
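    Structure identification of the kind mentioned above can be made concrete with a toy example. The following Python sketch (invented for illustration, not a method from the book) flags a shot boundary when consecutive frame histograms differ by more than a threshold:

      # Toy per-frame color histograms; a real system would compute these from video frames.
      frames = [[10, 20, 30], [11, 19, 31], [40, 5, 2]]
      THRESHOLD = 30

      def histogram_diff(h1, h2):
          return sum(abs(a - b) for a, b in zip(h1, h2))

      boundaries = [i for i in range(1, len(frames))
                    if histogram_diff(frames[i - 1], frames[i]) > THRESHOLD]
      print(boundaries)  # [2] - a cut detected before the third frame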
  3. Antoniou, G.; Harmelen, F. van: A semantic Web primer (2004) 0.02
    
    Abstract
    The development of the Semantic Web, with machine-readable content, has the potential to revolutionise the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this emerging field, describing its key ideas, languages and technologies. Suitable for use as a textbook or for self-study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own. It includes exercises, project descriptions and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL and rules) and technologies (explicit metadata, ontologies, logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processable semantics; OWL, the W3C-approved standard for a Web ontology language more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.
    Footnote
    Review in: JASIST 57(2006) no.8, pp.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of these machine-processable metadata. To fulfill this, it provides some metalanguages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts will remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter starts with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes a brief description of the underpinning technologies, including metadata, ontologies, logic, and agents. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The chapter on ontology engineering describes the development of ontology-based systems for the Web using manual and semiautomatic methods. Ontology is a concept similar to taxonomy. As stated in the introduction, ontology engineering deals with some of the methodological issues that arise when building ontologies, in particular, constructing ontologies manually, reusing existing ontologies, and using semiautomatic methods. A medium-scale project is included at the end of the chapter. Overall the book is a nice introduction to the key components of the Semantic Web. The reading is quite pleasant, in part due to the concise layout that allows just enough content per page to facilitate readers' comprehension. Furthermore, the book provides a large number of examples, code snippets, exercises, and annotated online materials. Thus, it is very suitable for use as a textbook for undergraduates and lower-level graduate students, as the authors say in the preface. However, I believe that not only students but also professionals in both academia and industry will benefit from the book. The authors also built an accompanying Web site for the book at http://www.semanticwebprimer.org. On the main page, there are eight tabs, one for each of the eight chapters. For each tab, the following sections are included: overview, example, presentations, problems and quizzes, errata, and links. These contents will greatly facilitate readers: for example, readers can open the listed links to further their reading. The fact that the errata sections are empty also speaks to the quality of the book."
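    To give a flavour of the data model at the heart of the primer, here is a minimal sketch in Python (using the rdflib library; the example resources are invented, not taken from the book):

      from rdflib import Graph, Literal, Namespace, RDF, RDFS

      EX = Namespace("http://example.org/")  # hypothetical namespace
      g = Graph()
      g.add((EX.primer, RDF.type, EX.Book))              # instance-level statement
      g.add((EX.Book, RDFS.subClassOf, EX.Publication))  # schema-level statement
      g.add((EX.primer, RDFS.label, Literal("A Semantic Web Primer")))

      # An RDFS reasoner could now infer that EX.primer is also an EX.Publication.
      print(g.serialize(format="turtle"))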
  4. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.01
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main-classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is a result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in granularity between the two original schemes and their representation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
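    The sample SKOS entries mentioned above might look roughly like the following Python sketch (using rdflib; the class notation, labels, and URIs are invented for illustration, not actual CCT data):

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import SKOS

      CCT = Namespace("http://example.org/cct/")  # hypothetical namespace
      g = Graph()
      cls = CCT["TP311"]  # invented class notation
      g.add((cls, SKOS.prefLabel, Literal("Database systems", lang="en")))
      g.add((cls, SKOS.altLabel, Literal("Databases", lang="en")))  # entry term
      g.add((cls, SKOS.broader, CCT["TP31"]))                       # hierarchy
      g.add((cls, SKOS.related, CCT["TP393"]))                      # associative link
      print(g.serialize(format="turtle"))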
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  5. McGuinness, D.L.: Ontologies come of age (2003) 0.01
    
    Abstract
    Ontologies have moved beyond the domains of library science, philosophy, and knowledge representation. They are now the concerns of marketing departments, CEOs, and mainstream business. Research analyst companies such as Forrester Research report on the critical roles of ontologies in support of browsing and search for e-commerce and in support of interoperability for facilitation of knowledge management and configuration. One now sees ontologies used as central controlled vocabularies that are integrated into catalogues, databases, web publications, knowledge management applications, etc. Large ontologies are essential components in many online applications including search (such as Yahoo and Lycos), e-commerce (such as Amazon and eBay), configuration (such as Dell and PC-Order), etc. One also sees ontologies that have long life spans, sometimes in multiple projects (such as UMLS, SIC codes, etc.). Such diverse usage generates many implications for ontology environments. In this paper, we will discuss ontologies and requirements in their current instantiations on the web today. We will describe some desirable properties of ontologies. We will also discuss how both simple and complex ontologies are being and may be used to support varied applications. We will conclude with a discussion of emerging trends in ontologies and their environments and briefly mention our evolving ontology evolution environment.
  6. Metadata and semantics research : 7th Research Conference, MTSR 2013 Thessaloniki, Greece, November 19-22, 2013. Proceedings (2013) 0.01
    
    Abstract
    The MTSR 2013 program and the contents of these proceedings show a rich diversity of research and practices, drawing on problems from metadata and semantically focused tools and technologies, linked data, cross-language semantics, ontologies, metadata models, and semantic system and metadata standards. The general session of the conference included 18 papers covering a broad spectrum of topics, reflecting the interdisciplinary nature of the metadata field, and was divided into three main themes: platforms for research data sets, system architecture and data management; metadata and ontology validation, evaluation, mapping and interoperability; and content management. Metadata as a research topic is maturing, and the conference also supported the following five tracks: Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures; Metadata and Semantics for Cultural Collections and Applications; Metadata and Semantics for Agriculture, Food and Environment; Big Data and Digital Libraries in Health, Science and Technology; and European and National Projects, and Project Networking. Each track had a rich selection of papers, giving broader diversity to MTSR, and enabling deeper exploration of significant topics.
    Date
    17.12.2013 12:51:22
  7. Severiens, T.; Thiemann, C.: RDF database for PhysNet and similar portals (2006) 0.01
    
    Abstract
    PhysNet (www.physnet.net) is a portal for physics that has been running since 1995 and is under continuous development; today it uses an OWL Lite ontology and a MySQL database for storing triples with the facts, such as department information, postal addresses, GPS coordinates, URLs of publication repositories, etc. The article focuses on the structure and the development of the underlying ontology; it also gives a detailed overview of an online web-based editorial tool used to maintain the facts database.
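    The triples-in-a-relational-database idea can be sketched as follows (a minimal illustration using Python's built-in sqlite3 in place of MySQL; the schema and sample facts are invented, not PhysNet's actual ones):

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")
      facts = [
          ("ex:dept42", "ex:postalAddress", "Example Street 1, Sample City"),
          ("ex:dept42", "ex:gpsCoordinates", "53.147, 8.181"),
          ("ex:dept42", "ex:publicationRepository", "http://example.org/oai"),
      ]
      con.executemany("INSERT INTO triples VALUES (?, ?, ?)", facts)
      for (addr,) in con.execute(
              "SELECT object FROM triples WHERE subject = ? AND predicate = ?",
              ("ex:dept42", "ex:postalAddress")):
          print(addr)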
  8. Breslin, J.G.: Social semantic information spaces (2009) 0.01
    
    Abstract
    The structural and syntactic web put in place in the early 90s is still much the same as what we use today: resources (web pages, files, etc.) connected by untyped hyperlinks. By untyped, we mean that there is no easy way for a computer to figure out what a link between two pages means - for example, on the W3C website, there are hundreds of links to the various organisations that are registered members of the association, but there is nothing explicitly saying that the link is to an organisation that is a "member of" the W3C or what type of organisation is represented by the link. On John's work page, he links to many papers he has written, but it does not explicitly say that he is the author of those papers or that he wrote such-and-such when he was working at a particular university. In fact, the Web was envisaged to be much more, as one can see from the image in Fig. 1, which is taken from Tim Berners-Lee's original outline for the Web in 1989, entitled "Information Management: A Proposal". In this, all the resources are connected by links describing the type of relationships, e.g. "wrote", "describe", "refers to", etc. This is a precursor to the Semantic Web, which we will come back to later.
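    The contrast between untyped and typed links can be made concrete with a small sketch (Python with rdflib; the organisation URI is invented):

      # Untyped hyperlink as found in HTML: the relationship is opaque to machines.
      html_link = '<a href="http://www.w3.org/">W3C</a>'

      # Typed link as in RDF: the "member of" relationship is explicit.
      from rdflib import Graph, Namespace, URIRef
      EX = Namespace("http://example.org/")
      g = Graph()
      g.add((EX.AcmeCorp, EX.memberOf, URIRef("http://www.w3.org/")))
      print(g.serialize(format="turtle"))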
  9. Hitzler, P.; Krötzsch, M.; Rudolph, S.: Foundations of Semantic Web technologies (2010) 0.01
    
    Abstract
    This text introduces the standardized knowledge representation languages for modeling ontologies operating at the core of the semantic web. It covers RDF schema, Web Ontology Language (OWL), rules, query languages, the OWL 2 revision, and the forthcoming Rule Interchange Format (RIF). A 2010 CHOICE Outstanding Academic Title. The nine chapters of the book guide the reader through the major foundational languages for the semantic Web and highlight the formal semantics. ... The book has very interesting supporting material and exercises, is oriented to W3C standards, and provides the necessary foundations for the semantic Web. It will be easy to follow by the computer scientist who already has a basic background on semantic Web issues; it will also be helpful for both self-study and teaching purposes. I recommend this book primarily as a complementary textbook for a graduate or undergraduate course in a computer science or a Web science academic program. --Computing Reviews, February 2010. This book is unique in several respects. It contains an in-depth treatment of all the major foundational languages for the Semantic Web and provides a full treatment of the underlying formal semantics, which is central to the Semantic Web effort. It is also the very first textbook that addresses the forthcoming W3C recommended standards OWL 2 and RIF. Furthermore, the covered topics and underlying concepts are easily accessible for the reader due to a clear separation of syntax and semantics. ... I am confident this book will be well received and play an important role in training a larger number of students who will seek to become proficient in this growing discipline.
  10. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a core of semantic knowledge unifying WordNet and Wikipedia (2007) 0.01
    
    Abstract
    We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
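    The flavour of such a knowledge base can be sketched with a few toy facts (Python; the triples are invented in the spirit of the paper's hasWonPrize example, not actual YAGO content):

      # YAGO-style facts: (entity, relation, entity/class) triples.
      facts = [
          ("Albert_Einstein", "isA", "physicist"),          # from the Is-A hierarchy
          ("physicist", "subClassOf", "scientist"),
          ("Albert_Einstein", "hasWonPrize", "Nobel_Prize_in_Physics"),  # non-taxonomic
      ]

      def query(relation):
          """Return all (subject, object) pairs connected by the given relation."""
          return [(s, o) for s, r, o in facts if r == relation]

      print(query("hasWonPrize"))  # [('Albert_Einstein', 'Nobel_Prize_in_Physics')]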
  11. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.01
    
    Date
    22. 9.2007 15:41:14
  12. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.01
    
    Date
    22. 9.2007 15:41:14
  13. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.01
    
    Date
    22. 9.2007 15:41:14
  14. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.01
    
    Abstract
    A new model is proposed to retrieve information by automatically building a semantic metatext structure for texts, which allows searching for and extracting discourse and semantic information according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built up from two engines: the first one, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation, related to discourse and semantic maps, which are specifications of general linguistic ontologies founded on the Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the previously produced semantic annotated texts to create a semantic inverted index, which is able to retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are provided as supplementary information.
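    The semantic inverted index described above can be sketched in a few lines of Python (the category names follow the abstract; the documents and annotations are invented):

      from collections import defaultdict

      # Output of a (hypothetical) annotation pass: document -> semantic categories found.
      annotations = {
          "doc1": ["definition", "causality"],
          "doc2": ["quotation"],
          "doc3": ["definition"],
      }

      # Invert it: semantic category -> documents, so category queries become lookups.
      index = defaultdict(set)
      for doc, categories in annotations.items():
          for category in categories:
              index[category].add(doc)

      print(sorted(index["definition"]))  # ['doc1', 'doc3']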
  15. Blanco, L.; Bronzi, M.; Crescenzi, V.; Merialdo, P.; Papotti, P.: Flint: from Web pages to probabilistic semantic data (2012) 0.01
    
    Abstract
    The Web is a surprisingly extensive source of information: it offers a huge number of sites containing data about a disparate range of topics. Although Web pages are built for human consumption, not for automatic processing of the data, we observe that an increasing number of Web sites deliver pages containing structured information about recognizable concepts, relevant to specific application domains, such as movies, finance, sport, products, etc. The development of scalable techniques to discover, extract, and integrate data from fairly structured large corpora available on the Web is a challenging issue, because to face the Web scale, these activities should be accomplished automatically by domain-independent techniques. To cope with the complexity and the heterogeneity of Web data, state-of-the-art approaches focus on information organized according to specific patterns that frequently occur on the Web. Meaningful examples are WebTables, which focuses on data published in HTML tables, and information extraction systems, such as TextRunner, which exploits lexical-syntactic patterns. As noted by Cafarella et al., even if only a small fraction of the Web is organized according to these patterns, due to the Web scale, the amount of data involved is impressive. In this chapter, we focus on methods and techniques to wring value out of the data delivered by large data-intensive Web sites.
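    The WebTables idea mentioned above, in miniature: treat an HTML table's header row as a schema and its body rows as records (a deliberately naive Python sketch with invented data; real systems need a proper HTML parser):

      import re

      html = ("<table><tr><th>Movie</th><th>Year</th></tr>"
              "<tr><td>Metropolis</td><td>1927</td></tr></table>")

      # Naive cell extraction for a two-column table.
      cells = re.findall(r"<t[hd]>(.*?)</t[hd]>", html)
      header, body = cells[:2], cells[2:]
      records = [dict(zip(header, body[i:i + 2])) for i in range(0, len(body), 2)]
      print(records)  # [{'Movie': 'Metropolis', 'Year': '1927'}]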
  16. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005) 0.01
    
    Content
    "Textual content-based search engines for the web have a number of limitations. Firstly, many web resources have little or no textual content (images, audio or video streams etc.) Secondly, precision is low where natural language terms have overloaded meaning (e.g. 'bank', 'watch', 'chip' etc.) Thirdly, recall is incomplete where the search does not take account of synonyms or quasi-synonyms. Fourthly, there is no basis for assisting a user in modifying (expanding, refining, translating) a search based on the meaning of the original search. Fifthly, there is no basis for searching across natural languages, or framing search queries in terms of symbolic languages. The Semantic Web is a framework for creating, managing, publishing and searching semantically rich metadata for web resources. Annotating web resources with precise and meaningful statements about conceptual aspects of their content provides a basis for overcoming all of the limitations of textual content-based search engines listed above. Creating this type of metadata requires that metadata generators are able to refer to shared repositories of meaning: 'vocabularies' of concepts that are common to a community, and describe the domain of interest for that community.
  17. Graves, M.; Constabaris, A.; Brickley, D.: FOAF: connecting people on the Semantic Web (2006) 0.01
    
    Abstract
    This article introduces the Friend of a Friend (FOAF) vocabulary specification as an example of a Semantic Web technology. A real world case study is presented in which FOAF is used to solve some specific problems of identity management. The main goal is to provide some basic theory behind the Semantic Web and then attempt to ground that theory in a practical solution.
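    A minimal FOAF description of the kind the article discusses might look like this (Python with rdflib; the people and URIs are invented):

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import FOAF, RDF

      g = Graph()
      alice = URIRef("http://example.org/people/alice#me")  # invented identifier
      g.add((alice, RDF.type, FOAF.Person))
      g.add((alice, FOAF.name, Literal("Alice Example")))
      g.add((alice, FOAF.mbox, URIRef("mailto:alice@example.org")))
      g.add((alice, FOAF.knows, URIRef("http://example.org/people/bob#me")))
      print(g.serialize(format="turtle"))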
  18. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.00
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  19. Hebeler, J.; Fisher, M.; Blace, R.; Perez-Lopez, A.: Semantic Web programming (2009) 0.00
    
    Abstract
    The next major advance in the Web, Web 3.0, will be built on semantic Web technologies, which will allow data to be shared and reused across application, enterprise, and community boundaries. Written by a team of highly experienced Web developers, this book examines how this powerful new technology can unify and fully leverage the ever-growing data, information, and services that are available on the Internet. Helpful examples demonstrate how to use the semantic Web to solve practical, real-world problems while you take a look at the set of design principles, collaborative working groups, and technologies that form the semantic Web. The companion Web site features full code, as well as a reference section, a FAQ section, a discussion forum, and a semantic blog.
  20. Isaac, A.; Schlobach, S.; Matthezing, H.; Zinn, C.: Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies (2008) 0.00
    
    Abstract
    Purpose - To show how semantic web techniques can help address semantic interoperability issues in the broad cultural heritage domain, allowing users an integrated and seamless access to heterogeneous collections. Design/methodology/approach - This paper presents the heterogeneity problems to be solved. It introduces semantic web techniques that can help in solving them, focusing on the representation of controlled vocabularies and their semantic alignment. It gives pointers to some previous projects and experiments that have tried to address the problems discussed. Findings - Semantic web research provides practical technical and methodological approaches to tackle the different issues. Two contributions of interest are the simple knowledge organisation system model and automatic vocabulary alignment methods and tools. These contributions were demonstrated to be usable for enabling semantic search and navigation across collections. Research limitations/implications - The research aims at designing different representation and alignment methods for solving interoperability problems in the context of controlled subject vocabularies. Given the variety and technical richness of current research in the semantic web field, it is impossible to provide an in-depth account or an exhaustive list of references. Every aspect of the paper is, however, given one or several pointers for further reading. Originality/value - This article provides a general and practical introduction to relevant semantic web techniques. It is of specific value for the practitioners in the cultural heritage and digital library domains who are interested in applying these methods in practice.
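    One of the automatic vocabulary alignment methods mentioned in the findings, lexical matching, can be sketched very simply (Python; the vocabularies are invented):

      # Align concepts across two vocabularies when their labels coincide
      # after normalization - the simplest lexical alignment technique.
      vocab_a = {"a1": "Church buildings", "a2": "Stained glass"}
      vocab_b = {"b1": "church buildings", "b2": "Altars"}

      def norm(label: str) -> str:
          return label.strip().lower()

      matches = [(ka, kb) for ka, la in vocab_a.items()
                 for kb, lb in vocab_b.items() if norm(la) == norm(lb)]
      print(matches)  # [('a1', 'b1')]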

Languages

  • e 51
  • d 10

Types

  • a 35
  • el 14
  • m 14
  • s 5
  • x 2
  • n 1