Search (316 results, page 1 of 16)

  • theme_ss:"Semantic Web"
  1. Resource Description Framework (RDF) : Concepts and Abstract Syntax (2004) 0.06
    0.05623451 = product of:
      0.09372418 = sum of:
        0.050544675 = weight(_text_:g in 3067) [ClassicSimilarity], result of:
          0.050544675 = score(doc=3067,freq=2.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.331982 = fieldWeight in 3067, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0625 = fieldNorm(doc=3067)
        0.038415954 = weight(_text_:u in 3067) [ClassicSimilarity], result of:
          0.038415954 = score(doc=3067,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.28942272 = fieldWeight in 3067, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=3067)
        0.0047635464 = weight(_text_:a in 3067) [ClassicSimilarity], result of:
          0.0047635464 = score(doc=3067,freq=2.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.10191591 = fieldWeight in 3067, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=3067)
      0.6 = coord(3/5)
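    The indented breakdown above is Lucene's "explain" output for its classic TF-IDF similarity. As a rough, hedged sketch of how the listed factors combine (the variable names below are ours, and coord and queryNorm are simply copied from the explanation rather than derived from the full query), the score of this first hit can be recomputed as follows:

      from math import sqrt

      def term_weight(freq, idf, query_norm, field_norm):
          """One weight(_text_:term) node: queryWeight * fieldWeight."""
          query_weight = idf * query_norm               # idf(t) * queryNorm
          field_weight = sqrt(freq) * idf * field_norm  # tf = sqrt(freq), times idf and fieldNorm
          return query_weight * field_weight

      # Values copied from the explanation of hit 1 (doc 3067).
      query_norm, field_norm = 0.040536046, 0.0625
      terms = [(2.0, 3.7559474),   # _text_:g
               (2.0, 3.2744443),   # _text_:u
               (2.0, 1.153047)]    # _text_:a
      total = sum(term_weight(freq, idf, query_norm, field_norm) for freq, idf in terms)
      print(total)            # ~0.09372418, the "sum of" line
      print(total * 3 / 5)    # ~0.05623451 after coord(3/5)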
    
    Abstract
    The Resource Description Framework (RDF) is a framework for representing information in the Web. RDF Concepts and Abstract Syntax defines an abstract syntax on which RDF is based, and which serves to link its concrete syntax to its formal semantics. It also includes discussion of design goals, key concepts, datatyping, character normalization and handling of URI references.
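    As a minimal, hedged illustration of the triple-based data model this abstract refers to (the example resource names are invented, and the widely used rdflib Python library is assumed to be available), an RDF statement can be built and then written out in a concrete syntax such as Turtle:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF

      EX = Namespace("http://example.org/")  # hypothetical namespace for this sketch

      g = Graph()
      # One RDF triple: subject, predicate, object.
      g.add((EX.rdfConcepts, RDF.type, EX.Specification))
      g.add((EX.rdfConcepts, EX.title, Literal("RDF Concepts and Abstract Syntax")))

      # The same abstract triples rendered in a concrete syntax (Turtle).
      print(g.serialize(format="turtle"))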
    Editor
    Klyne, G. u. J.J. Carroll
  2. Bechhofer, S.; Harmelen, F. van; Hendler, J.; Horrocks, I.; McGuinness, D.L.; Patel-Schneider, P.F.; Stein, L.A.: OWL Web Ontology Language Reference (2004) 0.05
    0.051706057 = product of:
      0.08617676 = sum of:
        0.044226594 = weight(_text_:g in 4684) [ClassicSimilarity], result of:
          0.044226594 = score(doc=4684,freq=2.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.29048425 = fieldWeight in 4684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4684)
        0.03361396 = weight(_text_:u in 4684) [ClassicSimilarity], result of:
          0.03361396 = score(doc=4684,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.25324488 = fieldWeight in 4684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4684)
        0.008336206 = weight(_text_:a in 4684) [ClassicSimilarity], result of:
          0.008336206 = score(doc=4684,freq=8.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.17835285 = fieldWeight in 4684, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4684)
      0.6 = coord(3/5)
    
    Abstract
    The Web Ontology Language OWL is a semantic markup language for publishing and sharing ontologies on the World Wide Web. OWL is developed as a vocabulary extension of RDF (the Resource Description Framework) and is derived from the DAML+OIL Web Ontology Language. This document contains a structured informal description of the full set of OWL language constructs and is meant to serve as a reference for OWL users who want to construct OWL ontologies.
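    A small, hedged sketch of the kind of constructs the reference catalogues (the class names and namespace are invented; the rdflib Python library with its OWL/RDFS vocabularies is assumed): an OWL ontology fragment is itself just a set of RDF triples, reflecting OWL's role as a vocabulary extension of RDF.

      from rdflib import Graph, Namespace
      from rdflib.namespace import OWL, RDF, RDFS

      EX = Namespace("http://example.org/onto#")  # hypothetical ontology namespace

      g = Graph()
      # Two OWL classes and a subclass axiom.
      g.add((EX.Publication, RDF.type, OWL.Class))
      g.add((EX.Specification, RDF.type, OWL.Class))
      g.add((EX.Specification, RDFS.subClassOf, EX.Publication))
      # An object property with a domain, one of the constructs listed in the reference.
      g.add((EX.editedBy, RDF.type, OWL.ObjectProperty))
      g.add((EX.editedBy, RDFS.domain, EX.Publication))

      print(g.serialize(format="turtle"))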
    Editor
    Dean, M. u. G. Schreiber
  3. Zenz, G.; Zhou, X.; Minack, E.; Siberski, W.; Nejdl, W.: Interactive query construction for keyword search on the Semantic Web (2012) 0.04
    0.038086426 = product of:
      0.063477375 = sum of:
        0.031590424 = weight(_text_:g in 430) [ClassicSimilarity], result of:
          0.031590424 = score(doc=430,freq=2.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.20748875 = fieldWeight in 430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0390625 = fieldNorm(doc=430)
        0.024009973 = weight(_text_:u in 430) [ClassicSimilarity], result of:
          0.024009973 = score(doc=430,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.1808892 = fieldWeight in 430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0390625 = fieldNorm(doc=430)
        0.0078769745 = weight(_text_:a in 430) [ClassicSimilarity], result of:
          0.0078769745 = score(doc=430,freq=14.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.1685276 = fieldWeight in 430, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=430)
      0.6 = coord(3/5)
    
    Abstract
    With the advance of the semantic Web, increasing amounts of data are available in a structured and machine-understandable form. This opens opportunities for users to employ semantic queries instead of simple keyword-based ones to accurately express the information need. However, constructing semantic queries is a demanding task for human users [11]. To compose a valid semantic query, a user has to (1) master a query language (e.g., SPARQL) and (2) acquire sufficient knowledge about the ontology or the schema of the data source. While there are systems which support this task with visual tools [21, 26] or natural language interfaces [3, 13, 14, 18], the process of query construction can still be complex and time consuming. According to [24], users prefer keyword search, and struggle with the construction of semantic queries although being supported with a natural language interface. Several keyword search approaches have already been proposed to ease information seeking on semantic data [16, 32, 35] or databases [1, 31]. However, keyword queries lack the expressivity to precisely describe the user's intent. As a result, ranking can at best put query intentions of the majority on top, making it impossible to take the intentions of all users into consideration.
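    To make the gap described above concrete, here is a hedged sketch contrasting a keyword query with the structured equivalent the authors say users struggle to write; the toy vocabulary and data are invented, and rdflib's built-in SPARQL support is assumed:

      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF

      EX = Namespace("http://example.org/")  # hypothetical vocabulary for this sketch

      g = Graph()
      g.add((EX.yago, RDF.type, EX.Ontology))
      g.add((EX.yago, EX.derivedFrom, EX.Wikipedia))

      keyword_query = "ontology wikipedia"  # what most users prefer to type

      # The semantic equivalent requires knowing both SPARQL and the schema.
      sparql = """
          PREFIX ex: <http://example.org/>
          SELECT ?o WHERE { ?o a ex:Ontology ; ex:derivedFrom ex:Wikipedia . }
      """
      for row in g.query(sparql):
          print(row.o)  # http://example.org/yago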
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  4. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.04
    0.03782757 = product of:
      0.06304595 = sum of:
        0.03361396 = weight(_text_:u in 1026) [ClassicSimilarity], result of:
          0.03361396 = score(doc=1026,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.25324488 = fieldWeight in 1026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1026)
        0.010209725 = weight(_text_:a in 1026) [ClassicSimilarity], result of:
          0.010209725 = score(doc=1026,freq=12.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.21843673 = fieldWeight in 1026, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1026)
        0.019222261 = product of:
          0.038444523 = sum of:
            0.038444523 = weight(_text_:22 in 1026) [ClassicSimilarity], result of:
              0.038444523 = score(doc=1026,freq=2.0), product of:
                0.14195032 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040536046 = queryNorm
                0.2708308 = fieldWeight in 1026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1026)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Type
    a
  5. Blumauer, A.; Pellegrini, T.: Semantic Web Revisited : Eine kurze Einführung in das Social Semantic Web (2009) 0.04
    0.036033355 = product of:
      0.06005559 = sum of:
        0.03361396 = weight(_text_:u in 4855) [ClassicSimilarity], result of:
          0.03361396 = score(doc=4855,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.25324488 = fieldWeight in 4855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4855)
        0.0072193667 = weight(_text_:a in 4855) [ClassicSimilarity], result of:
          0.0072193667 = score(doc=4855,freq=6.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.1544581 = fieldWeight in 4855, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4855)
        0.019222261 = product of:
          0.038444523 = sum of:
            0.038444523 = weight(_text_:22 in 4855) [ClassicSimilarity], result of:
              0.038444523 = score(doc=4855,freq=2.0), product of:
                0.14195032 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040536046 = queryNorm
                0.2708308 = fieldWeight in 4855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4855)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Pages
    S.3-22
    Source
    Social Semantic Web: Web 2.0, was nun? Hrsg.: A. Blumauer u. T. Pellegrini
    Type
    a
  6. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.04
    0.035878066 = product of:
      0.04484758 = sum of:
        0.015795212 = weight(_text_:g in 150) [ClassicSimilarity], result of:
          0.015795212 = score(doc=150,freq=2.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.10374437 = fieldWeight in 150, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.012004986 = weight(_text_:u in 150) [ClassicSimilarity], result of:
          0.012004986 = score(doc=150,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.0904446 = fieldWeight in 150, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.0051566907 = weight(_text_:a in 150) [ClassicSimilarity], result of:
          0.0051566907 = score(doc=150,freq=24.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.11032722 = fieldWeight in 150, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.011890691 = product of:
          0.023781382 = sum of:
            0.023781382 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
              0.023781382 = score(doc=150,freq=6.0), product of:
                0.14195032 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040536046 = queryNorm
                0.16753313 = fieldWeight in 150, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Editor
    Stamou, G. u. S. Kollias
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Object, Events, Tracks, etc.) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as ECommerce medical applications and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners properly planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  7. Knitting the semantic Web (2007) 0.03
    0.032372072 = product of:
      0.053953454 = sum of:
        0.022113297 = weight(_text_:g in 1397) [ClassicSimilarity], result of:
          0.022113297 = score(doc=1397,freq=2.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.14524212 = fieldWeight in 1397, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1397)
        0.02376866 = weight(_text_:u in 1397) [ClassicSimilarity], result of:
          0.02376866 = score(doc=1397,freq=4.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.17907117 = fieldWeight in 1397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1397)
        0.008071497 = weight(_text_:a in 1397) [ClassicSimilarity], result of:
          0.008071497 = score(doc=1397,freq=30.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.17268941 = fieldWeight in 1397, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1397)
      0.6 = coord(3/5)
    
    Abstract
    The Semantic Web, the extension that goes beyond the current Web, better enables computers and people to effectively work together by giving information well-defined meaning. Knitting the Semantic Web explains the interdisciplinary efforts underway to build a more library-like Web through "semantic knitting." The book examines tagging information with standardized semantic metadata to result in a network able to support computational activities and provide people with services efficiently. Leaders in library and information science, computer science, and information intensive domains provide insight and inspiration to give readers a greater understanding in the development, growth, and maintenance of the Semantic Web. Librarians are uniquely qualified to play a major role in the development and maintenance of the Semantic Web. Knitting the Semantic Web closely examines this crucial relationship in detail. This single source reviews the foundations, standards, and tools of the Semantic Web, as well as discussions on projects and perspectives. Many chapters include figures to illustrate concepts and ideas, and the entire text is extensively referenced. Topics in Knitting the Semantic Web include: - RDF, its expressive power, and its ability to underlie the new Library catalog card for the coming century - the value and application for controlled vocabularies - SKOS (Simple Knowledge Organization System), the newest Semantic Web language - managing scheme versioning in the Semantic Web - Physnet portal service for physics - Semantic Web technologies in biomedicine - developing the United Nations Food and Agriculture ontology - Friend Of A Friend (FOAF) vocabulary specification-with a real world case study at a university - and more Knitting the Semantic Web is a stimulating resource for professionals, researchers, educators, and students in library and information science, computer science, information architecture, Web design, and Web services.
    Content
    Contains the contributions: Greenberg, J., E.M. Méndez Rodríguez: Introduction: toward a more library-like Web via semantic knitting (S.1-8). - Campbell, D.G.: The birth of the new Web: a Foucauldian reading (S.9-20). - McCathieNevile, C., E.M. Méndez Rodríguez: Library cards for the 21st century (S.21-45). - Harper, C.A., B.B. Tillett: Library of Congress controlled vocabularies and their application to the Semantic Web (S.47-68). - Miles, A., J.R. Pérez-Agüera: SKOS: Simple Knowledge Organisation for the Web (S.69-83). - Tennis, J.T.: Scheme versioning in the Semantic Web (S.85-104). - Rogers, G.P.: Roles for semantic technologies and tools in libraries (S.105-125). - Severiens, T., C. Thiemann: RDF database for PhysNet and similar portals (S.127-147). - Michon, J.: Biomedicine and the Semantic Web: a knowledge model for visual phenotype (S.149-160). - Liang, A., G. Salokhe u. M. Sini u.a.: Towards an infrastructure for semantic applications: methodologies for semantic integration of heterogeneous resources (S.161-189). - Graves, M., A. Constabaris u. D. Brickley: FOAF: connecting people on the Semantic Web (S.191-202). - Greenberg, J.: Advancing Semantic Web via library functions (S.203-225). - Weibel, S.L.: Social Bibliography: a personal perspective on libraries and the Semantic Web (S.227-236)
  8. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.03
    0.031287055 = product of:
      0.07821764 = sum of:
        0.07148097 = weight(_text_:g in 3926) [ClassicSimilarity], result of:
          0.07148097 = score(doc=3926,freq=4.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.46949342 = fieldWeight in 3926, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
        0.006736672 = weight(_text_:a in 3926) [ClassicSimilarity], result of:
          0.006736672 = score(doc=3926,freq=4.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.14413087 = fieldWeight in 3926, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
      0.4 = coord(2/5)
    
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Type
    a
  9. Schreiber, G.: Principles and pragmatics of a Semantic Culture Web : tearing down walls and building bridges (2008) 0.03
    0.027654111 = product of:
      0.06913528 = sum of:
        0.06318085 = weight(_text_:g in 3764) [ClassicSimilarity], result of:
          0.06318085 = score(doc=3764,freq=2.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.4149775 = fieldWeight in 3764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.078125 = fieldNorm(doc=3764)
        0.0059544328 = weight(_text_:a in 3764) [ClassicSimilarity], result of:
          0.0059544328 = score(doc=3764,freq=2.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.12739488 = fieldWeight in 3764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=3764)
      0.4 = coord(2/5)
    
  10. Di Maio, P.: Linked data beyond libraries : towards universal interfaces and knowledge unification (2015) 0.03
    0.0259077 = product of:
      0.06476925 = sum of:
        0.05762393 = weight(_text_:u in 2553) [ClassicSimilarity], result of:
          0.05762393 = score(doc=2553,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.43413407 = fieldWeight in 2553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.09375 = fieldNorm(doc=2553)
        0.0071453196 = weight(_text_:a in 2553) [ClassicSimilarity], result of:
          0.0071453196 = score(doc=2553,freq=2.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.15287387 = fieldWeight in 2553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=2553)
      0.4 = coord(2/5)
    
    Source
    Linked data and user interaction: the road ahead. Eds.: Cervone, H.F. u. L.G. Svensson
    Type
    a
  11. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a large ontology from Wikipedia and WordNet (2008) 0.03
    0.025486294 = product of:
      0.06371573 = sum of:
        0.053610723 = weight(_text_:g in 3404) [ClassicSimilarity], result of:
          0.053610723 = score(doc=3404,freq=4.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.35212007 = fieldWeight in 3404, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.046875 = fieldNorm(doc=3404)
        0.010105007 = weight(_text_:a in 3404) [ClassicSimilarity], result of:
          0.010105007 = score(doc=3404,freq=16.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.2161963 = fieldWeight in 3404, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3404)
      0.4 = coord(2/5)
    
    Abstract
    This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95%, as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.
    Type
    a
  12. Suchanek, F.M.; Kasneci, G.; Weikum, G.: YAGO: a core of semantic knowledge unifying WordNet and Wikipedia (2007) 0.02
    0.024944767 = product of:
      0.06236192 = sum of:
        0.053610723 = weight(_text_:g in 3403) [ClassicSimilarity], result of:
          0.053610723 = score(doc=3403,freq=4.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.35212007 = fieldWeight in 3403, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.046875 = fieldNorm(doc=3403)
        0.008751193 = weight(_text_:a in 3403) [ClassicSimilarity], result of:
          0.008751193 = score(doc=3403,freq=12.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.18723148 = fieldWeight in 3403, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3403)
      0.4 = coord(2/5)
    
    Abstract
    We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
  13. OWL Web Ontology Language Test Cases (2004) 0.02
    0.0241537 = product of:
      0.06038425 = sum of:
        0.038415954 = weight(_text_:u in 4685) [ClassicSimilarity], result of:
          0.038415954 = score(doc=4685,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.28942272 = fieldWeight in 4685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=4685)
        0.021968298 = product of:
          0.043936595 = sum of:
            0.043936595 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.043936595 = score(doc=4685,freq=2.0), product of:
                0.14195032 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040536046 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    14. 8.2011 13:33:22
    Editor
    Carroll, J.J. u. J. de Roo
  14. RDF Semantics (2004) 0.02
    0.022576315 = product of:
      0.056440786 = sum of:
        0.048019946 = weight(_text_:u in 3065) [ClassicSimilarity], result of:
          0.048019946 = score(doc=3065,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.3617784 = fieldWeight in 3065, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=3065)
        0.00842084 = weight(_text_:a in 3065) [ClassicSimilarity], result of:
          0.00842084 = score(doc=3065,freq=4.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.18016359 = fieldWeight in 3065, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=3065)
      0.4 = coord(2/5)
    
    Abstract
    This is a specification of a precise semantics, and corresponding complete systems of inference rules, for the Resource Description Framework (RDF) and RDF Schema (RDFS).
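    As a hedged, toy-scale illustration of what one such inference rule does (the class names are invented, the rdflib Python library is assumed, and only the familiar RDFS subclass rule is applied by hand rather than the specification's full rule system):

      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/")  # hypothetical vocabulary

      g = Graph()
      g.add((EX.Specification, RDFS.subClassOf, EX.Document))
      g.add((EX.rdfSemantics, RDF.type, EX.Specification))

      # RDFS rule (informally): if ?x type ?c and ?c subClassOf ?d, then ?x type ?d.
      # Apply it repeatedly until no new triples are produced.
      while True:
          inferred = {(x, RDF.type, d)
                      for x, _, c in g.triples((None, RDF.type, None))
                      for _, _, d in g.triples((c, RDFS.subClassOf, None))
                      if (x, RDF.type, d) not in g}
          if not inferred:
              break
          for t in inferred:
              g.add(t)

      print((EX.rdfSemantics, RDF.type, EX.Document) in g)  # True: an entailed triple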
    Editor
    Hayes, P. u. B. McBride
  15. Nagenborg, M.: Privacy im Social Semantic Web (2009) 0.02
    0.021902675 = product of:
      0.054756686 = sum of:
        0.04753732 = weight(_text_:u in 4876) [ClassicSimilarity], result of:
          0.04753732 = score(doc=4876,freq=4.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.35814235 = fieldWeight in 4876, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4876)
        0.0072193667 = weight(_text_:a in 4876) [ClassicSimilarity], result of:
          0.0072193667 = score(doc=4876,freq=6.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.1544581 = fieldWeight in 4876, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4876)
      0.4 = coord(2/5)
    
    Abstract
    This contribution focuses on the design of infrastructures that are meant to make it possible to disclose and exchange private data in a controlled way. It first recalls that legal and technical data protection measures always also serve to enable the exchange of data. The fundamental challenge is to do justice to the social and political significance of privacy. From the perspective of information ethics, privacy is understood here as a normative concept that guides action. Helen Nissenbaum's notion of "privacy as contextual integrity" is used as the yardstick for designing the corresponding infrastructures, in order to discuss, among other things, the approaches of "end-to-end information accountability" and of the "Privacy Identity Management for Europe" project.
    Source
    Social Semantic Web: Web 2.0, was nun? Hrsg.: A. Blumauer u. T. Pellegrini
    Type
    a
  16. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.02
    0.021896223 = product of:
      0.036493704 = sum of:
        0.019207977 = weight(_text_:u in 1626) [ClassicSimilarity], result of:
          0.019207977 = score(doc=1626,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.14471136 = fieldWeight in 1626, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=1626)
        0.0063015795 = weight(_text_:a in 1626) [ClassicSimilarity], result of:
          0.0063015795 = score(doc=1626,freq=14.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.13482209 = fieldWeight in 1626, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=1626)
        0.010984149 = product of:
          0.021968298 = sum of:
            0.021968298 = weight(_text_:22 in 1626) [ClassicSimilarity], result of:
              0.021968298 = score(doc=1626,freq=2.0), product of:
                0.14195032 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040536046 = queryNorm
                0.15476047 = fieldWeight in 1626, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1626)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Purpose - The growing volumes of semantic data available in the web result in the need for handling the information overload phenomenon. The potential of this amount of data is enormous but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues. Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set. The objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview tasks have been developed, so they are automatically generated from semantic data, and evaluated with end-users. Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that they get easily used to them despite the fact that they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs. Originality/value - Obtaining semantic data sets overviews cannot be easily done with the current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which is typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support to obtain overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users and show that they can be generated automatically from the thesaurus and ontologies that structure semantic data while providing a comparable user experience to traditional web sites.
    Date
    20. 1.2015 18:30:22
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Type
    a
  17. Gradmann, S.: Semantic Web und Linked Open Data (2013) 0.02
    0.021589752 = product of:
      0.05397438 = sum of:
        0.048019946 = weight(_text_:u in 716) [ClassicSimilarity], result of:
          0.048019946 = score(doc=716,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.3617784 = fieldWeight in 716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=716)
        0.0059544328 = weight(_text_:a in 716) [ClassicSimilarity], result of:
          0.0059544328 = score(doc=716,freq=2.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.12739488 = fieldWeight in 716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=716)
      0.4 = coord(2/5)
    
    Source
    Grundlagen der praktischen Information und Dokumentation. Handbuch zur Einführung in die Informationswissenschaft und -praxis. 6., völlig neu gefaßte Ausgabe. Hrsg. von R. Kuhlen, W. Semar u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried
    Type
    a
  18. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.02
    0.021234386 = product of:
      0.03539064 = sum of:
        0.015795212 = weight(_text_:g in 468) [ClassicSimilarity], result of:
          0.015795212 = score(doc=468,freq=2.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.10374437 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.012004986 = weight(_text_:u in 468) [ClassicSimilarity], result of:
          0.012004986 = score(doc=468,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.0904446 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.0075904424 = weight(_text_:a in 468) [ClassicSimilarity], result of:
          0.0075904424 = score(doc=468,freq=52.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.16239727 = fieldWeight in 468, product of:
              7.2111025 = tf(freq=52.0), with freq of:
                52.0 = termFreq=52.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
      0.6 = coord(3/5)
    
    Abstract
    The development of the Semantic Web, with machine-readable content, has the potential to revolutionise the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this emerging field, describing its key ideas, languages and technologies. Suitable for use as a textbook or for self-study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own. It includes exercises, project descriptions and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL and rules) and technologies (explicit metadata, ontologies, logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processable semantics; and OWL, the W3C-approved standard for a Web ontology language more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.
    Footnote
    Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of these machine-processable metadata. To fulfill this, it provides some meta languages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes the brief description of the underpinning technologies, including metadata, ontology, logic, and agent. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The next chapter introduces resource description framework (RDF) and RDF schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. Resource description framework schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e., RQL, is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, is thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes OWL much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and non-monotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which the Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give us some real feelings about the Semantic Web.
    The chapter on ontology engineering describes the development of ontology-based systems for the Web using manual and semiautomatic methods. Ontology is a concept similar to taxonomy. As stated in the introduction, ontology engineering deals with some of the methodological issues that arise when building ontologies, in particular, constructing ontologies manually, reusing existing ontologies, and using semiautomatic methods. A medium-scale project is included at the end of the chapter. Overall the book is a nice introduction to the key components of the Semantic Web. The reading is quite pleasant, in part due to the concise layout that allows just enough content per page to facilitate readers' comprehension. Furthermore, the book provides a large number of examples, code snippets, exercises, and annotated online materials. Thus, it is very suitable for use as a textbook for undergraduates and low-grade graduates, as the authors say in the preface. However, I believe that not only students but also professionals in both academia and industry will benefit from the book. The authors also built an accompanying Web site for the book at http://www.semanticwebprimer.org. On the main page, there are eight tabs for each of the eight chapters. For each tab, the following sections are included: overview, example, presentations, problems and quizzes, errata, and links. These contents will greatly facilitate readers: for example, readers can open the listed links to further their readings. The vacancy of the errata sections also proves the quality of the book."
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  19. Dunsire, G.: FRBR and the Semantic Web (2012) 0.02
    0.02102512 = product of:
      0.0525628 = sum of:
        0.044226594 = weight(_text_:g in 1928) [ClassicSimilarity], result of:
          0.044226594 = score(doc=1928,freq=2.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.29048425 = fieldWeight in 1928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1928)
        0.008336206 = weight(_text_:a in 1928) [ClassicSimilarity], result of:
          0.008336206 = score(doc=1928,freq=8.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.17835285 = fieldWeight in 1928, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1928)
      0.4 = coord(2/5)
    
    Abstract
    Each of the FR family of models has been represented in Resource Description Framework (RDF), the basis of the Semantic Web. This has involved analysis of the entity-relationship diagrams and text of the models to identify and create the RDF classes, properties, definitions and scope notes required. The work has shown that it is possible to seamlessly connect the models within a semantic framework, specifically in the treatment of names, identifiers, and subjects, and link the RDF elements to those in related namespaces.
    Content
    Contribution to a special issue "The FRBR family of conceptual models: toward a linked future"
    Type
    a
  20. Willer, M.; Dunsire, G.: ISBD, the UNIMARC bibliographic format, and RDA : interoperability issues in namespaces and the linked data environment (2014) 0.02
    0.02102512 = product of:
      0.0525628 = sum of:
        0.044226594 = weight(_text_:g in 1999) [ClassicSimilarity], result of:
          0.044226594 = score(doc=1999,freq=2.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.29048425 = fieldWeight in 1999, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1999)
        0.008336206 = weight(_text_:a in 1999) [ClassicSimilarity], result of:
          0.008336206 = score(doc=1999,freq=8.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.17835285 = fieldWeight in 1999, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1999)
      0.4 = coord(2/5)
    
    Abstract
    The article is an updated and expanded version of a paper presented to the International Federation of Library Associations and Institutions in 2013. It describes recent work involving the representation of the International Standard Bibliographic Description (ISBD) and UNIMARC (UNIversal MARC) in Resource Description Framework (RDF), the basis of the Semantic Web and linked data. The UNIMARC Bibliographic format is used to illustrate issues arising from the development of a bibliographic element set and its semantic alignment with ISBD. The article discusses the use of such alignments in the automated processing of linked data for interoperability, using examples from ISBD, UNIMARC, and Resource Description and Access.
    Footnote
    Contribution in a special issue "ISBD: The Bibliographic Content Standard "
    Type
    a

Languages

  • e 243
  • d 70
  • f 1

Types

  • a 213
  • el 83
  • m 43
  • s 17
  • n 10
  • x 6
  • r 2
