Search (175 results, page 1 of 9)

  • theme_ss:"Semantic Web"
  1. Willer, M.; Dunsire, G.: Bibliographic information organization in the Semantic Web (2013) 0.16
    0.16044371 = product of:
      0.26740617 = sum of:
        0.13287885 = weight(_text_:readable in 2143) [ClassicSimilarity], result of:
          0.13287885 = score(doc=2143,freq=4.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.47999436 = fieldWeight in 2143, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2143)
        0.10670193 = weight(_text_:bibliographic in 2143) [ClassicSimilarity], result of:
          0.10670193 = score(doc=2143,freq=16.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.6082881 = fieldWeight in 2143, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2143)
        0.027825395 = product of:
          0.05565079 = sum of:
            0.05565079 = weight(_text_:data in 2143) [ClassicSimilarity], result of:
              0.05565079 = score(doc=2143,freq=10.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.39059696 = fieldWeight in 2143, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2143)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    New technologies will underpin the future generation of library catalogues. To facilitate their role in providing information, serving users, and fulfilling their mission as cultural heritage and memory institutions, libraries must take a technological leap; their standards and services must be transformed to those of the Semantic Web. Bibliographic Information Organization in the Semantic Web explores the technologies that may power future library catalogues, and argues for the necessity of such a leap. The text introduces international bibliographic standards and models, and fundamental concepts in their representation in the context of the Semantic Web. Subsequent chapters cover bibliographic information organization, linked open data, methodologies for publishing library metadata, discussion of the wider environment (museum, archival and publishing communities) and users, followed by a conclusion.
    LCSH
    Machine-readable bibliographic data
    RSWK
    Bibliografische Daten / Informationsmanagement / Semantic Web / Functional Requirements for Bibliographic Records
    Bibliografische Daten / Semantic Web / Metadaten / Linked Data
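    The numeric trees attached to each hit are Lucene "explain" output for its classic TF-IDF ranking. As a minimal sketch of that arithmetic, the snippet below reproduces the score of hit 1; the norm constants are copied from the explain tree above, and the tf and idf definitions follow Lucene's documented ClassicSimilarity formulas (float rounding may shift the last decimals).

    import math

    # Sketch: recompute hit 1's score (~0.16044371) from its explain tree.
    # ClassicSimilarity per-term weight = queryWeight * fieldWeight, where
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm
    # i.e. effectively tf * idf^2 * queryNorm * fieldNorm.

    QUERY_NORM = 0.04505818      # queryNorm from the explain output
    FIELD_NORM = 0.0390625       # fieldNorm(doc=2143)
    MAX_DOCS = 44218

    def idf(doc_freq):           # ClassicSimilarity idf
        return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

    def tf(freq):                # ClassicSimilarity tf
        return math.sqrt(freq)

    def term_score(freq, doc_freq):
        query_weight = idf(doc_freq) * QUERY_NORM
        field_weight = tf(freq) * idf(doc_freq) * FIELD_NORM
        return query_weight * field_weight

    readable      = term_score(freq=4,  doc_freq=257)         # ~0.13287885
    bibliographic = term_score(freq=16, doc_freq=2449)        # ~0.10670193
    data          = term_score(freq=10, doc_freq=5088) * 0.5  # inner coord(1/2)

    # 3 of the 5 query clauses matched -> outer coord(3/5)
    score = (readable + bibliographic + data) * 3 / 5
    print(f"{score:.8f}")        # ~0.16044371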
  2. Piscitelli, F.A.: Library linked data models : library data in the Semantic Web (2019) 0.10
    0.10262125 = product of:
      0.17103541 = sum of:
        0.09395953 = weight(_text_:readable in 5478) [ClassicSimilarity], result of:
          0.09395953 = score(doc=5478,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 5478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5478)
        0.03772483 = weight(_text_:bibliographic in 5478) [ClassicSimilarity], result of:
          0.03772483 = score(doc=5478,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.21506234 = fieldWeight in 5478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5478)
        0.039351046 = product of:
          0.07870209 = sum of:
            0.07870209 = weight(_text_:data in 5478) [ClassicSimilarity], result of:
              0.07870209 = score(doc=5478,freq=20.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.5523875 = fieldWeight in 5478, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5478)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    This exploratory study examined Linked Data (LD) schemas/ontologies and data models proposed or in use by libraries around the world using MAchine Readable Cataloging (MARC) as a basis for comparison of the scope and extensibility of these potential new standards. The researchers selected 14 libraries from national libraries, academic libraries, government libraries, public libraries, multi-national libraries, and cultural heritage centers currently developing Library Linked Data (LLD) schemas. The choices of models, schemas, and elements used in each library's LD can create interoperability issues for LD services because of substantial differences between schemas and data models evolving via local decisions. The researchers observed that a wide variety of vocabularies and ontologies were used for LLD including common web schemas such as Dublin Core (DC)/DCTerms, Schema.org and Resource Description Framework (RDF), as well as deprecated schemas such as MarcOnt and rdagroup1elements. A sharp divide existed as well between LLD schemas using variations of the Functional Requirements for Bibliographic Records (FRBR) data model and those with different data models or even with no listed data model. Libraries worldwide are not using the same elements or even the same ontologies, schemas and data models to describe the same materials using the same general concepts.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.08
    0.07633529 = product of:
      0.19083822 = sum of:
        0.047709554 = product of:
          0.14312866 = sum of:
            0.14312866 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.14312866 = score(doc=701,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.14312866 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.14312866 = score(doc=701,freq=2.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Willer, M.; Dunsire, G.: ISBD, the UNIMARC bibliographic format, and RDA : interoperability issues in namespaces and the linked data environment (2014) 0.06
    0.059308898 = product of:
      0.14827225 = sum of:
        0.11809741 = weight(_text_:bibliographic in 1999) [ClassicSimilarity], result of:
          0.11809741 = score(doc=1999,freq=10.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.6732516 = fieldWeight in 1999, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1999)
        0.03017484 = product of:
          0.06034968 = sum of:
            0.06034968 = weight(_text_:data in 1999) [ClassicSimilarity], result of:
              0.06034968 = score(doc=1999,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.42357713 = fieldWeight in 1999, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1999)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The article is an updated and expanded version of a paper presented to the International Federation of Library Associations and Institutions in 2013. It describes recent work involving the representation of the International Standard Bibliographic Description (ISBD) and UNIMARC (UNIversal MARC) in Resource Description Framework (RDF), the basis of the Semantic Web and linked data. The UNIMARC Bibliographic format is used to illustrate issues arising from the development of a bibliographic element set and its semantic alignment with ISBD. The article discusses the use of such alignments in the automated processing of linked data for interoperability, using examples from ISBD, UNIMARC, and Resource Description and Access.
    Footnote
    Contribution in a special issue "ISBD: The Bibliographic Content Standard"
  5. OWL Web Ontology Language Guide (2004) 0.06
    0.058129102 = product of:
      0.14532275 = sum of:
        0.13287885 = weight(_text_:readable in 4687) [ClassicSimilarity], result of:
          0.13287885 = score(doc=4687,freq=4.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.47999436 = fieldWeight in 4687, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4687)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 4687) [ClassicSimilarity], result of:
              0.024887787 = score(doc=4687,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 4687, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4687)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the available documents and capabilities is based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web-accessible resources. These descriptions must be in addition to the human-readable versions of that information. The OWL Web Ontology Language is intended to provide a language that can be used to describe the classes, and the relations between them, that are inherent in Web documents and applications. This document demonstrates the use of the OWL language to:
    - formalize a domain by defining classes and properties of those classes,
    - define individuals and assert properties about them, and
    - reason about these classes and individuals to the degree permitted by the formal semantics of the OWL language.
    The sections are organized to present an incremental definition of a set of classes, properties and individuals, beginning with the fundamentals and proceeding to more complex language components.
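    As an illustrative sketch of the define-classes/assert-individuals pattern the Guide describes (not the Guide's own code), the same steps can be written with the Python rdflib library; the ex: namespace and all names below are invented for the example.

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/wine#")  # invented namespace
    g = Graph()
    g.bind("ex", EX)

    # Formalize a domain: classes and a property
    g.add((EX.Wine, RDF.type, OWL.Class))
    g.add((EX.RedWine, RDF.type, OWL.Class))
    g.add((EX.RedWine, RDFS.subClassOf, EX.Wine))
    g.add((EX.hasMaker, RDF.type, OWL.ObjectProperty))

    # Define an individual and assert properties about it
    g.add((EX.ChateauExample, RDF.type, EX.RedWine))
    g.add((EX.ChateauExample, EX.hasMaker, EX.ExampleEstate))

    print(g.serialize(format="turtle"))  # rdflib 6+ returns a string

    An OWL reasoner (for instance the owlrl package) could then infer, to the degree the formal semantics permit, that the individual is also an ex:Wine via the subclass axiom.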
  6. Bianchini, C.; Willer, M.: ISBD resource and Its description in the context of the Semantic Web (2014) 0.06
    0.05783403 = product of:
      0.14458507 = sum of:
        0.105629526 = weight(_text_:bibliographic in 1998) [ClassicSimilarity], result of:
          0.105629526 = score(doc=1998,freq=8.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.6021745 = fieldWeight in 1998, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1998)
        0.03895555 = product of:
          0.0779111 = sum of:
            0.0779111 = weight(_text_:data in 1998) [ClassicSimilarity], result of:
              0.0779111 = score(doc=1998,freq=10.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.5468357 = fieldWeight in 1998, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1998)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article explores the question "What is an International Standard Bibliographic Description (ISBD) resource in the context of the Semantic Web, and what is the relationship of its description to the linked data?" This question is discussed against the background of the dichotomy between the description and access using the Semantic Web differentiation of the three logical layers: real-world objects, web of data, and special purpose (bibliographic) data. The representation of bibliographic data as linked data is discussed, distinguishing the description of a resource from the iconic/objective and the informational/subjective viewpoints. In the conclusion, the authors give views on possible directions of future development of the ISBD.
    Footnote
    Contribution in a special issue "ISBD: The Bibliographic Content Standard"
  7. Bizer, C.; Lehmann, J.; Kobilarov, G.; Auer, S.; Becker, C.; Cyganiak, R.; Hellmann, S.: DBpedia: a crystallization point for the Web of Data (2009) 0.05
    0.052516486 = product of:
      0.13129121 = sum of:
        0.09395953 = weight(_text_:readable in 1643) [ClassicSimilarity], result of:
          0.09395953 = score(doc=1643,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 1643, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1643)
        0.03733168 = product of:
          0.07466336 = sum of:
            0.07466336 = weight(_text_:data in 1643) [ClassicSimilarity], result of:
              0.07466336 = score(doc=1643,freq=18.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.52404076 = fieldWeight in 1643, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1643)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia.
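    A minimal sketch of the dereferencing described above, assuming the public DBpedia service is reachable and still honours content negotiation; the entity chosen is arbitrary and the response format depends on the service.

    import urllib.request

    # Dereference a DBpedia entity URI into RDF via content negotiation.
    # The 303 redirect from /resource/ to the data document is followed
    # automatically by urllib.
    req = urllib.request.Request(
        "http://dbpedia.org/resource/Semantic_Web",
        headers={"Accept": "text/turtle"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.headers.get("Content-Type"))
        print(resp.read(400).decode("utf-8", errors="replace"))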
  8. Coyle, K.: Understanding the Semantic Web : bibliographic data and metadata (2010) 0.05
    0.048161972 = product of:
      0.12040493 = sum of:
        0.09053959 = weight(_text_:bibliographic in 4169) [ClassicSimilarity], result of:
          0.09053959 = score(doc=4169,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.5161496 = fieldWeight in 4169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.09375 = fieldNorm(doc=4169)
        0.029865343 = product of:
          0.059730686 = sum of:
            0.059730686 = weight(_text_:data in 4169) [ClassicSimilarity], result of:
              0.059730686 = score(doc=4169,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.4192326 = fieldWeight in 4169, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4169)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
  9. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.04
    0.044705484 = product of:
      0.11176371 = sum of:
        0.045269795 = weight(_text_:bibliographic in 2556) [ClassicSimilarity], result of:
          0.045269795 = score(doc=2556,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.2580748 = fieldWeight in 2556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.06649391 = sum of:
          0.029865343 = weight(_text_:data in 2556) [ClassicSimilarity], result of:
            0.029865343 = score(doc=2556,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.2096163 = fieldWeight in 2556, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=2556)
          0.036628567 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
            0.036628567 = score(doc=2556,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.23214069 = fieldWeight in 2556, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2556)
      0.4 = coord(2/5)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
  10. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.04
    0.04256137 = product of:
      0.106403425 = sum of:
        0.09395953 = weight(_text_:readable in 3297) [ClassicSimilarity], result of:
          0.09395953 = score(doc=3297,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 3297, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3297)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 3297) [ClassicSimilarity], result of:
              0.024887787 = score(doc=3297,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 3297, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3297)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They can also generate natural language, integrate intelligent information, provide semantic-based access to the Internet, and extract information from texts in addition to being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way" [1]. This definition does not differ too much from the one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization" [2]. In fact, new ontology-based applications and knowledge architectures are developing for this new Web. A common claim for all of these approaches is the need for languages to represent the semantic information that this Web requires, solving the heterogeneous data exchange in this heterogeneous environment. Here, we don't decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. The authors analyze the most representative ontology languages created for the Web and compare them using a common framework.
  11. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.04
    0.03696416 = product of:
      0.09241039 = sum of:
        0.075167626 = weight(_text_:readable in 4709) [ClassicSimilarity], result of:
          0.075167626 = score(doc=4709,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.2715258 = fieldWeight in 4709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.03125 = fieldNorm(doc=4709)
        0.017242765 = product of:
          0.03448553 = sum of:
            0.03448553 = weight(_text_:data in 4709) [ClassicSimilarity], result of:
              0.03448553 = score(doc=4709,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24204408 = fieldWeight in 4709, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4709)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  12. Rüther, M.; Fock, J.; Schultz-Krutisch, T.; Bandholtz, T.: Classification and reference vocabulary in linked environment data (2011) 0.04
    0.035954125 = product of:
      0.08988531 = sum of:
        0.06402116 = weight(_text_:bibliographic in 4816) [ClassicSimilarity], result of:
          0.06402116 = score(doc=4816,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.3649729 = fieldWeight in 4816, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=4816)
        0.025864149 = product of:
          0.051728297 = sum of:
            0.051728297 = weight(_text_:data in 4816) [ClassicSimilarity], result of:
              0.051728297 = score(doc=4816,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.3630661 = fieldWeight in 4816, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4816)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Federal Environment Agency (UBA), Germany, has a long tradition in knowledge organization, using a library along with many Web-based information systems. The backbone of this information space is a classification system enhanced by a reference vocabulary which consists of a thesaurus, a gazetteer and a chronicle. Over the years, classification has increasingly been relegated to the background compared with the reference vocabulary indexing and full text search. Bibliographic items are no longer classified directly but tagged with thesaurus terms, with those terms being classified. Since 2010 we have been developing a linked data representation of this knowledge base. While we are linking bibliographic and observation data with the controlled vocabulary in a Resource Description Framework (RDF) representation, the classification may be revisited as a powerful organization system by inference. This also raises questions about the quality and feasibility of an unambiguous classification of thesaurus terms.
  13. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.03
    0.033142954 = product of:
      0.082857385 = sum of:
        0.030179864 = weight(_text_:bibliographic in 4796) [ClassicSimilarity], result of:
          0.030179864 = score(doc=4796,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.17204987 = fieldWeight in 4796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=4796)
        0.05267752 = product of:
          0.10535504 = sum of:
            0.10535504 = weight(_text_:data in 4796) [ClassicSimilarity], result of:
              0.10535504 = score(doc=4796,freq=56.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.7394569 = fieldWeight in 4796, product of:
                  7.483315 = tf(freq=56.0), with freq of:
                    56.0 = termFreq=56.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4796)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
    Key recommendations of the report are:
    - That library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights;
    - That library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data;
    - That data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies;
    - That librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
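    As a toy sketch of two of these recommendations (minting URIs for catalogue items and re-using an existing vocabulary), hit 1 of this result list could be exposed with Dublin Core terms via the Python rdflib library; the item URI and typed values are invented for illustration.

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS, RDF

    # Mint a URI for a catalogue item and describe it with Dublin Core terms.
    item = URIRef("http://example.org/bib/2143")   # invented item URI
    g = Graph()
    g.bind("dcterms", DCTERMS)

    g.add((item, RDF.type, DCTERMS.BibliographicResource))
    g.add((item, DCTERMS.title,
           Literal("Bibliographic information organization in the Semantic Web")))
    g.add((item, DCTERMS.creator, Literal("Willer, M.")))
    g.add((item, DCTERMS.issued, Literal("2013")))

    print(g.serialize(format="turtle"))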
  14. Zhang, L.: Linking information through function (2014) 0.02
    0.024080986 = product of:
      0.060202464 = sum of:
        0.045269795 = weight(_text_:bibliographic in 1526) [ClassicSimilarity], result of:
          0.045269795 = score(doc=1526,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.2580748 = fieldWeight in 1526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=1526)
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 1526) [ClassicSimilarity], result of:
              0.029865343 = score(doc=1526,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 1526, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1526)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    How information resources can be meaningfully related has been addressed in contexts from bibliographic entries to hyperlinks and, more recently, linked data. The genre structure and relationships among genre structure constituents shed new light on organizing information by purpose or function. This study examines the relationships among a set of functional units previously constructed in a taxonomy, each of which is a chunk of information embedded in a document and is distinct in terms of its communicative function. Through a card-sort study, relationships among functional units were identified with regard to their occurrence and function. The findings suggest that a group of functional units can be identified, collocated, and navigated by particular relationships. Understanding how functional units are related to each other is significant in linking information pieces in documents to support finding, aggregating, and navigating information in a distributed information environment.
  15. LeBoeuf, P.: ¬A strange model named FRBRoo (2012) 0.02
    0.024080986 = product of:
      0.060202464 = sum of:
        0.045269795 = weight(_text_:bibliographic in 1904) [ClassicSimilarity], result of:
          0.045269795 = score(doc=1904,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.2580748 = fieldWeight in 1904, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=1904)
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 1904) [ClassicSimilarity], result of:
              0.029865343 = score(doc=1904,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 1904, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1904)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Libraries and museums developed rules for the description of their collections prior to formalizing the underlying conceptualization reflected in such rules. That formalizing process took place in the 1990s and resulted in two independent conceptual models: FRBR for bibliographic information (published in 1998), and CIDOC CRM for museum information (developed from 1996 on, and issued as ISO standard 21127 in 2006). An international working group was formed in 2003 with the purpose of harmonizing these two models. The resulting model, FRBRoo, was published in 2009. It is an extension to CIDOC CRM, using the formalism in which the former is written. It adds to FRBR the dynamic aspects of CIDOC CRM, and a number of refinements (e.g. in the definitions of Work and Manifestation). Some modifications were made in CIDOC CRM as well. FRBRoo was developed with Semantic Web technologies in mind, and lends itself well to the Linked Data environment; but will it be used in that context?
  16. Bruhn, C.; Syn, S.Y.: Pragmatic thought as a philosophical foundation for collaborative tagging and the Semantic Web (2018) 0.02
    0.023711314 = product of:
      0.059278287 = sum of:
        0.03772483 = weight(_text_:bibliographic in 4245) [ClassicSimilarity], result of:
          0.03772483 = score(doc=4245,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.21506234 = fieldWeight in 4245, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4245)
        0.021553457 = product of:
          0.043106914 = sum of:
            0.043106914 = weight(_text_:data in 4245) [ClassicSimilarity], result of:
              0.043106914 = score(doc=4245,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.30255508 = fieldWeight in 4245, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4245)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: The purpose of this paper is to use ideas drawn from two founders of American pragmatism, William James and Charles Sanders Peirce, in order to propose a philosophical foundation that supports the value of collaborative tagging and reinforces the structure and goals of the Semantic Web.
    Design/methodology/approach: The study employs a close analysis of key literature by James and Peirce to answer recent calls for a philosophy of the Web and to respond to research in the LIS literature that has assessed the value and limitations of folksonomy. Moreover, pragmatic views are applied to illustrate the relationships among collaborative tagging, linked data, and the Semantic Web.
    Findings: With a philosophical foundation in place, the study highlights the value of the minority tags that fall within the so-called "long tail" of the power law graph, and the importance of granting sufficient time for the full value of folksonomy to be revealed. The discussion goes further to explore how "collaborative tagging" could evolve into "collaborative knowledge" in the form of linked data. Specifically, Peirce's triadic architectonic is shown to foster an understanding of the construction of linked data through the Functional Requirements for Bibliographic Records entity-relation model and Resource Description Framework triples, and James's image of the multiverse anticipates the goals Tim Berners-Lee has articulated for the Semantic Web.
    Originality/value: This study is unique in using Jamesian and Peircean thinking to argue for the value of folksonomy and to suggest implications for the Semantic Web.
  17. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.02
    0.023102598 = product of:
      0.057756495 = sum of:
        0.046979766 = weight(_text_:readable in 468) [ClassicSimilarity], result of:
          0.046979766 = score(doc=468,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.16970363 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.010776728 = product of:
          0.021553457 = sum of:
            0.021553457 = weight(_text_:data in 468) [ClassicSimilarity], result of:
              0.021553457 = score(doc=468,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.15127754 = fieldWeight in 468, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The development of the Semantic Web, with machine-readable content, has the potential to revolutionise the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this emerging field, describing its key ideas, languages and technologies. Suitable for use as a textbook or for self-study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own. It includes exercises, project descriptions and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL and rules) and technologies (explicit metadata, ontologies, logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processable semantics; and OWL, the W3C-approved standard for a Web ontology language more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.
    Footnote
    Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of these machine-processable metadata. To fulfill this, it provides some meta languages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes the brief description of the underpinning technologies, including metadata, ontology, logic, and agent. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The next chapter introduces the resource description framework (RDF) and RDF Schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. Resource description framework schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e. RQL, is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, is thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes OWL much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and non-monotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which the Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give us a real feel for the Semantic Web.
  18. Koutsomitropoulos, D.A.; Solomou, G.D.; Alexopoulos, A.D.; Papatheodorou, T.S.: Semantic metadata interoperability and inference-based querying in digital repositories (2009) 0.02
    0.022550289 = product of:
      0.11275144 = sum of:
        0.11275144 = weight(_text_:readable in 3731) [ClassicSimilarity], result of:
          0.11275144 = score(doc=3731,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.4072887 = fieldWeight in 3731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.046875 = fieldNorm(doc=3731)
      0.2 = coord(1/5)
    
    Abstract
    Metadata applications have evolved in time into highly structured "islands of information" about digital resources, often bearing a strong semantic interpretation. Scarcely, however, are these semantics being communicated in machine-readable and understandable ways. At the same time, the process for transforming the implied metadata knowledge into explicit Semantic Web descriptions can be problematic and is not always evident. In this article we take up the well-established Dublin Core metadata standard as well as other metadata schemata, which often appear in digital repository set-ups, and suggest a proper Semantic Web OWL ontology. In this process the authors cope with discrepancies and incompatibilities, indicative of such attempts, in novel ways. Moreover, we show the potential and necessity of this approach by demonstrating inferences on the resulting ontology, instantiated with actual metadata records. The authors conclude by presenting a working prototype that provides for inference-based querying on top of digital repositories.
  19. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.02
    0.022483826 = product of:
      0.11241913 = sum of:
        0.11241913 = sum of:
          0.0696858 = weight(_text_:data in 2640) [ClassicSimilarity], result of:
            0.0696858 = score(doc=2640,freq=8.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.48910472 = fieldWeight in 2640, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2640)
          0.04273333 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
            0.04273333 = score(doc=2640,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.2708308 = fieldWeight in 2640, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2640)
      0.2 = coord(1/5)
    
    Abstract
    Library catalogues contain an enormous amount of structured, high-quality data; however, this data is generally not made available to semantic web applications. In this paper we describe the tools and techniques used to make the Swedish Union Catalogue (LIBRIS) part of the Semantic Web and Linked Data. The focus is on links to and between resources and the mechanisms used to make data available, rather than perfect description of the individual resources. We also present a method of creating links between records of the same work.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  20. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.02
    0.021956686 = product of:
      0.109783426 = sum of:
        0.109783426 = sum of:
          0.07315486 = weight(_text_:data in 2024) [ClassicSimilarity], result of:
            0.07315486 = score(doc=2024,freq=12.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.513453 = fieldWeight in 2024, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=2024)
          0.036628567 = weight(_text_:22 in 2024) [ClassicSimilarity], result of:
            0.036628567 = score(doc=2024,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.23214069 = fieldWeight in 2024, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2024)
      0.2 = coord(1/5)
    
    Abstract
    Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints.
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
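    The validation tension described above can be pictured with a toy imperative check of the properties an RDF node is expected to carry. Real Shape Expressions and SHACL engines express this declaratively; the data and the "shape" below are invented, and the Python rdflib library is assumed.

    from rdflib import Graph, URIRef
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    g.parse(data="""
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .
        <http://example.org/alice> a foaf:Person ;
            foaf:name "Alice" .
    """, format="turtle")

    # "Shape": properties a foaf:Person node is expected to carry.
    node = URIRef("http://example.org/alice")
    required = [RDF.type, FOAF.name]
    missing = [p for p in required if (node, p, None) not in g]
    print("conforms" if not missing else f"missing properties: {missing}")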

Languages

  • e 151
  • d 23

Types

  • a 107
  • el 44
  • m 38
  • s 15
  • n 5
  • x 4
  • r 1
