Search (92 results, page 1 of 5)

  • theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    0.09954323 = product of:
      0.19908646 = sum of:
        0.049771614 = product of:
          0.14931484 = sum of:
            0.14931484 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.14931484 = score(doc=701,freq=2.0), product of:
                0.39851433 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04700564 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.14931484 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.14931484 = score(doc=701,freq=2.0), product of:
            0.39851433 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04700564 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.5 = coord(2/4)
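    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) scoring. As a minimal reading aid, and only as a sketch reconstructed from the values displayed, the following Python reproduces the arithmetic for doc 701; the variable names are ours, not Lucene's.

      import math

      # Values copied from the explain output for doc 701.
      idf = 8.478011          # idf(docFreq=24, maxDocs=44218) = 1 + ln(44218 / (24 + 1))
      query_norm = 0.04700564
      tf = math.sqrt(2.0)     # tf(freq=2.0) = sqrt(freq) = 1.4142135
      field_norm = 0.03125

      query_weight = idf * query_norm            # -> 0.39851433
      field_weight = tf * idf * field_norm       # -> 0.3746787
      term_score = field_weight * query_weight   # -> 0.14931484 per matching term

      # "_text_:3a" contributes through a nested coord(1/3); "_text_:2f" directly.
      partial = term_score * (1.0 / 3.0) + term_score   # -> 0.19908646
      total = partial * (2.0 / 4.0)                     # coord(2/4) -> 0.09954323
      print(round(total, 8))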
    
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  2. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.07
    0.06942842 = product of:
      0.13885684 = sum of:
        0.06904093 = weight(_text_:subject in 2556) [ClassicSimilarity], result of:
          0.06904093 = score(doc=2556,freq=6.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.41066417 = fieldWeight in 2556, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.06981591 = sum of:
          0.031604223 = weight(_text_:classification in 2556) [ClassicSimilarity], result of:
            0.031604223 = score(doc=2556,freq=2.0), product of:
              0.14969917 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.04700564 = queryNorm
              0.21111822 = fieldWeight in 2556, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.046875 = fieldNorm(doc=2556)
          0.03821169 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
            0.03821169 = score(doc=2556,freq=2.0), product of:
              0.16460574 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04700564 = queryNorm
              0.23214069 = fieldWeight in 2556, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2556)
      0.5 = coord(2/4)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
  3. Svensson, L.G.: Unified access : a semantic Web based model for multilingual navigation in heterogeneous data sources (2008) 0.05
    0.05103458 = product of:
      0.10206916 = sum of:
        0.0797216 = weight(_text_:subject in 2191) [ClassicSimilarity], result of:
          0.0797216 = score(doc=2191,freq=8.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.4741941 = fieldWeight in 2191, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=2191)
        0.02234756 = product of:
          0.04469512 = sum of:
            0.04469512 = weight(_text_:classification in 2191) [ClassicSimilarity], result of:
              0.04469512 = score(doc=2191,freq=4.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.29856625 = fieldWeight in 2191, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2191)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Most online library catalogues are not well equipped for subject search. On the one hand it is difficult to navigate the structures of the thesauri and classification systems used for indexing. Further, there is little or no support for the integration of crosswalks between different controlled vocabularies, so that a subject search query formulated using one controlled vocabulary will not find resources indexed with another knowledge organisation system even if there exists a crosswalk between them. In this paper we will look at Semantic Web technologies and a prototype system leveraging those technologies in order to enhance the subject search possibilities in heterogeneously indexed repositories. Finally, we will have a brief look at different initiatives aimed at integrating library data into the Semantic Web.
    Source
    New perspectives on subject indexing and classification: essays in honour of Magda Heiner-Freiling. Red.: K. Knull-Schlomann, u.a
  4. Miles, A.; Pérez-Agüera, J.R.: SKOS: Simple Knowledge Organisation for the Web (2006) 0.05
    0.04884935 = product of:
      0.0976987 = sum of:
        0.06576697 = weight(_text_:subject in 504) [ClassicSimilarity], result of:
          0.06576697 = score(doc=504,freq=4.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.3911902 = fieldWeight in 504, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=504)
        0.031931736 = product of:
          0.06386347 = sum of:
            0.06386347 = weight(_text_:classification in 504) [ClassicSimilarity], result of:
              0.06386347 = score(doc=504,freq=6.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.42661208 = fieldWeight in 504, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=504)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article introduces the Simple Knowledge Organisation System (SKOS), a Semantic Web language for representing controlled structured vocabularies, including thesauri, classification schemes, subject heading systems and taxonomies. SKOS provides a framework for publishing thesauri, classification schemes, and subject indexes on the Web, and for applying these systems to resource collections that are part of the Semantic Web. Semantic Web applications may harvest and merge SKOS data to integrate and enhance retrieval services across multiple collections (e.g. libraries). This article also describes some alternatives for integrating Semantic Web services based on the Resource Description Framework (RDF) and SKOS into a distributed enterprise architecture.
    Source
    Cataloging and classification quarterly. 43(2006) nos.3/4, S.69-83
  5. Ilik, V.: Distributed person data : using Semantic Web compliant data in subject name headings (2015) 0.04
    0.042528816 = product of:
      0.08505763 = sum of:
        0.06643467 = weight(_text_:subject in 2292) [ClassicSimilarity], result of:
          0.06643467 = score(doc=2292,freq=8.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.39516178 = fieldWeight in 2292, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2292)
        0.018622966 = product of:
          0.037245933 = sum of:
            0.037245933 = weight(_text_:classification in 2292) [ClassicSimilarity], result of:
              0.037245933 = score(doc=2292,freq=4.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.24880521 = fieldWeight in 2292, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2292)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Providing efficient access to information is a crucial library mission. Subject classification is one of the major pillars that guarantees the accessibility of records in libraries. In this paper we discuss the need to associate person IDs and URIs with subjects when a named person happens to be the subject of the document. This is often the case with biographies, schools of thought in philosophy, politics, art, and literary criticism. Using Semantic Web compliant data in subject name headings enhances the ability to collocate topics about a person. Also, in retrieval, books about a person would be easily linked to works by that same person. In the context of the Semantic Web, it is expected that, as the available information grows, one would be more effective in the task of information retrieval. Information about a person or, as in the case of this paper, about a researcher exists in various databases, which can be discipline-specific or publishers' databases, and in such cases they have an assigned identifier. They also exist in institutional directory databases. We argue that these various databases can be leveraged to support improved discoverability and retrieval of research output for individual authors and institutions, as well as works about those authors.
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. u. M.I. Cordeiro
  6. Panzer, M.: Taxonomies as resources : identification, location and access of a »Webified« Dewey (2008) 0.04
    0.03628821 = product of:
      0.07257642 = sum of:
        0.046504267 = weight(_text_:subject in 5471) [ClassicSimilarity], result of:
          0.046504267 = score(doc=5471,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.27661324 = fieldWeight in 5471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5471)
        0.026072152 = product of:
          0.052144304 = sum of:
            0.052144304 = weight(_text_:classification in 5471) [ClassicSimilarity], result of:
              0.052144304 = score(doc=5471,freq=4.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.34832728 = fieldWeight in 5471, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5471)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The paper outlines the first steps in an initiative to weave the Dewey Decimal Classification (DDC) as a resource into the fabric of the Web. In order for DDC web services to not only be »on« the Web, but rather a part of it, Dewey has to be available under the same rules as other information resources. The process of URI design for identified resources is described and a draft URI template is presented. In addition, basic semantic principles of RESTful web service architecture are discussed, and their appropriateness for making a large-scale knowledge organization system (KOS) like the DDC more congenial for Semantic Web applications is evaluated.
    Source
    New perspectives on subject indexing and classification: essays in honour of Magda Heiner-Freiling. Red.: K. Knull-Schlomann, u.a
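    The abstract above mentions a draft URI template for identified DDC resources; the template itself is not reproduced here. As a purely hypothetical illustration of the idea (the pattern below is invented, not the one from the paper), minting class URIs from a template might look like this:

      # Hypothetical URI pattern for DDC classes -- not the draft template from the paper.
      TEMPLATE = "http://example.org/ddc/{edition}/class/{notation}/{language}"

      def class_uri(notation, edition="22", language="en"):
          """Mint a URI for a DDC class under the hypothetical template above."""
          return TEMPLATE.format(edition=edition, notation=notation, language=language)

      print(class_uri("794.8"))                 # http://example.org/ddc/22/class/794.8/en
      print(class_uri("025.4", language="de"))  # http://example.org/ddc/22/class/025.4/de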
  7. Waltinger, U.; Mehler, A.; Lösch, M.; Horstmann, W.: Hierarchical classification of OAI metadata using the DDC taxonomy (2011) 0.03
    0.03489239 = product of:
      0.06978478 = sum of:
        0.046976402 = weight(_text_:subject in 4841) [ClassicSimilarity], result of:
          0.046976402 = score(doc=4841,freq=4.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.27942157 = fieldWeight in 4841, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4841)
        0.022808382 = product of:
          0.045616765 = sum of:
            0.045616765 = weight(_text_:classification in 4841) [ClassicSimilarity], result of:
              0.045616765 = score(doc=4841,freq=6.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.3047229 = fieldWeight in 4841, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4841)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In the area of digital library services, the access to subject-specific metadata of scholarly publications is of utmost interest. One of the most prevalent approaches for metadata exchange is the XML-based Open Archive Initiative (OAI) Protocol for Metadata Harvesting (OAI-PMH). However, due to its loose requirements regarding metadata content there is no strict standard for consistent subject indexing specified, which is furthermore needed in the digital library domain. This contribution addresses the problem of automatic enhancement of OAI metadata by means of the most widely used universal classification scheme in libraries, the Dewey Decimal Classification (DDC). To be more specific, we automatically classify scientific documents according to the DDC taxonomy within three levels using a machine learning-based classifier that relies solely on OAI metadata records as the document representation. The results show an asymmetric distribution of documents across the hierarchical structure of the DDC taxonomy and issues of data sparseness. However, the performance of the classifier shows promising results on all three levels of the DDC.
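    As a rough sketch of the general idea described above, and not of the authors' actual system, one could train one text classifier per DDC hierarchy level over the textual fields of OAI records. The records, labels, and two-level setup below are invented for illustration.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy OAI-style records (title and abstract concatenated) -- invented examples.
      docs = [
          "Ontology-based information retrieval for digital libraries",
          "Hierarchical text classification with support vector machines",
          "A grammar of classical Latin poetry",
          "Population genetics of alpine plant species",
      ]
      # Hypothetical DDC labels for the first two hierarchy levels.
      ddc_level1 = ["000", "000", "800", "500"]
      ddc_level2 = ["020", "000", "870", "580"]

      # One classifier per DDC level, standing in for the paper's three-level setup.
      models = {}
      for level, labels in {"level1": ddc_level1, "level2": ddc_level2}.items():
          model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
          model.fit(docs, labels)
          models[level] = model

      query = ["Indexing scholarly metadata with the Dewey Decimal Classification"]
      print({level: m.predict(query)[0] for level, m in models.items()})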
  8. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.03
    0.03308688 = product of:
      0.06617376 = sum of:
        0.056955863 = weight(_text_:subject in 3062) [ClassicSimilarity], result of:
          0.056955863 = score(doc=3062,freq=12.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.33878064 = fieldWeight in 3062, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3062)
        0.009217897 = product of:
          0.018435795 = sum of:
            0.018435795 = weight(_text_:classification in 3062) [ClassicSimilarity], result of:
              0.018435795 = score(doc=3062,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.12315229 = fieldWeight in 3062, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3062)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    18 August 2009 -- Today W3C announces a new standard that builds a bridge between the world of knowledge organization systems - including thesauri, classifications, subject headings, taxonomies, and folksonomies - and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use Simple Knowledge Organization System (SKOS) to leverage the power of linked data. As different communities with expertise and established vocabularies use SKOS to integrate them into the Semantic Web, they increase the value of the information for everyone.
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems
    A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found.
    As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions.
    SKOS can be used for subject headings but also many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. "One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud."
    SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies. But the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
  9. Harper, C.A.; Tillett, B.B.: Library of Congress controlled vocabularies and their application to the Semantic Web (2006) 0.03
    0.031104181 = product of:
      0.062208362 = sum of:
        0.0398608 = weight(_text_:subject in 242) [ClassicSimilarity], result of:
          0.0398608 = score(doc=242,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.23709705 = fieldWeight in 242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=242)
        0.02234756 = product of:
          0.04469512 = sum of:
            0.04469512 = weight(_text_:classification in 242) [ClassicSimilarity], result of:
              0.04469512 = score(doc=242,freq=4.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.29856625 = fieldWeight in 242, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=242)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article discusses how various controlled vocabularies, classification schemes and thesauri can serve as some of the building blocks of the Semantic Web. These vocabularies have been developed over the course of decades, and can be put to great use in the development of robust web services and Semantic Web technologies. The article covers how initial collaboration between the Semantic Web, library and metadata communities is creating partnerships to complete work in this area. It then discusses some core principles of authority control before talking more specifically about subject and genre vocabularies and name authority. It is hoped that future systems for internationally shared authority data will link the world's authority data from trusted sources to benefit users worldwide. Finally, the article looks at how encoding and markup of vocabularies can help ensure compatibility with the current and future state of Semantic Web development and provides examples of how this work can help improve the findability and navigation of information on the World Wide Web.
    Source
    Cataloging and classification quarterly. 43(2006) nos.3/4, S.47-68
  10. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Walle, R. Van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.03
    0.029483322 = product of:
      0.058966644 = sum of:
        0.0398608 = weight(_text_:subject in 662) [ClassicSimilarity], result of:
          0.0398608 = score(doc=662,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.23709705 = fieldWeight in 662, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=662)
        0.019105844 = product of:
          0.03821169 = sum of:
            0.03821169 = weight(_text_:22 in 662) [ClassicSimilarity], result of:
              0.03821169 = score(doc=662,freq=2.0), product of:
                0.16460574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04700564 = queryNorm
                0.23214069 = fieldWeight in 662, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=662)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The concept of Linked Data has made its entrance in the cultural heritage sector due to its potential use for the integration of heterogeneous collections and deriving additional value out of existing metadata. However, practitioners and researchers alike need a better understanding of what outcome they can reasonably expect of the reconciliation process between their local metadata and established controlled vocabularies which are already a part of the Linked Data cloud. This paper offers an in-depth analysis of how a locally developed vocabulary can be successfully reconciled with the Library of Congress Subject Headings (LCSH) and the Arts and Architecture Thesaurus (AAT) through the help of a general-purpose tool for interactive data transformation (OpenRefine). Issues negatively affecting the reconciliation process are identified and solutions are proposed in order to derive maximum value from existing metadata and controlled vocabularies in an automated manner.
    Date
    22. 3.2013 19:29:20
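    The reconciliation described in the abstract above matches local terms against established vocabularies such as LCSH and AAT via OpenRefine. As a much simplified, hypothetical sketch of the same idea (not OpenRefine's API), fuzzy string matching against a small target vocabulary could look like this; the labels and identifiers are stand-ins.

      from difflib import get_close_matches

      # Hypothetical target vocabulary: label -> identifier (stand-in for LCSH/AAT).
      target_vocabulary = {
          "Illuminated manuscripts": "sh85064455",
          "Cultural property": "sh92003875",
          "Thesauri": "sh85134762",
      }

      def reconcile(local_term, vocabulary, cutoff=0.8):
          """Return (label, id) of the best fuzzy match, or None if nothing is close enough."""
          lowered = {label.lower(): label for label in vocabulary}
          match = get_close_matches(local_term.lower(), lowered.keys(), n=1, cutoff=cutoff)
          if not match:
              return None
          label = lowered[match[0]]
          return label, vocabulary[label]

      for term in ["illuminated manuscript", "cultural heritage", "thesaurus"]:
          print(term, "->", reconcile(term, target_vocabulary))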
  11. Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012) 0.03
    0.029483322 = product of:
      0.058966644 = sum of:
        0.0398608 = weight(_text_:subject in 3197) [ClassicSimilarity], result of:
          0.0398608 = score(doc=3197,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.23709705 = fieldWeight in 3197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=3197)
        0.019105844 = product of:
          0.03821169 = sum of:
            0.03821169 = weight(_text_:22 in 3197) [ClassicSimilarity], result of:
              0.03821169 = score(doc=3197,freq=2.0), product of:
                0.16460574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04700564 = queryNorm
                0.23214069 = fieldWeight in 3197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3197)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information sciences a broad and comprehensible introduction to indexing. This title consists of twelve chapters: an Introduction to subject headings and thesauri; Automatic indexing versus manual indexing; Techniques applied in automatic indexing of text material; Automatic indexing of images; The black art of indexing moving images; Automatic indexing of music; Taxonomies and ontologies; Metadata formats and indexing; Tagging; Topic maps; Indexing the web; and The Semantic Web.
    Date
    24. 8.2016 14:03:22
  12. Tillett, B.B.: AACR2 and metadata : library opportunities in the global semantic Web (2003) 0.03
    0.027831456 = product of:
      0.05566291 = sum of:
        0.0398608 = weight(_text_:subject in 5510) [ClassicSimilarity], result of:
          0.0398608 = score(doc=5510,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.23709705 = fieldWeight in 5510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=5510)
        0.015802111 = product of:
          0.031604223 = sum of:
            0.031604223 = weight(_text_:classification in 5510) [ClassicSimilarity], result of:
              0.031604223 = score(doc=5510,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.21111822 = fieldWeight in 5510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5510)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Explores the opportunities for libraries to contribute to the proposed global "Semantic Web." Library name and subject authority files, including work that IFLA has done related to a new view of "Universal Bibliographic Control" in the Internet environment and the work underway in the U.S. and Europe, are making a reality of the virtual international authority file on the Web. The bibliographic and authority records created according to AACR2 reflect standards for metadata that libraries have provided for years. New opportunities for using these records in the digital world are described (interoperability), including mapping with Dublin Core metadata. AACR2 recently updated Chapter 9 on Electronic Resources. That process and highlights of the changes are described, including Library of Congress' rule interpretations.
    Source
    Cataloging and classification quarterly. 36(2003) nos.3/4, S.101-119
  13. SKOS Simple Knowledge Organization System Reference : W3C Recommendation 18 August 2009 (2009) 0.03
    0.027831456 = product of:
      0.05566291 = sum of:
        0.0398608 = weight(_text_:subject in 4688) [ClassicSimilarity], result of:
          0.0398608 = score(doc=4688,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.23709705 = fieldWeight in 4688, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=4688)
        0.015802111 = product of:
          0.031604223 = sum of:
            0.031604223 = weight(_text_:classification in 4688) [ClassicSimilarity], result of:
              0.031604223 = score(doc=4688,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.21111822 = fieldWeight in 4688, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4688)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This document defines the Simple Knowledge Organization System (SKOS), a common data model for sharing and linking knowledge organization systems via the Web. Many knowledge organization systems, such as thesauri, taxonomies, classification schemes and subject heading systems, share a similar structure, and are used in similar applications. SKOS captures much of this similarity and makes it explicit, to enable data and technology sharing across diverse applications. The SKOS data model provides a standard, low-cost migration path for porting existing knowledge organization systems to the Semantic Web. SKOS also provides a lightweight, intuitive language for developing and sharing new knowledge organization systems. It may be used on its own, or in combination with formal knowledge representation languages such as the Web Ontology Language (OWL). This document is the normative specification of the Simple Knowledge Organization System. It is intended for readers who are involved in the design and implementation of information systems, and who already have a good understanding of Semantic Web technology, especially RDF and OWL. For an informative guide to using SKOS, see the [SKOS-PRIMER].
  14. SKOS Simple Knowledge Organization System Primer (2009) 0.03
    0.027831456 = product of:
      0.05566291 = sum of:
        0.0398608 = weight(_text_:subject in 4795) [ClassicSimilarity], result of:
          0.0398608 = score(doc=4795,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.23709705 = fieldWeight in 4795, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=4795)
        0.015802111 = product of:
          0.031604223 = sum of:
            0.031604223 = weight(_text_:classification in 4795) [ClassicSimilarity], result of:
              0.031604223 = score(doc=4795,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.21111822 = fieldWeight in 4795, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4795)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    SKOS (Simple Knowledge Organisation System) provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, and other types of controlled vocabulary. As an application of the Resource Description Framework (RDF), SKOS allows concepts to be documented, linked and merged with other data, while still being composed, integrated and published on the World Wide Web. This document is an implementors' guide for those who would like to represent their concept scheme using SKOS. In basic SKOS, conceptual resources (concepts) can be identified using URIs, labelled with strings in one or more natural languages, documented with various types of notes, semantically related to each other in informal hierarchies and association networks, and aggregated into distinct concept schemes. In advanced SKOS, conceptual resources can be mapped to conceptual resources in other schemes and grouped into labelled or ordered collections. Concept labels can also be related to each other. Finally, the SKOS vocabulary itself can be extended to suit the needs of particular communities of practice.
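    As a small illustration of the "basic SKOS" features the primer lists (URI-identified concepts, multilingual labels, informal hierarchies, concept schemes), the following rdflib sketch builds a two-concept scheme; the namespace and concept names are made up for the example.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/vocab/")   # hypothetical concept scheme namespace
      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ex", EX)

      scheme, animals, cats = EX["scheme"], EX["animals"], EX["cats"]

      g.add((scheme, RDF.type, SKOS.ConceptScheme))
      for concept, labels in [(animals, {"en": "Animals", "fr": "Animaux"}),
                              (cats, {"en": "Cats", "fr": "Chats"})]:
          g.add((concept, RDF.type, SKOS.Concept))
          g.add((concept, SKOS.inScheme, scheme))
          for lang, text in labels.items():
              g.add((concept, SKOS.prefLabel, Literal(text, lang=lang)))

      # An informal hierarchy: cats is narrower than animals.
      g.add((animals, SKOS.narrower, cats))
      g.add((cats, SKOS.broader, animals))
      g.add((animals, SKOS.topConceptOf, scheme))

      print(g.serialize(format="turtle"))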
  15. Campbell, D.G.: Derrida, logocentrism, and the concept of warrant on the Semantic Web (2008) 0.02
    0.023192879 = product of:
      0.046385758 = sum of:
        0.033217333 = weight(_text_:subject in 2507) [ClassicSimilarity], result of:
          0.033217333 = score(doc=2507,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.19758089 = fieldWeight in 2507, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2507)
        0.013168425 = product of:
          0.02633685 = sum of:
            0.02633685 = weight(_text_:classification in 2507) [ClassicSimilarity], result of:
              0.02633685 = score(doc=2507,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.17593184 = fieldWeight in 2507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2507)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    The highly-structured data standards of the Semantic Web contain a promising venue for the migration of library subject access standards onto the World Wide Web. The new functionalities of the Web, however, along with the anticipated capabilities of intelligent Web agents, suggest that information on the Semantic Web will have much more flexibility, diversity and mutability. We need, therefore, a method for recognizing and assessing the principles whereby Semantic Web information can combine together in productive and useful ways. This paper will argue that the concept of warrant in traditional library science can provide a useful means of translating library knowledge structures into Web-based knowledge structures. Using Derrida's concept of logocentrism, this paper suggests that while "warrant" in library science traditionally alludes to the principles by which concepts are admitted into the design of a classification or access system, "warrant" on the Semantic Web alludes to the principles by which Web resources can be admitted into a network of information uses. Furthermore, library information practice suggests a far more complex network of warrant concepts that provide a subtlety and richness to knowledge organization that the Semantic Web has not yet attained.
  16. Isaac, A.; Schlobach, S.; Matthezing, H.; Zinn, C.: Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies (2008) 0.02
    0.018554304 = product of:
      0.037108608 = sum of:
        0.026573867 = weight(_text_:subject in 3398) [ClassicSimilarity], result of:
          0.026573867 = score(doc=3398,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.15806471 = fieldWeight in 3398, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=3398)
        0.01053474 = product of:
          0.02106948 = sum of:
            0.02106948 = weight(_text_:classification in 3398) [ClassicSimilarity], result of:
              0.02106948 = score(doc=3398,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.14074548 = fieldWeight in 3398, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3398)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - To show how semantic web techniques can help address semantic interoperability issues in the broad cultural heritage domain, allowing users an integrated and seamless access to heterogeneous collections. Design/methodology/approach - This paper presents the heterogeneity problems to be solved. It introduces semantic web techniques that can help in solving them, focusing on the representation of controlled vocabularies and their semantic alignment. It gives pointers to some previous projects and experiments that have tried to address the problems discussed. Findings - Semantic web research provides practical technical and methodological approaches to tackle the different issues. Two contributions of interest are the simple knowledge organisation system model and automatic vocabulary alignment methods and tools. These contributions were demonstrated to be usable for enabling semantic search and navigation across collections. Research limitations/implications - The research aims at designing different representation and alignment methods for solving interoperability problems in the context of controlled subject vocabularies. Given the variety and technical richness of current research in the semantic web field, it is impossible to provide an in-depth account or an exhaustive list of references. Every aspect of the paper is, however, given one or several pointers for further reading. Originality/value - This article provides a general and practical introduction to relevant semantic web techniques. It is of specific value for the practitioners in the cultural heritage and digital library domains who are interested in applying these methods in practice.
    Content
    This paper is based on a talk given at "Information Access for the Global Community, An International Seminar on the Universal Decimal Classification" held on 4-5 June 2007 in The Hague, The Netherlands. An abstract of this talk will be published in Extensions and Corrections to the UDC, an annual publication of the UDC consortium. Contribution to a special issue "Digital libraries and the semantic web: context, applications and research".
  17. Panzer, M.: Relationships, spaces, and the two faces of Dewey (2008) 0.02
    0.017866256 = product of:
      0.03573251 = sum of:
        0.0199304 = weight(_text_:subject in 2127) [ClassicSimilarity], result of:
          0.0199304 = score(doc=2127,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.11854853 = fieldWeight in 2127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2127)
        0.015802111 = product of:
          0.031604223 = sum of:
            0.031604223 = weight(_text_:classification in 2127) [ClassicSimilarity], result of:
              0.031604223 = score(doc=2127,freq=8.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.21111822 = fieldWeight in 2127, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2127)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    "When dealing with a large-scale and widely-used knowledge organization system like the Dewey Decimal Classification, we often tend to focus solely on the organization aspect, which is closely intertwined with editorial work. This is perfectly understandable, since developing and updating the DDC, keeping up with current scientific developments, spotting new trends in both scholarly communication and popular publishing, and figuring out how to fit those patterns into the structure of the scheme are as intriguing as they are challenging. From the organization perspective, the intended user of the scheme is mainly the classifier. Dewey acts very much as a number-building engine, providing richly documented concepts to help with classification decisions. Since the Middle Ages, quasi-religious battles have been fought over the "valid" arrangement of places according to specific views of the world, as parodied by Jorge Luis Borges and others. Organizing knowledge has always been primarily an ontological activity; it is about putting the world into the classification. However, there is another side to this coin--the discovery side. While the hierarchical organization of the DDC establishes a default set of places and neighborhoods that is also visible in the physical manifestation of library shelves, this is just one set of relationships in the DDC. A KOS (Knowledge Organization System) becomes powerful by expressing those other relationships in a manner that not only collocates items in a physical place but in a knowledge space, and exposes those other relationships in ways beneficial and congenial to the unique perspective of an information seeker.
    What are those "other" relationships that Dewey possesses and that seem so important to surface? Firstly, there is the relationship of concepts to resources. Dewey has been used for a long time, and over 200,000 numbers are assigned to information resources each year and added to WorldCat by the Library of Congress and the German National Library alone. Secondly, we have relationships between concepts in the scheme itself. Dewey provides a rich set of non-hierarchical relations, indicating other relevant and related subjects across disciplinary boundaries. Thirdly, perhaps most importantly, there is the relationship between the same concepts across different languages. Dewey has been translated extensively, and current versions are available in French, German, Hebrew, Italian, Spanish, and Vietnamese. Briefer representations of the top-three levels (the DDC Summaries) are available in several languages in the DeweyBrowser. This multilingual nature of the scheme allows searchers to access a broader range of resources or to switch the language of--and thus localize--subject metadata seamlessly. MelvilClass, a Dewey front-end developed by the German National Library for the German translation, could be used as a common interface to the DDC in any language, as it is built upon the standard DDC data format. It is not hard to give an example of the basic terminology of a class pulled together in a multilingual way:
      <class/794.8> a skos:Concept ;
          skos:notation "794.8"^^ddc:notation ;
          skos:prefLabel "Computer games"@en ;
          skos:prefLabel "Computerspiele"@de ;
          skos:prefLabel "Jeux sur ordinateur"@fr ;
          skos:prefLabel "Juegos por computador"@es .
    Expressed in such a manner, the Dewey number provides a language-independent representation of a Dewey concept, accompanied by language-dependent assertions about the concept. This information, identified by a URI, can be easily consumed by semantic web agents and used in various metadata scenarios. Fourthly, as we have seen, it is important to play well with others, i.e., establishing and maintaining relationships to other KOS and making the scheme available in different formats. As noted in the Dewey blog post "Tags and Dewey," since no single scheme is ever going to be the be-all, end-all solution for knowledge discovery, DDC concepts have been extensively mapped to other vocabularies and taxonomies, sometimes bridging them and acting as a backbone, sometimes using them as additional access vocabulary to be able to do more work "behind the scenes." To enable other applications and schemes to make use of those relationships, the full Dewey database is available in XML format; RDF-based formats and a web service are forthcoming. Pulling those relationships together under a common surface will be the next challenge going forward. In the semantic web community the concept of Linked Data (http://en.wikipedia.org/wiki/Linked_Data) currently receives some attention, with its emphasis on exposing and connecting data using technologies like URIs, HTTP and RDF to improve information discovery on the web. With its focus on relationships and discovery, it seems that Dewey will be well prepared to become part of this big linked data set. Now it is about putting the classification back into the world!"
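    As a hedged sketch of how a semantic web agent might consume the language-tagged labels in the Turtle fragment quoted above, rdflib can parse the fragment and pick out a prefLabel in the desired language. Prefixes and a base are added so the fragment parses standalone; the ddc: and base URIs below are hypothetical, not the ones used by the Dewey services.

      from rdflib import Graph
      from rdflib.namespace import SKOS

      # The Turtle fragment from the text, with hypothetical @base and ddc: prefix added.
      ttl = """
      @base <http://example.org/> .
      @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
      @prefix ddc:  <http://example.org/ddc/> .

      <class/794.8> a skos:Concept ;
          skos:notation "794.8"^^ddc:notation ;
          skos:prefLabel "Computer games"@en ;
          skos:prefLabel "Computerspiele"@de ;
          skos:prefLabel "Jeux sur ordinateur"@fr ;
          skos:prefLabel "Juegos por computador"@es .
      """

      g = Graph()
      g.parse(data=ttl, format="turtle")

      concept = next(g.subjects(SKOS.prefLabel, None))
      labels = {lit.language: str(lit) for lit in g.objects(concept, SKOS.prefLabel)}
      print(labels["fr"])   # -> Jeux sur ordinateur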
  18. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.02
    0.01645577 = product of:
      0.06582308 = sum of:
        0.06582308 = sum of:
          0.029796746 = weight(_text_:classification in 2654) [ClassicSimilarity], result of:
            0.029796746 = score(doc=2654,freq=4.0), product of:
              0.14969917 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.04700564 = queryNorm
              0.19904417 = fieldWeight in 2654, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.03125 = fieldNorm(doc=2654)
          0.03602633 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
            0.03602633 = score(doc=2654,freq=4.0), product of:
              0.16460574 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04700564 = queryNorm
              0.21886435 = fieldWeight in 2654, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2654)
      0.25 = coord(1/4)
    
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main-classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is a result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses ISO-2788 standardized structure and a classification scheme that is basically enumerative, but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in the granularity of the two original schemes and their presentation with various levels of SKOS elements; as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares the sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  19. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.02
    0.016235016 = product of:
      0.032470033 = sum of:
        0.023252133 = weight(_text_:subject in 553) [ClassicSimilarity], result of:
          0.023252133 = score(doc=553,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.13830662 = fieldWeight in 553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.02734375 = fieldNorm(doc=553)
        0.009217897 = product of:
          0.018435795 = sum of:
            0.018435795 = weight(_text_:classification in 553) [ClassicSimilarity], result of:
              0.018435795 = score(doc=553,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.12315229 = fieldWeight in 553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=553)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for the success of these is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return, for a query formulated using C, all the objects that were indexed against D. We thus have access to other collections using a single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed in order to provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced a lot of these alignment tools.¹ Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. Throughout our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using the description vocabulary used in the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the case for using unified representations of the vocabularies' semantic and lexical information. In addition to easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing more generic applications, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
    References
    [1] http://www.theeuropeanlibrary.org
    [2] http://www.geheugenvannederland.nl
    [3] http://macs.cenl.org
    [4] Day, M., Koch, T., Neuroth, H.: Searching and browsing multiple subject gateways in the Renardus service. In: Proceedings of the RC33 Sixth International Conference on Social Science Methodology, Amsterdam, 2005.
    [5] http://stitch.cs.vu.nl
    [6] http://mandragore.bnf.fr
    [7] http://www.iconclass.nl
    [8] www.w3.org/2004/02/skos/
    ¹ The Semantic Web vision supposes sharing data using different conceptualizations (ontologies), and therefore implies tackling the semantic interoperability problem.
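    The abstract's central idea, using a mapping "concept C in vocabulary V is equivalent to concept D in vocabulary W" to answer a query formulated with C against a collection indexed with D, can be sketched with a tiny lookup. The mappings, identifiers and collection objects below are invented for illustration, not taken from the STITCH pilot.

      # Hypothetical equivalence mappings between two vocabularies (as an alignment
      # tool might produce): concept in vocabulary V -> concept in vocabulary W.
      mappings = {
          "mandragore:chien": "iconclass:34B11",   # illustrative identifiers
          "mandragore:roi":   "iconclass:44B113",
      }

      # Objects indexed with vocabulary W only.
      index_w = {
          "iconclass:34B11": ["ms_fr_0012_f.23r", "ms_lat_0456_f.5v"],
          "iconclass:44B113": ["ms_fr_0099_f.1r"],
      }

      def search(concept, mappings, index):
          """Return objects indexed with the concept itself or with any mapped equivalent."""
          hits = list(index.get(concept, []))
          equivalent = mappings.get(concept)
          if equivalent:
              hits.extend(index.get(equivalent, []))
          return hits

      print(search("mandragore:chien", mappings, index_w))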
  20. Luo, Y.; Picalausa, F.; Fletcher, G.H.L.; Hidders, J.; Vansummeren, S.: Storing and indexing massive RDF datasets (2012) 0.01
    0.011744101 = product of:
      0.046976402 = sum of:
        0.046976402 = weight(_text_:subject in 414) [ClassicSimilarity], result of:
          0.046976402 = score(doc=414,freq=4.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.27942157 = fieldWeight in 414, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=414)
      0.25 = coord(1/4)
    
    Abstract
    The resource description framework (RDF for short) provides a flexible method for modeling information on the Web [34,40]. All data items in RDF are uniformly represented as triples of the form (subject, predicate, object), sometimes also referred to as (subject, property, value) triples. As a running example for this chapter, a small fragment of an RDF dataset concerning music and music fans is given in Fig. 2.1. Spurred by efforts like the Linking Open Data project, increasingly large volumes of data are being published in RDF. Notable contributors in this respect include areas as diverse as government, the life sciences, Web 2.0 communities, and so on. To give an idea of the volumes of RDF data concerned, as of September 2012, there are 31,634,213,770 triples in total published by data sources participating in the Linking Open Data project. Many individual data sources (like, e.g., PubMed, DBpedia, MusicBrainz) contain hundreds of millions of triples (797, 672, and 179 million triples, respectively). These large volumes of RDF data motivate the need for scalable native RDF data management solutions capable of efficiently storing, indexing, and querying RDF data. In this chapter, we present a general and up-to-date survey of the current state of the art in RDF storage and indexing.
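    As a toy illustration of the storage-and-indexing problem the chapter surveys (not of any particular system discussed in it), a naive in-memory store can keep the same (subject, predicate, object) triples under several permutation indexes so that different triple patterns can be answered by dictionary lookups; the example triples are invented.

      from collections import defaultdict

      class TinyTripleStore:
          """Naive in-memory RDF store with SPO, POS and OSP permutation indexes."""

          def __init__(self):
              self.spo = defaultdict(lambda: defaultdict(set))
              self.pos = defaultdict(lambda: defaultdict(set))
              self.osp = defaultdict(lambda: defaultdict(set))

          def add(self, s, p, o):
              self.spo[s][p].add(o)
              self.pos[p][o].add(s)
              self.osp[o][s].add(p)

          def objects(self, s, p):
              return self.spo[s][p]

          def subjects(self, p, o):
              return self.pos[p][o]

      store = TinyTripleStore()
      store.add("ex:alice", "ex:likes", "ex:jazz")     # invented example triples
      store.add("ex:bob", "ex:likes", "ex:jazz")
      print(store.subjects("ex:likes", "ex:jazz"))     # -> {'ex:alice', 'ex:bob'}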

Languages

  • e 84
  • d 7

Types

  • a 66
  • el 19
  • m 10
  • s 4
  • n 2
  • x 1