Search (31 results, page 1 of 2)

  • × theme_ss:"Metadaten"
  • × type_ss:"a"
  • × type_ss:"el"
  1. Bohne-Lang, A.: Semantische Metadaten für den Webauftritt einer Bibliothek (2016) 0.02
    0.01660582 = product of:
      0.07749383 = sum of:
        0.02465703 = weight(_text_:web in 3337) [ClassicSimilarity], result of:
          0.02465703 = score(doc=3337,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 3337, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3337)
        0.047791965 = weight(_text_:bibliothek in 3337) [ClassicSimilarity], result of:
          0.047791965 = score(doc=3337,freq=6.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.39283025 = fieldWeight in 3337, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3337)
        0.0050448296 = weight(_text_:information in 3337) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=3337,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 3337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3337)
      0.21428572 = coord(3/14)
    
    Abstract
    The Semantic Web has attracted considerable attention for more than ten years and, with the availability of the Resource Description Framework (RDF) and the corresponding ontologies, has made a major leap into practice. In everyday work, however, staff of small libraries and librarians with little technical affinity face major hurdles, for example when asking how this technology can actually be integrated into their own website: one feels like Don Quixote tilting at windmills. RDF with its ontologies is almost incomprehensibly complex for non-computer scientists and is therefore not directly usable for broad, practical deployment on library websites. With Schema.org, the world's three largest search engines, Google, Bing and Yahoo, originally developed a simple and effective semantic description of entities. Schema.org is currently sponsored by Google, Microsoft, Yahoo and Yandex and is understood by many other search engines. Against this background, the library of the Medizinische Fakultät Mannheim has embedded various machine-readable semantic metadata in its homepage (http://www.umm.uni-heidelberg.de/bibl/). Particularly interesting and forward-looking is the most recent development of Schema.org, which allows a 'Library' (https://schema.org/Library) to be modelled with opening hours and much more. In addition, we have embedded semantic metadata in the Open Graph and Dublin Core formats in order to make legacy standards and Facebook-compliant information available in machine-readable form.
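    As a rough illustration of the kind of markup the abstract describes, the following Python sketch assembles a schema.org/Library description as JSON-LD and prints the script element that could be placed in a page's head. Only the institution name and URL come from the abstract; the opening hours and the choice of JSON-LD over microdata or RDFa are illustrative assumptions, not details from the article.

    import json

    # Hypothetical example values; only the URL and institution name are taken
    # from the abstract above, everything else is a placeholder.
    library = {
        "@context": "https://schema.org",
        "@type": "Library",
        "name": "Bibliothek der Medizinischen Fakultät Mannheim",
        "url": "http://www.umm.uni-heidelberg.de/bibl/",
        "openingHoursSpecification": [{
            "@type": "OpeningHoursSpecification",
            "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
            "opens": "08:00",
            "closes": "22:00",
        }],
    }

    # Print the <script> element that would be embedded in the page's <head>.
    print('<script type="application/ld+json">')
    print(json.dumps(library, ensure_ascii=False, indent=2))
    print('</script>')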
    Source
    GMS Medizin - Bibliothek - Information. 16(2016) Nr.3, 11 S. [http://www.egms.de/static/pdf/journals/mbi/2017-16/mbi000372.pdf]
    Theme
    Semantic Web
  2. Baker, T.: ¬A grammar of Dublin Core (2000) 0.01
    0.014491113 = product of:
      0.050718892 = sum of:
        0.025709987 = weight(_text_:wide in 1236) [ClassicSimilarity], result of:
          0.025709987 = score(doc=1236,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.013948122 = weight(_text_:web in 1236) [ClassicSimilarity], result of:
          0.013948122 = score(doc=1236,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.005707573 = weight(_text_:information in 1236) [ClassicSimilarity], result of:
          0.005707573 = score(doc=1236,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.10971737 = fieldWeight in 1236, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.0053532133 = product of:
          0.016059639 = sum of:
            0.016059639 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.016059639 = score(doc=1236,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
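    To make the "statement" reading of Dublin Core concrete, here is a minimal sketch using rdflib, in which an element and a refining qualifier each act as the predicate of a simple statement about a resource. The resource URI and the values are hypothetical, not taken from the paper.

    from rdflib import Graph, Literal, Namespace, URIRef

    DC = Namespace("http://purl.org/dc/elements/1.1/")    # the 15 core elements
    DCTERMS = Namespace("http://purl.org/dc/terms/")      # refinements ("qualifiers")

    g = Graph()
    doc = URIRef("http://example.org/report/42")          # hypothetical resource

    # "Noun-like" element: the resource has a creator.
    g.add((doc, DC.creator, Literal("Jane Doe")))
    # "Adjective-like" qualifier: dcterms:issued refines the broad dc:date element.
    g.add((doc, DCTERMS.issued, Literal("2000-07-07")))

    print(g.serialize(format="turtle"))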
    Date
    26.12.2011 14:01:22
  3. Duval, E.; Hodgins, W.; Sutton, S.; Weibel, S.L.: Metadata principles and practicalities (2002) 0.01
    0.009996092 = product of:
      0.046648428 = sum of:
        0.025709987 = weight(_text_:wide in 1208) [ClassicSimilarity], result of:
          0.025709987 = score(doc=1208,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 1208, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1208)
        0.013948122 = weight(_text_:web in 1208) [ClassicSimilarity], result of:
          0.013948122 = score(doc=1208,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 1208, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1208)
        0.0069903214 = weight(_text_:information in 1208) [ClassicSimilarity], result of:
          0.0069903214 = score(doc=1208,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1343758 = fieldWeight in 1208, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1208)
      0.21428572 = coord(3/14)
    
    Abstract
    For those of us still struggling with basic concepts regarding metadata in this brave new world in which cataloging means much more than MARC, an article like this is welcome indeed. In this 30,000-foot overview of the metadata landscape, broad issues such as modularity, namespaces, extensibility, refinement, and multilingualism are discussed. In addition, "practicalities" like application profiles, syntax and semantics, metadata registries, and automated generation of metadata are explained. Although this piece is not exhaustive of high-level metadata issues, it is nonetheless a useful description of some of the most important issues surrounding metadata creation and use. The rapid changes in the means of information access occasioned by the emergence of the World Wide Web have spawned an upheaval in the means of describing and managing information resources. Metadata is a primary tool in this work, and an important link in the value chain of knowledge economies. Yet there is much confusion about how metadata should be integrated into information systems. How is it to be created or extended? Who will manage it? How can it be used and exchanged? Whence comes its authority? Can different metadata standards be used together in a given environment? These and related questions motivate this paper. The authors hope to make explicit the strong foundations of agreement shared by two prominent metadata initiatives: the Dublin Core Metadata Initiative (DCMI) and the Institute of Electrical and Electronics Engineers (IEEE) Learning Object Metadata (LOM) Working Group. This agreement emerged from a joint metadata taskforce meeting in Ottawa in August, 2001. By elucidating shared principles and practicalities of metadata, we hope to raise the level of understanding among our respective (and shared) constituents, so that all stakeholders can move forward more decisively to address their respective problems. The ideas in this paper are divided into two categories. Principles are those concepts judged to be common to all domains of metadata and which might inform the design of any metadata schema or application. Practicalities are the rules of thumb, constraints, and infrastructure issues that emerge from bringing theory into practice in the form of useful and sustainable systems.
  4. Baker, T.; Dekkers, M.: Identifying metadata elements with URIs : The CORES resolution (2003) 0.01
    0.009362994 = product of:
      0.04369397 = sum of:
        0.025709987 = weight(_text_:wide in 1199) [ClassicSimilarity], result of:
          0.025709987 = score(doc=1199,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 1199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1199)
        0.013948122 = weight(_text_:web in 1199) [ClassicSimilarity], result of:
          0.013948122 = score(doc=1199,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 1199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1199)
        0.0040358636 = weight(_text_:information in 1199) [ClassicSimilarity], result of:
          0.0040358636 = score(doc=1199,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.0775819 = fieldWeight in 1199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1199)
      0.21428572 = coord(3/14)
    
    Abstract
    On 18 November 2002, at a meeting organised by the CORES Project (Information Society Technologies Programme, European Union), several organisations regarded as maintenance authorities for metadata elements achieved consensus on a resolution to assign Uniform Resource Identifiers (URIs) to metadata elements as a useful first step towards the development of mapping infrastructures and interoperability services. The signatories of the CORES Resolution agreed to promote this consensus in their communities and beyond and to implement an action plan in the following six months. Six months having passed, the maintainers of GILS, ONIX, MARC 21, CERIF, DOI, IEEE/LOM, and Dublin Core report on their implementations of the resolution and highlight issues of relevance to establishing good-practice conventions for declaring, identifying, and maintaining metadata elements more generally. In June 2003, the resolution was also endorsed by the maintainers of UNIMARC. The "Resolution on Metadata Element Identifiers", or CORES Resolution, is an agreement among the maintenance organisations for several major metadata standards - GILS, ONIX, MARC 21, UNIMARC, CERIF, DOI®, IEEE/LOM, and Dublin Core - to identify their metadata elements using Uniform Resource Identifiers (URIs). The Uniform Resource Identifier, defined in the IETF RFC 2396 as "a compact string of characters for identifying an abstract or physical resource", has been promoted for use as a universal form of identification by the World Wide Web Consortium. The CORES Resolution, formulated at a meeting organised by the European project CORES in November 2002, included a commitment to publicise the consensus statement to a wider audience of metadata standards initiatives and to implement key points of the agreement within the following six months - specifically, to define URI assignment mechanisms, assign URIs to elements, and formulate policies for the persistence of those URIs. This article marks the passage of six months by reporting on progress made in implementing this common action plan. After presenting the text of the CORES Resolution and its three "clarifications", the article summarises the position of each signatory organisation towards assigning URIs to its metadata elements, noting any practical or strategic problems that may have emerged. These progress reports were based on input from Thomas Baker, José Borbinha, Eliot Christian, Erik Duval, Keith Jeffery, Rebecca Guenther, and Norman Paskin. The article closes with a few general observations about these first steps towards the clarification of shared conventions for the identification of metadata elements and perhaps, one can hope, towards the ultimate goal of improving interoperability among a diversity of metadata communities.
  5. Cranefield, S.: Networked knowledge representation and exchange using UML and RDF (2001) 0.01
    0.006678987 = product of:
      0.046752907 = sum of:
        0.034519844 = weight(_text_:web in 5896) [ClassicSimilarity], result of:
          0.034519844 = score(doc=5896,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.35694647 = fieldWeight in 5896, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5896)
        0.012233062 = weight(_text_:information in 5896) [ClassicSimilarity], result of:
          0.012233062 = score(doc=5896,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23515764 = fieldWeight in 5896, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5896)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets has been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web.
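    A minimal sketch of the transformation step described above, assuming an XMI export of the class diagram and an XSLT stylesheet comparable to the ones the paper mentions; the file names are placeholders, and the stylesheet stands in for the author's, which is not reproduced here.

    from lxml import etree

    xmi_doc = etree.parse("ontology.xmi")          # XMI serialisation of the UML class diagram
    stylesheet = etree.parse("xmi_to_rdfs.xsl")    # placeholder for an XMI-to-RDF-schema stylesheet
    transform = etree.XSLT(stylesheet)

    rdf_schema = transform(xmi_doc)                # resulting RDF schema document
    with open("ontology.rdfs.xml", "wb") as out:
        out.write(etree.tostring(rdf_schema, pretty_print=True))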
    Source
    Journal of digital information. 1(2001) no.8
  6. Greenberg, J.; Pattuelli, M.; Parsia, B.; Robertson, W.: Author-generated Dublin Core Metadata for Web Resources : A Baseline Study in an Organization (2002) 0.01
    0.006422852 = product of:
      0.044959962 = sum of:
        0.034870304 = weight(_text_:web in 1281) [ClassicSimilarity], result of:
          0.034870304 = score(doc=1281,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.36057037 = fieldWeight in 1281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=1281)
        0.010089659 = weight(_text_:information in 1281) [ClassicSimilarity], result of:
          0.010089659 = score(doc=1281,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 1281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1281)
      0.14285715 = coord(2/14)
    
    Source
    Journal of digital information. 2(2002) no.2
  7. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.01
    0.0062901457 = product of:
      0.044031017 = sum of:
        0.038986187 = weight(_text_:web in 3895) [ClassicSimilarity], result of:
          0.038986187 = score(doc=3895,freq=10.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.40312994 = fieldWeight in 3895, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
        0.0050448296 = weight(_text_:information in 3895) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=3895,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 3895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
      0.14285715 = coord(2/14)
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted which is usually done with the help of software developers as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes or funding agencies. As data curation is a typical task that is done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore we would like to present a web scraping tool that does not demand the digital library curators to program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath, that allows the user to define data to be extracted from websites in a declarative way. By taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
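    OXPath itself is a declarative extension of XPath with its own actions and extraction markers and is not reproduced here; the following Python sketch only illustrates the underlying idea of declaratively addressing metadata fields with XPath expressions, using lxml against a hypothetical publisher page. The URL and the expressions are made up for illustration.

    import requests
    from lxml import html

    page = requests.get("https://publisher.example.org/issue/2017-1")   # placeholder URL
    tree = html.fromstring(page.content)

    records = []
    for item in tree.xpath("//div[@class='article']"):                  # hypothetical page structure
        records.append({
            "title": item.xpath("string(.//h2)"),
            "authors": item.xpath(".//span[@class='author']/text()"),
            "doi": item.xpath("string(.//a[@class='doi'])"),
        })

    print(records)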
  8. Heery, R.; Wagner, H.: ¬A metadata registry for the Semantic Web (2002) 0.01
    0.0059439926 = product of:
      0.041607946 = sum of:
        0.03661382 = weight(_text_:web in 1210) [ClassicSimilarity], result of:
          0.03661382 = score(doc=1210,freq=18.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37859887 = fieldWeight in 1210, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1210)
        0.0049941265 = weight(_text_:information in 1210) [ClassicSimilarity], result of:
          0.0049941265 = score(doc=1210,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.0960027 = fieldWeight in 1210, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1210)
      0.14285715 = coord(2/14)
    
    Abstract
    The Semantic Web activity is a W3C project whose goal is to enable a 'cooperative' Web where machines and humans can exchange electronic content that has clear-cut, unambiguous meaning. This vision is based on the automated sharing of metadata terms across Web applications. The declaration of schemas in metadata registries advance this vision by providing a common approach for the discovery, understanding, and exchange of semantics. However, many of the issues regarding registries are not clear, and ideas vary regarding their scope and purpose. Additionally, registry issues are often difficult to describe and comprehend without a working example. This article will explore the role of metadata registries and will describe three prototypes, written by the Dublin Core Metadata Initiative. The article will outline how the prototypes are being used to demonstrate and evaluate application scope, functional requirements, and technology solutions for metadata registries. Metadata schema registries are, in effect, databases of schemas that can trace an historical line back to shared data dictionaries and the registration process encouraged by the ISO/IEC 11179 community. New impetus for the development of registries has come with the development activities surrounding creation of the Semantic Web. The motivation for establishing registries arises from domain and standardization communities, and from the knowledge management community. Examples of current registry activity include:
    * Agencies maintaining directories of data elements in a domain area in accordance with ISO/IEC 11179 (This standard specifies good practice for data element definition as well as the registration process. Example implementations are the National Health Information Knowledgebase hosted by the Australian Institute of Health and Welfare and the Environmental Data Registry hosted by the US Environmental Protection Agency.); * The xml.org directory of the Extended Markup Language (XML) document specifications facilitating re-use of Document Type Definition (DTD), hosted by the Organization for the Advancement of Structured Information Standards (OASIS); * The MetaForm database of Dublin Core usage and mappings maintained at the State and University Library in Goettingen; * The Semantic Web Agreement Group Dictionary, a database of terms for the Semantic Web that can be referred to by humans and software agents; * LEXML, a multi-lingual and multi-jurisdictional RDF Dictionary for the legal world; * The SCHEMAS registry maintained by the European Commission funded SCHEMAS project, which indexes several metadata element sets as well as a large number of activity reports describing metadata related activities and initiatives. Metadata registries essentially provide an index of terms. Given the distributed nature of the Web, there are a number of ways this can be accomplished. For example, the registry could link to terms and definitions in schemas published by implementers and stored locally by the schema maintainer. Alternatively, the registry might harvest various metadata schemas from their maintainers. Registries provide 'added value' to users by indexing schemas relevant to a particular 'domain' or 'community of use' and by simplifying the navigation of terms by enabling multiple schemas to be accessed from one view. An important benefit of this approach is an increase in the reuse of existing terms, rather than users having to reinvent them. Merging schemas to one view leads to harmonization between applications and helps avoid duplication of effort. Additionally, the establishment of registries to index terms actively being used in local implementations facilitates the metadata standards activity by providing implementation experience transferable to the standards-making process.
    Theme
    Semantic Web
  9. Godby, C.J.; Young, J.A.; Childress, E.: ¬A repository of metadata crosswalks (2004) 0.00
    0.0049139243 = product of:
      0.034397468 = sum of:
        0.024409214 = weight(_text_:web in 1155) [ClassicSimilarity], result of:
          0.024409214 = score(doc=1155,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 1155, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1155)
        0.009988253 = weight(_text_:information in 1155) [ClassicSimilarity], result of:
          0.009988253 = score(doc=1155,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1920054 = fieldWeight in 1155, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1155)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
  10. Lagoze, C.: Keeping Dublin Core simple : Cross-domain discovery or resource description? (2001) 0.00
    0.004603468 = product of:
      0.032224275 = sum of:
        0.02465703 = weight(_text_:web in 1216) [ClassicSimilarity], result of:
          0.02465703 = score(doc=1216,freq=16.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 1216, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1216)
        0.0075672437 = weight(_text_:information in 1216) [ClassicSimilarity], result of:
          0.0075672437 = score(doc=1216,freq=18.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.14546604 = fieldWeight in 1216, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1216)
      0.14285715 = coord(2/14)
    
    Abstract
    Reality is messy. Individuals perceive or define objects differently. Objects may change over time, morphing into new versions of their former selves or into things altogether different. A book can give rise to a translation, derivation, or edition, and these resulting objects are related in complex ways to each other and to the people and contexts in which they were created or transformed. Providing a normalized view of such a messy reality is a precondition for managing information. From the first library catalogs, through Melvil Dewey's Decimal Classification system in the nineteenth century, to today's MARC encoding of AACR2 cataloging rules, libraries have epitomized the process of what David Levy calls "order making", whereby catalogers impose a veneer of regularity on the natural disorder of the artifacts they encounter. The pre-digital library within which the Catalog and its standards evolved was relatively self-contained and controlled. Creating and maintaining catalog records was, and still is, the task of professionals. Today's Web, in contrast, has brought together a diversity of information management communities, with a variety of order-making standards, into what Stuart Weibel has called the Internet Commons. The sheer scale of this context has motivated a search for new ways to describe and index information. Second-generation search engines such as Google can yield astonishingly good search results, while tools such as ResearchIndex for automatic citation indexing and techniques for inferring "Web communities" from constellations of hyperlinks promise even better methods for focusing queries on information from authoritative sources. Such "automated digital libraries," according to Bill Arms, promise to radically reduce the cost of managing information. Alongside the development of such automated methods, there is increasing interest in metadata as a means of imposing pre-defined order on Web content. While the size and changeability of the Web makes professional cataloging impractical, a minimal amount of information ordering, such as that represented by the Dublin Core (DC), may vastly improve the quality of an automatic index at low cost; indeed, recent work suggests that some types of simple description may be generated with little or no human intervention.
    Metadata is not monolithic. Instead, it is helpful to think of metadata as multiple views that can be projected from a single information object. Such views can form the basis of customized information services, such as search engines. Multiple views -- different types of metadata associated with a Web resource -- can facilitate a "drill-down" search paradigm, whereby people start their searches at a high level and later narrow their focus using domain-specific search categories. In Figure 1, for example, Mona Lisa may be viewed from the perspective of non-specialized searchers, with categories that are valid across domains (who painted it and when?); in the context of a museum (when and how was it acquired?); in the geo-spatial context of a walking tour using mobile devices (where is it in the gallery?); and in a legal framework (who owns the rights to its reproduction?). Multiple descriptive views imply a modular approach to metadata. Modularity is the basis of metadata architectures such as the Resource Description Framework (RDF), which permit different communities of expertise to associate and maintain multiple metadata packages for Web resources. As noted elsewhere, static association of multiple metadata packages with resources is but one way of achieving modularity. Another method is to computationally derive order-making views customized to the current needs of a client. This paper examines the evolution and scope of the Dublin Core from this perspective of metadata modularization. Dublin Core began in 1995 with a specific goal and scope -- as an easy-to-create and maintain descriptive format to facilitate cross-domain resource discovery on the Web. Over the years, this goal of "simple metadata for coarse-granularity discovery" came to mix with another goal -- that of community and domain-specific resource description and its attendant complexity. A notion of "qualified Dublin Core" evolved whereby the model for simple resource discovery -- a set of simple metadata elements in a flat, document-centric model -- would form the basis of more complex descriptions by treating the values of its elements as entities with properties ("component elements") in their own right.
    At the time of writing, the Dublin Core Metadata Initiative (DCMI) has clarified its commitment to the simple approach. The qualification principles announced in early 2000 support the use of DC elements as the basis for simple statements about resources, rather than as the foundation for more descriptive clauses. This paper takes a critical look at some of the issues that led up to this renewed commitment to simplicity. We argue that: * There remains a compelling need for simple, "pidgin" metadata. From a technical and economic perspective, document-centric metadata, where simple string values are associated with a finite set of properties, is most appropriate for generic, cross-domain discovery queries in the Internet Commons. Such metadata is not necessarily fixed in physical records, but may be projected algorithmically from more complex metadata or from content itself. * The Dublin Core, while far from perfect from an engineering perspective, is an acceptable standard for such simple metadata. Agreements in the global information space are as much social as technical, and the process by which the Dublin Core has been developed, involving a broad cross-section of international participants, is a model for such "socially developed" standards. * Efforts to introduce complexity into Dublin Core are misguided. Complex descriptions may be necessary for some Web resources and for some purposes, such as administration, preservation, and reference linking. However, complex descriptions require more expressive data models that differentiate between agents, documents, contexts, events, and the like. An attempt to intermix simplicity and complexity, and the data models most appropriate for them, defeats the equally noble goals of cross-domain description and extensive resource description. * The principle of modularity suggests that metadata formats tailored for simplicity be used alongside others tailored for complexity.
  11. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.00
    0.004478364 = product of:
      0.031348545 = sum of:
        0.02465703 = weight(_text_:web in 4550) [ClassicSimilarity], result of:
          0.02465703 = score(doc=4550,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 4550, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4550)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
              0.020074548 = score(doc=4550,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4550)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
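    A minimal sketch of the lookup-and-write step this abstract describes, assuming the Habanero CrossRef client and a handful of DOIs; the column names and field mapping are illustrative rather than the authors' actual template.

    import csv
    from habanero import Crossref

    dois = ["10.1000/example.1", "10.1000/example.2"]   # placeholder DOIs
    cr = Crossref()

    with open("batch_import.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["doi", "title", "authors", "issued"])
        for doi in dois:
            msg = cr.works(ids=doi)["message"]           # CrossRef metadata for one DOI
            title = (msg.get("title") or [""])[0]
            authors = "; ".join(
                f"{a.get('family', '')}, {a.get('given', '')}" for a in msg.get("author", [])
            )
            year = msg.get("issued", {}).get("date-parts", [[None]])[0][0]
            writer.writerow([doi, title, authors, year])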
    Date
    10.11.2018 16:27:22
  12. Baker, T.: Languages for Dublin Core (1998) 0.00
    0.0022479983 = product of:
      0.015735988 = sum of:
        0.012204607 = weight(_text_:web in 1257) [ClassicSimilarity], result of:
          0.012204607 = score(doc=1257,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.12619963 = fieldWeight in 1257, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1257)
        0.0035313808 = weight(_text_:information in 1257) [ClassicSimilarity], result of:
          0.0035313808 = score(doc=1257,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.06788416 = fieldWeight in 1257, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1257)
      0.14285715 = coord(2/14)
    
    Abstract
    Over the past three years, the Dublin Core Metadata Initiative has achieved a broad international consensus on the semantics of a simple element set for describing electronic resources. Since the first workshop in March 1995, which was reported in the very first issue of D-Lib Magazine, Dublin Core has been the topic of perhaps a dozen articles here. Originally intended to be simple and intuitive enough for authors to tag Web pages without special training, Dublin Core is being adapted now for more specialized uses, from government information and legal deposit to museum informatics and electronic commerce. To meet such specialized requirements, Dublin Core can be customized with additional elements or qualifiers. However, these refinements can compromise interoperability across applications. There are tradeoffs between using specific terms that precisely meet local needs versus general terms that are understood more widely. We can better understand this inevitable tension between simplicity and complexity if we recognize that metadata is a form of human language. With Dublin Core, as with a natural language, people are inclined to stretch definitions, make general terms more specific, specific terms more general, misunderstand intended meanings, and coin new terms. One goal of this paper, therefore, will be to examine the experience of some related ways to seek semantic interoperability through simplicity: planned languages, interlingua constructs, and pidgins. The problem of semantic interoperability is compounded when we consider Dublin Core in translation. All of the workshops, documents, mailing lists, user guides, and working group outputs of the Dublin Core Initiative have been in English. But in many countries and for many applications, people need a metadata standard in their own language. In principle, the broad elements of Dublin Core can be defined equally well in Bulgarian or Hindi. Since Dublin Core is a controlled standard, however, any parallel definitions need to be kept in sync as the standard evolves. Another goal of the paper, then, will be to define the conceptual and organizational problem of maintaining a metadata standard in multiple languages. In addition to a name and definition, which are meant for human consumption, each Dublin Core element has a label, or indexing token, meant for harvesting by search engines. For practical reasons, these machine-readable tokens are English-looking strings such as Creator and Subject (just as HTML tags are called HEAD, BODY, or TITLE). These tokens, which are shared by Dublin Cores in every language, ensure that metadata fields created in any particular language are indexed together across repositories. As symbols of underlying universal semantics, these tokens form the basis of semantic interoperability among the multiple Dublin Cores. As long as we limit ourselves to sharing these indexing tokens among exact translations of a simple set of fifteen broad elements, the definitions of which fit easily onto two pages, the problem of Dublin Core in multiple languages is straightforward. But nothing having to do with human language is ever so simple. Just as speakers of various languages must learn the language of Dublin Core in their own tongues, we must find the right words to talk about a metadata language that is expressable in many discipline-specific jargons and natural languages and that inevitably will evolve and change over time.
  13. Söhler, M.: Schluss mit Schema F (2011) 0.00
    0.0022277823 = product of:
      0.03118895 = sum of:
        0.03118895 = weight(_text_:web in 4439) [ClassicSimilarity], result of:
          0.03118895 = score(doc=4439,freq=10.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.32250395 = fieldWeight in 4439, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4439)
      0.071428575 = coord(1/14)
    
    Abstract
    With Schema.org and the Semantic Web, search engines are supposed to learn to understand
    Content
    "Wörter haben oft mehrere Bedeutungen. Einige kennen den "Kanal" als künstliche Wasserstraße, andere vom Fernsehen. Die Waage kann zum Erfassen des Gewichts nützlich sein oder zur Orientierung auf der Horoskopseite. Casablanca ist eine Stadt und ein Film zugleich. Wo Menschen mit der Zeit Bedeutungen unterscheiden und verarbeiten lernen, können dies Suchmaschinen von selbst nicht. Stets listen sie dumpf hintereinander weg alles auf, was sie zu einem Thema finden. Damit das nicht so bleibt, haben sich nun Google, Yahoo und die zu Microsoft gehörende Suchmaschine Bing zusammengetan, um der Suche im Netz mehr Verständnis zu verpassen. Man spricht dabei auch von einer "semantischen Suche". Das Ergebnis heißt Schema.org. Wer die Webseite einmal besucht, sich ein wenig in die Unterstrukturen hereinklickt und weder Vorkenntnisse im Programmieren noch im Bereich des semantischen Webs hat, wird sich überfordert und gelangweilt wieder abwenden. Doch was hier entstehen könnte, hat das Zeug dazu, Teile des Netzes und speziell die Funktionen von Suchmaschinen mittel- oder langfristig zu verändern. "Große Player sind dabei, sich auf Standards zu einigen", sagt Daniel Bahls, Spezialist für Semantische Technologien beim ZBW Leibniz-Informationszentrum Wirtschaft in Hamburg. "Die semantischen Technologien stehen schon seit Jahren im Raum und wurden bisher nur im kleineren Kontext verwendet." Denn Schema.org lädt Entwickler, Forscher, die Semantic-Web-Community und am Ende auch alle Betreiber von Websites dazu ein, an der Umgestaltung der Suche im Netz mitzuwirken. Inhalte von Websites sollen mit einem speziellen, aber einheitlichen Vokabular für die Crawler - die Analyseprogramme der Suchmaschinen - gekennzeichnet und aufbereitet werden.
    Indem Schlagworte, sogenannte Tags, in den für Normal-User nicht sichtbaren Teil des Codes von Websites eingebettet werden, sind Suchmachinen nicht mehr so sehr auf die Analyse der natürlichen Sprache angewiesen, um Texte inhaltlich zu erfassen. Im Blog ZBW Mediatalk wird dies als "Semantic Web light" bezeichnet - ein semantisches Web auf niedrigster Ebene. Aber selbst das werde "schon viel bewirken", meint Bahls. "Das semantische Web wird sich über die nächsten Jahrzehnte evolutionär weiterentwickeln." Einen "Abschluss" werde es nie geben, "da eine einheitliche Formalisierung von Begrifflichkeiten auf feiner Stufe kaum möglich ist". Die Ergebnisse aus Schema.org würden "zeitnah" in die Suchmaschine integriert, "denn einen Zeitplan" gebe es nicht, so Stefan Keuchel, Pressesprecher von Google Deutschland. Bis das so weit ist, hilft der Verweis von Daniel Bahns auf die bereits existierende semantische Suchmaschine Sig.ma. Geschwindigkeit und Menge der Ergebnisse nach einer Suchanfrage spielen hier keine Rolle. Sig.ma sammelt seine Informationen allein im Bereich des semantischen Webs und listet nach einer Anfrage alles Bekannte strukturiert auf.
  14. Wallis, R.; Isaac, A.; Charles, V.; Manguinhas, H.: Recommendations for the application of Schema.org to aggregated cultural heritage metadata to increase relevance and visibility to search engines : the case of Europeana (2017) 0.00
    0.0017612164 = product of:
      0.02465703 = sum of:
        0.02465703 = weight(_text_:web in 3372) [ClassicSimilarity], result of:
          0.02465703 = score(doc=3372,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 3372, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3372)
      0.071428575 = coord(1/14)
    
    Abstract
    Europeana provides access to more than 54 million cultural heritage objects through its portal Europeana Collections. It is crucial for Europeana to be recognized by search engines as a trusted authoritative repository of cultural heritage objects. Indeed, even though its portal is the main entry point, most Europeana users come to it via search engines. Europeana Collections is fuelled by metadata describing cultural objects, represented in the Europeana Data Model (EDM). This paper presents the research and consequent recommendations for publishing Europeana metadata using the Schema.org vocabulary and best practices. Schema.org metadata is embedded in HTML and consumed by search engines to power rich services (such as the Google Knowledge Graph). Schema.org is an open and widely adopted initiative (used by over 12 million domains) backed by Google, Bing, Yahoo!, and Yandex for sharing metadata across the web. It underpins the emergence of new web techniques, such as so-called Semantic SEO. Our research addressed the representation of the embedded metadata as part of the Europeana HTML pages and sitemaps so that the re-use of this data can be optimized. The practical objective of our work is to produce a Schema.org representation of Europeana resources described in EDM that is as rich as possible and tailored to Europeana's realities and user needs, as well as to the search engines and their users.
  15. Söhler, M.: "Dumm wie Google" war gestern : semantische Suche im Netz (2011) 0.00
    0.0017435154 = product of:
      0.024409214 = sum of:
        0.024409214 = weight(_text_:web in 4440) [ClassicSimilarity], result of:
          0.024409214 = score(doc=4440,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 4440, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4440)
      0.071428575 = coord(1/14)
    
    Content
    - New standards Yet what could emerge here has the potential to change parts of the net, and the functions of search engines in particular, in the medium or long term. "Big players are in the process of agreeing on standards," says Daniel Bahls, specialist for semantic technologies at the ZBW Leibniz-Informationszentrum Wirtschaft in Hamburg. "The semantic technologies have been around for years and have so far only been used in smaller contexts." Schema.org invites developers, researchers, the Semantic Web community and, ultimately, all website operators to take part in reshaping search on the net. "With this, Google, Bing and Yahoo! want to put an end to the information chaos on the WWW," writes André Vatter in the blog ZBW Mediatalk. Website content is to be marked up and prepared with a special but uniform vocabulary for the search engines' crawlers. By embedding keywords, so-called tags, in the code of websites, search engines are no longer so dependent on analysing natural language in order to grasp the content of texts. In the blog this is called "Semantic Web light" - a semantic web at the lowest level. But even that will "already achieve a lot", says Bahls. "The Semantic Web will continue to evolve over the coming decades." There will never be a "conclusion", "since a uniform formalisation of concepts at a fine-grained level is hardly possible."
  16. Edmunds, J.: Roadmap to nowhere : BIBFLOW, BIBFRAME, and linked data for libraries (2017) 0.00
    0.0014944416 = product of:
      0.020922182 = sum of:
        0.020922182 = weight(_text_:web in 3523) [ClassicSimilarity], result of:
          0.020922182 = score(doc=3523,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 3523, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3523)
      0.071428575 = coord(1/14)
    
    Abstract
    On December 12, 2016, Carl Stahmer and MacKenzie Smith presented at the CNI Members Fall Meeting about the BIBFLOW project, self-described on Twitter as "a two-year project of the UC Davis University Library and Zepheira investigating the future of library technical services." In her opening remarks, Ms. Smith, University Librarian at UC Davis, stated that one of the goals of the project was to devise a roadmap "to get from where we are today, which is kind of the 1970s with a little lipstick on it, to 2020, which is where we're going to be very soon." The notion that where libraries are today is somehow behind the times is one of the commonly heard rationales behind a move to linked data. Stated more precisely: - Libraries devote considerable time and resources to producing high-quality bibliographic metadata - This metadata is stored in unconnected silos - This metadata is in a format (MARC) that is incompatible with technologies of the emerging Semantic Web - The visibility of library metadata is diminished as a result of the two points above Are these assertions true? If yes, is linked data the solution?
  17. Hardesty, J.L.; Young, J.B.: ¬The semantics of metadata : Avalon Media System and the move to RDF (2017) 0.00
    0.0014944416 = product of:
      0.020922182 = sum of:
        0.020922182 = weight(_text_:web in 3896) [ClassicSimilarity], result of:
          0.020922182 = score(doc=3896,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 3896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
      0.071428575 = coord(1/14)
    
    Abstract
    The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
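    As a rough sketch of the descriptive-metadata migration mentioned above (MODS XML to RDF), the snippet below lifts one MODS title into an RDF triple with rdflib. The dcterms predicate, the file name, and the object URI are illustrative assumptions and not Avalon's actual mapping.

    from lxml import etree
    from rdflib import Graph, Literal, Namespace, URIRef

    MODS = {"mods": "http://www.loc.gov/mods/v3"}
    DCTERMS = Namespace("http://purl.org/dc/terms/")

    mods_doc = etree.parse("item_descMetadata.xml")    # MODS file accompanying the object
    title = mods_doc.findtext(".//mods:titleInfo/mods:title", namespaces=MODS)

    g = Graph()
    obj = URIRef("http://example.org/fedora/objects/avalon:123")   # hypothetical repository resource
    g.add((obj, DCTERMS.title, Literal(title)))

    print(g.serialize(format="turtle"))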
  18. Weibel, S.L.; Koch, T.: ¬The Dublin Core Metadata Initiative : mission, current activities, and future directions (2000) 0.00
    0.001245368 = product of:
      0.017435152 = sum of:
        0.017435152 = weight(_text_:web in 1237) [ClassicSimilarity], result of:
          0.017435152 = score(doc=1237,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 1237, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1237)
      0.071428575 = coord(1/14)
    
    Abstract
    Metadata is a keystone component for a broad spectrum of applications that are emerging on the Web to help stitch together content and services and make them more visible to users. The Dublin Core Metadata Initiative (DCMI) has led the development of structured metadata to support resource discovery. This international community has, over a period of 6 years and 8 workshops, brought forth: * A core standard that enhances cross-disciplinary discovery and has been translated into 25 languages to date; * A conceptual framework that supports the modular development of auxiliary metadata components; * An open consensus building process that has brought to fruition Australian, European and North American standards with promise as a global standard for resource discovery; * An open community of hundreds of practitioners and theorists who have found a common ground of principles, procedures, core semantics, and a framework to support interoperable metadata. The 8th Dublin Core Metadata Workshop capped an active year of progress that included standardization of the 15-element core foundation and approval of an initial array of Dublin Core Qualifiers. While there is important work to be done to promote stability and increased adoption of the Dublin Core, the time has come to look beyond the core elements towards a broader metadata agenda. This report describes the new mission statement of the Dublin Core Metadata Initiative (DCMI) that supports the agenda, recapitulates the important milestones of the year 2000, outlines activities of the 8th DCMI workshop in Ottawa, and summarizes the 2001 workplan.
  19. Kunze, J.: ¬A Metadata Kernel for Electronic Permanence (2002) 0.00
    8.64828E-4 = product of:
      0.012107591 = sum of:
        0.012107591 = weight(_text_:information in 1107) [ClassicSimilarity], result of:
          0.012107591 = score(doc=1107,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 1107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1107)
      0.071428575 = coord(1/14)
    
    Source
    Journal of digital information. 2(2002) no.2
  20. Lagoze, C.; Hunter, J.: ¬The ABC Ontology and Model (2002) 0.00
    8.64828E-4 = product of:
      0.012107591 = sum of:
        0.012107591 = weight(_text_:information in 1282) [ClassicSimilarity], result of:
          0.012107591 = score(doc=1282,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 1282, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1282)
      0.071428575 = coord(1/14)
    
    Source
    Journal of digital information. 2(2002) no.2