Search (206 results, page 1 of 11)

  • Filter: theme_ss:"Metadaten"
  1. Brugger, J.M.: Cataloging for digital libraries (1996) 0.04
    0.04451662 = product of:
      0.11129155 = sum of:
        0.05464649 = weight(_text_:it in 3689) [ClassicSimilarity], result of:
          0.05464649 = score(doc=3689,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.36153275 = fieldWeight in 3689, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0625 = fieldNorm(doc=3689)
        0.05664506 = weight(_text_:22 in 3689) [ClassicSimilarity], result of:
          0.05664506 = score(doc=3689,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.30952093 = fieldWeight in 3689, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=3689)
      0.4 = coord(2/5)
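The explain tree above is plain TF-IDF arithmetic and can be checked by hand. A minimal sketch of Lucene's ClassicSimilarity term score, using the constants from the explanation (queryNorm is taken as given, since it depends on all five query terms, not just the two that matched):

```python
import math

def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """One term's contribution under Lucene's ClassicSimilarity (TF-IDF)."""
    tf = math.sqrt(freq)                             # e.g. tf(freq=4.0) = 2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # e.g. idf(docFreq=6664, maxDocs=44218) = 2.892262
    query_weight = idf * query_norm                  # query-side normalization
    field_weight = tf * idf * field_norm             # document-side weight
    return query_weight * field_weight

QUERY_NORM = 0.052260913  # taken from the explain output; depends on the whole query
FIELD_NORM = 0.0625       # length norm for doc 3689, as shown above

score = (term_score(4.0, 6664, 44218, FIELD_NORM, QUERY_NORM)    # _text_:it
         + term_score(2.0, 3622, 44218, FIELD_NORM, QUERY_NORM)  # _text_:22
         ) * 2 / 5                                               # coord(2/5)
```

Evaluating this reproduces the 0.04451662 shown for result 1 to floating-point precision.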
    
    Abstract
    Using grant funding, some prominent creators of digital libraries have promised users of networked resources certain kinds of access. Some of this access finds a ready-made vehicle in USMARC, some of it in the TEI header, and some of it has yet to find the most appropriate vehicle. In its quest to provide access to what users need, the cataloging community can show leadership by exploring the strength inherent in a metadata-providing system such as the TEI header.
    Source
    Cataloging and classification quarterly. 22(1996) nos.3/4, S.59-73
  2. McCallum, S.H.: ¬An introduction to the Metadata Object Description Schema (MODS) (2004) 0.04
    Abstract
    This paper provides an introduction to the Metadata Object Description Schema (MODS), a MARC21 compatible XML schema for descriptive metadata. It explains the requirements that the schema targets and the special features that differentiate it from MARC, such as user-oriented tags, regrouped data elements, linking, recursion, and accommodations for electronic resources.
    Source
    Library hi tech. 22(2004) no.1, S.82-88
  3. Wusteman, J.: Whither HTML? (2004) 0.04
    Abstract
    HTML has reinvented itself as an XML application. The working draft of the latest version, XHTML 2.0, is causing controversy due to its lack of backward compatibility and the deprecation - and in some cases disappearance - of some popular tags. But is this commotion distracting us from the big picture of what XHTML has to offer? Where is HTML going? And is it taking the Web community with it?
    Source
    Library hi tech. 22(2004) no.1, S.99-105
  4. Marchiori, M.: ¬The limits of Web metadata, and beyond (1998) 0.04
    Abstract
    Highlights two major problems of Web metadata: it will take some time before a reasonable number of people start using metadata to provide a better Web classification, and no one can guarantee that a majority of Web objects will ever be properly classified via metadata. Addresses the problem of how to cope with the intrinsic limits of Web metadata, proposes a method to solve these problems, and shows evidence of its effectiveness. Examines the important question of what critical mass is required in the WWW for metadata to be really useful.
    Date
    1. 8.1996 22:08:06
  5. Proffitt, M.: Pulling it all together : use of METS in RLG cultural materials service (2004) 0.04
    Source
    Library hi tech. 22(2004) no.1, S.65-68
  6. Cundiff, M.V.: ¬An introduction to the Metadata Encoding and Transmission Standard (METS) (2004) 0.04
    Abstract
    This article provides an introductory overview of the Metadata Encoding and Transmission Standard, better known as METS. It will be of most use to librarians and technical staff who are encountering METS for the first time. The article contains a brief history of the development of METS, a primer covering the basic structure and content of METS documents, and a discussion of several issues relevant to the implementation and continuing development of METS including object models, extension schemata, and application profiles.
    Source
    Library hi tech. 22(2004) no.1, S.52-64
  7. Guenther, R.S.: Using the Metadata Object Description Schema (MODS) for resource description : guidelines and applications (2004) 0.03
    Abstract
    This paper describes the Metadata Object Description Schema (MODS), its accompanying documentation and some of its applications. It reviews the MODS user guidelines provided by the Library of Congress and how they enable a user of the schema to consistently apply MODS as a metadata scheme. Because the schema itself could not fully document appropriate usage, the guidelines provide element definitions, history, relationships with other elements, usage conventions, and examples. Short descriptions of some MODS applications are given and a more detailed discussion of its use in the Library of Congress's Minerva project for Web archiving is given.
    Source
    Library hi tech. 22(2004) no.1, S.89-98
  8. Lubas, R.L.; Wolfe, R.H.W.; Fleischman, M.: Creating metadata practices for MIT's OpenCourseWare Project (2004) 0.03
    Abstract
    The MIT libraries were called upon to recommend a metadata scheme for the resources contained in MIT's OpenCourseWare (OCW) project. The resources in OCW needed descriptive, structural, and technical metadata. The SCORM standard, which uses IEEE Learning Object Metadata for its descriptive standard, was selected for its focus on educational objects. However, it was clear that the Libraries would need to recommend how the standard would be applied and adapted to accommodate needs that were not addressed in the standard's specifications. The newly formed MIT Libraries Metadata Unit adapted established practices from AACR2 and MARC traditions when facing situations in which there were no precedents to follow.
    Source
    Library hi tech. 22(2004) no.2, S.138-143
  9. Carvalho, J.R. de; Cordeiro, M.I.; Lopes, A.; Vieira, M.: Meta-information about MARC : an XML framework for validation, explanation and help systems (2004) 0.03
    Abstract
    This article proposes a schema for meta-information about MARC that can express at a fairly comprehensive level the syntactic and semantic aspects of MARC formats in XML, including not only rules but also all texts and examples that are conveyed by MARC documentation. It can be thought of as an XML version of the MARC or UNIMARC manuals, for both machine and human usage. The article explains how such a schema can be the central piece of a more complete framework, to be used in conjunction with "slim" record formats, providing a rich environment for the automated processing of bibliographic data.
    Source
    Library hi tech. 22(2004) no.2, S.131-137
  10. Warner, S.: E-prints and the Open Archives Initiative (2003) 0.03
    Abstract
    The Open Archives Initiative (OAI) was created as a practical way to promote interoperability between e-print repositories. Although the scope of the OAI has been broadened, e-print repositories still represent a significant fraction of OAI data providers. This article presents a brief survey of OAI e-print repositories, and of services using metadata harvested from e-print repositories using the OAI protocol for metadata harvesting (OAI-PMH). It then discusses several situations where metadata harvesting may be used to further improve the utility of e-print archives as a component of the scholarly communication infrastructure.
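The harvesting step the article surveys amounts to an HTTP GET against a repository's base URL with a handful of standard OAI-PMH parameters. A minimal sketch that only constructs the request URL (the endpoint below is a placeholder; oai_dc is the Dublin Core prefix every OAI-PMH repository must support):

```python
from urllib.parse import urlencode

def list_records_url(base_url, metadata_prefix="oai_dc", resumption_token=None):
    """Build an OAI-PMH ListRecords request URL."""
    params = {"verb": "ListRecords"}
    if resumption_token:
        # Resuming a partial result set: the token replaces all other arguments.
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = metadata_prefix
    return f"{base_url}?{urlencode(params)}"

url = list_records_url("https://example.org/oai")  # placeholder endpoint
```

A harvester would fetch this URL, parse the XML response, and follow resumptionToken values until the result set is exhausted.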
    Date
    18.12.2005 13:18:22
  11. Pfister, E.; Wittwer, B.; Wolff, M.: Metadaten - Manuelle Datenpflege vs. Automatisieren : ein Praxisbericht zu Metadatenmanagement an der ETH-Bibliothek (2017) 0.03
    Abstract
    New developments in librarianship and in technology lead to new tasks that require specialized staff. The ETH-Bibliothek responded by launching the pilot project "Metadatenmanagement", which dealt with data analyses, data loads and modifications, data mappings, the creation of a data-flow diagram, and the introduction of RDA and GND. After two years it became clear that there are numerous areas of work that can be taken over by metadata specialists acting as the interface between the subject departments and IT. This report summarizes the work carried out and the experiences and insights gained during the two-year pilot phase.
    Source
    B.I.T.online. 20(2017) H.1, S.22-25
  12. Baker, T.: ¬A grammar of Dublin Core (2000) 0.03
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
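The "statements" grammar sketched in the abstract can be made concrete as data: each statement pairs an element (a "noun") with an optional qualifier (an "adjective") and a value. A minimal illustration; the record and its qualifier values are invented for this example, not taken from the paper:

```python
# A Dublin Core description as a list of (element, qualifier, value) statements.
statements = [
    ("title",   None,    "A grammar of Dublin Core"),
    ("creator", None,    "Baker, T."),
    ("date",    "issued", "2000"),
    ("subject", "LCSH",  "Metadata"),
]

def render(stmts):
    """Render statements in a simple 'element.qualifier: value' pattern."""
    lines = []
    for element, qualifier, value in stmts:
        term = element if qualifier is None else f"{element}.{qualifier}"
        lines.append(f"{term}: {value}")
    return "\n".join(lines)

print(render(statements))
```

Because every element is optional and repeatable, any subset of such statements is still a valid description, which is exactly the "pidgin" property the paper describes.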
    Date
    26.12.2011 14:01:22
  13. Catarino, M.E.; Baptista, A.A.: Relating folksonomies with Dublin Core (2008) 0.03
    Abstract
    Folksonomy is the result of describing Web resources with tags created by Web users. Although it has become a popular application for the description of resources, in general terms Folksonomies are not being conveniently integrated in metadata. However, if the appropriate metadata elements are identified, then further work may be conducted to automatically assign tags to these elements (RDF properties) and use them in Semantic Web applications. This article presents research carried out to continue the project Kinds of Tags, which intends to identify elements required for metadata originating from folksonomies and to propose an application profile for DC Social Tagging. The work provides information that may be used by software applications to assign tags to metadata elements and, therefore, means for tags to be conveniently gathered by metadata interoperability tools. Despite the unquestionably high value of DC and the significance of the already existing properties in DC Terms, the pilot study revealed a significant number of tags for which no corresponding properties yet existed. A need for new properties, such as Action, Depth, Rate, and Utility was determined. Those potential new properties will have to be validated in a later stage by the DC Social Tagging Community.
    Pages
    S.14-22
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  14. Keith, C.: Using XSLT to manipulate MARC metadata (2004) 0.03
    Abstract
    This paper describes the MARCXML architecture implemented at the Library of Congress. It gives an overview of the component pieces of the architecture, including the MARCXML schema and the MARCXML toolkit, while giving a brief tutorial on their use. Several different applications of the architecture and tools are discussed to illustrate the features of the toolkit developed thus far. Nearly any metadata format can take advantage of the features of the toolkit, and the process of enabling a new format in the toolkit is discussed. Finally, this paper intends to foster new ideas with regard to the transformation of descriptive metadata, especially using XML tools. In this paper the following conventions will be used: MARC21 will refer to MARC 21 records in the ISO 2709 record structure used today; MARCXML will refer to MARC 21 records in an XML structure.
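For readers new to MARCXML, the structure such transformations operate on looks roughly like the invented fragment below, and any XML tooling can address a datafield/subfield pair (real conversions would use the Library of Congress stylesheets rather than this sketch):

```python
import xml.etree.ElementTree as ET

# A minimal, invented MARCXML fragment: field 245 (title statement), subfield a.
MARCXML = """\
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam a2200000 a 4500</leader>
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">Using XSLT to manipulate MARC metadata</subfield>
  </datafield>
</record>"""

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfield(record_xml, tag, code):
    """Return the first matching subfield value, or None if absent."""
    root = ET.fromstring(record_xml)
    path = f"marc:datafield[@tag='{tag}']/marc:subfield[@code='{code}']"
    el = root.find(path, NS)
    return None if el is None else el.text

title = subfield(MARCXML, "245", "a")
```

An XSLT stylesheet expresses the same kind of addressing declaratively, which is what makes MARCXML a convenient hub format for crosswalks.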
    Source
    Library hi tech. 22(2004) no.2, S.122-130
  15. Margaritopoulos, T.; Margaritopoulos, M.; Mavridis, I.; Manitsaris, A.: ¬A conceptual framework for metadata quality assessment (2008) 0.03
    Abstract
    Metadata quality of digital resources in a repository is an issue directly associated with the repository's efficiency and value. In this paper, the subject of metadata quality is approached by introducing a new conceptual framework that defines it in terms of its fundamental components. Additionally, a method for assessing these components by exploiting structural and semantic relations among the resources is presented. These relations can be used to generate implied logic rules, which include, impose or prohibit certain values in the fields of a metadata record. The use of such rules can serve as a tool for conducting quality control in the records, in order to diagnose deficiencies and errors.
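The "implied logic rules" idea can be sketched as predicates over a metadata record: each rule imposes or prohibits values in certain fields, and quality control means collecting the rules a record violates. The field names and rules below are invented for illustration, not taken from the paper:

```python
# Each rule is (description, predicate); a record passes if every predicate holds.
RULES = [
    ("a record must carry a title",
     lambda r: bool(r.get("title"))),
    ("a 'format: video' record must state a duration",
     lambda r: r.get("format") != "video" or "duration" in r),
    ("language codes must be two letters",
     lambda r: "language" not in r or len(r["language"]) == 2),
]

def check(record):
    """Return the descriptions of all rules the record violates."""
    return [desc for desc, ok in RULES if not ok(record)]

violations = check({"title": "Clip", "format": "video", "language": "eng"})
```

In the framework's terms, the second rule is an "imposed" value relation between two fields, and running `check` over a repository surfaces deficiencies record by record.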
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  16. Baker, T.: Dublin Core Application Profiles : current approaches (2010) 0.03
    Abstract
    The Dublin Core Metadata Initiative currently defines a Dublin Core Application Profile as a set of specifications about the metadata design of a particular application or for a particular domain or community of users. The current approach to application profiles is summarized in the Singapore Framework for Application Profiles [SINGAPORE-FRAMEWORK] (see Figure 1). While the approach originally developed as a means of specifying customized applications based on the fifteen elements of the Dublin Core Element Set (e.g., Title, Date, Subject), it has evolved into a generic approach to creating metadata that meets specific local requirements while integrating coherently with other RDF-based metadata.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
  17. Jimenez, V.O.R.: Nuevas perspectivas para la catalogación : metadatos ver MARC (1999) 0.02
    Date
    30. 3.2002 19:45:22
    Source
    Revista Española de Documentación Científica. 22(1999) no.2, S.198-219
  18. Toth, M.B.; Emery, D.: Applying DCMI elements to digital images and text in the Archimedes Palimpsest Program (2008) 0.02
    Abstract
    The digitized version of the only extant copy of Archimedes' key mathematical and scientific works contains over 6,500 images and 130 pages of transcriptions. Metadata is essential for managing, integrating and accessing these digital resources in the Web 2.0 environment. The Dublin Core Metadata Element Set meets many of our needs. It offers the needed flexibility and applicability to a variety of data sets containing different texts and images in a dynamic technical environment. The program team has continued to refine its data dictionary and elements based on the Dublin Core standard and feedback from the Dublin Core community since the 2006 Dublin Core Conference. This presentation cites the application and utility of the DCMI Standards during the final phase of this decade-long program. Since the 2006 conference, the amount of data has grown tenfold with new imaging techniques. Use of the DCMI Standards for integration across digital images and transcriptions will allow the hosting and integration of this data set and other cultural works across service providers, libraries and cultural institutions.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  19. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.02
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
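    The CSV-population step described in entry 19 can be sketched as follows. The record-flattening logic and the Dublin Core column names are assumptions for illustration, not the authors' actual Waterloo script; only the shape of the CrossRef "message" record (as returned by Habanero's Crossref().works(ids=...)) is taken from the CrossRef API.

    ```python
    # Hypothetical sketch: flattening a CrossRef work record into one row of a
    # repository batch-import CSV. Column names are illustrative assumptions.
    import csv
    import io

    def crossref_to_row(message):
        """Map a CrossRef 'message' dict to a flat dict for one CSV row."""
        authors = message.get("author", [])
        date_parts = (message.get("issued", {}).get("date-parts") or [[]])[0]
        return {
            "dc.title": (message.get("title") or [""])[0],
            "dc.contributor.author": "; ".join(
                f"{a.get('family', '')}, {a.get('given', '')}" for a in authors
            ),
            "dc.identifier.doi": message.get("DOI", ""),
            "dc.date.issued": "-".join(str(p) for p in date_parts),
        }

    def rows_to_csv(rows, fieldnames):
        """Serialize flattened rows to CSV text for batch upload."""
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()

    # With the habanero package installed, records would come from the API, e.g.:
    #   from habanero import Crossref
    #   message = Crossref().works(ids="10.1000/example")["message"]
    ```

    Keeping the flattening step as a pure function makes the harvest reproducible for support staff: the same template works whether the input records come from the CrossRef API or from an enrichment source such as a Web of Science TSV export.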
  20. Andresen, L.: Metadata in Denmark (2000) 0.02
    Date
    16. 7.2000 20:58:22

Types

  • a 180
  • el 27
  • m 15
  • s 8
  • b 2
  • x 2

Subjects