Search (121 results, page 1 of 7)

  • theme_ss:"Metadaten"
  1. Understanding metadata (2004) 0.03
    0.026831081 = product of:
      0.13415541 = sum of:
        0.13415541 = sum of:
          0.094610326 = weight(_text_:etc in 2686) [ClassicSimilarity], result of:
            0.094610326 = score(doc=2686,freq=2.0), product of:
              0.19761753 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.036484417 = queryNorm
              0.47875473 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
          0.039545078 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
            0.039545078 = score(doc=2686,freq=2.0), product of:
              0.12776221 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.036484417 = queryNorm
              0.30952093 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
      0.2 = coord(1/5)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control and controlled vocabularies), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
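The indented breakdown under result 1 (and under the results that follow) is Lucene's "explain" output for ClassicSimilarity, the classic TF-IDF ranking: each matching query term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the sum is multiplied by a coordination factor that down-weights documents matching only some query clauses (coord(1/5) = 0.2 here). The sketch below re-derives the 0.03 score for result 1 from the constants shown in the tree; it is an illustrative Python reconstruction, not code from the retrieval system, and small rounding differences are expected.

```python
import math

# Constants read off the explain tree for result 1 (doc 2686).
MAX_DOCS   = 44218
QUERY_NORM = 0.036484417   # queryNorm reported by Lucene for this query
FIELD_NORM = 0.0625        # stored length norm for the matched field

def idf(doc_freq: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(freq: float, doc_freq: int) -> float:
    """queryWeight * fieldWeight for one matched query term."""
    tf = math.sqrt(freq)                       # tf(freq) = sqrt(termFreq)
    query_weight = idf(doc_freq) * QUERY_NORM
    field_weight = tf * idf(doc_freq) * FIELD_NORM
    return query_weight * field_weight

# Two terms matched doc 2686: "etc" (docFreq=533) and "22" (docFreq=3622),
# each with termFreq=2. The matching clause is 1 of 5, so coord = 0.2.
score = (term_score(2.0, 533) + term_score(2.0, 3622)) * 0.2
print(f"{score:.9f}")   # ~0.026831, displayed as 0.03 in the result list
```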
  2. Hsieh-Yee, I.: Cataloging and metadata education in North American LIS programs (2004) 0.03
    0.025331676 = product of:
      0.06332919 = sum of:
        0.05097135 = product of:
          0.1019427 = sum of:
            0.1019427 = weight(_text_:exercises in 138) [ClassicSimilarity], result of:
              0.1019427 = score(doc=138,freq=2.0), product of:
                0.25947425 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.036484417 = queryNorm
                0.39288178 = fieldWeight in 138, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=138)
          0.5 = coord(1/2)
        0.0123578375 = product of:
          0.024715675 = sum of:
            0.024715675 = weight(_text_:22 in 138) [ClassicSimilarity], result of:
              0.024715675 = score(doc=138,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.19345059 = fieldWeight in 138, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=138)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents findings of a survey on the state of cataloging and metadata education in ALA-accredited library and information science programs in North America. The survey was conducted in response to Action Item 5.1 of the "Bibliographic Control of Web Resources: A Library of Congress Action Plan," which focuses on providing metadata education to new LIS professionals. The study found that LIS programs had increased their reliance on introductory courses to cover cataloging and metadata, but fewer programs than before had a cataloging course requirement. The knowledge of cataloging delivered in introductory courses was basic, and the coverage of metadata was limited to an overview. Cataloging courses showed similarity in coverage and practice and focused on print materials. Few cataloging educators provided exercises in metadata record creation using non-AACR standards. Advanced cataloging courses provided in-depth coverage of subject cataloging and the cataloging of nonbook resources, but offered very limited coverage of metadata. Few programs offered full courses on metadata, and even fewer offered advanced metadata courses. Metadata topics were well integrated into LIS curricula, but coverage of metadata courses varied from program to program, depending on the interests of instructors. Educators were forward-looking and agreed on the inclusion of specific knowledge and skills in metadata instruction. A series of actions is proposed to assist educators in providing students with competencies in cataloging and metadata.
    Date
    10. 9.2000 17:38:22
  3. Miller, S.: Introduction to ontology concepts and terminology : DC-2013 Tutorial, September 2, 2013. (2013) 0.02
    0.023067001 = product of:
      0.115335 = sum of:
        0.115335 = product of:
          0.23067 = sum of:
            0.23067 = weight(_text_:exercises in 1075) [ClassicSimilarity], result of:
              0.23067 = score(doc=1075,freq=4.0), product of:
                0.25947425 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.036484417 = queryNorm
                0.88899 = fieldWeight in 1075, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1075)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    Tutorial topics and outline:
    1. Tutorial Background Overview: the Semantic Web, Linked Data, and the Resource Description Framework
    2. Ontology Basics and RDFS Tutorial: semantic modeling, domain ontologies, and RDF Vocabulary Description Language (RDFS) concepts and terminology; examples (domain ontologies, models, and schemas); exercises
    3. OWL Overview Tutorial: Web Ontology Language (OWL), selected concepts and terminology; exercises
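As a companion to the outline above, here is a minimal sketch of the kind of RDFS exercise the tutorial describes: declaring a small domain class and property with rdflib. The namespace and the Book/author terms are invented for illustration and are not part of the tutorial materials.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace for a tiny domain ontology (illustration only).
EX = Namespace("http://example.org/schema#")

g = Graph()
g.bind("ex", EX)

# Declare a class and a property with domain and range, RDFS-style.
g.add((EX.Book, RDF.type, RDFS.Class))
g.add((EX.author, RDF.type, RDF.Property))
g.add((EX.author, RDFS.domain, EX.Book))
g.add((EX.author, RDFS.range, RDFS.Literal))
g.add((EX.author, RDFS.label, Literal("author")))

print(g.serialize(format="turtle"))
```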
  4. Caplan, P.; Guenther, R.: Metadata for Internet resources : the Dublin Core Metadata Elements Set and its mapping to USMARC (1996) 0.02
    0.0221726 = product of:
      0.0554315 = sum of:
        0.027468907 = product of:
          0.054937813 = sum of:
            0.054937813 = weight(_text_:problems in 2408) [ClassicSimilarity], result of:
              0.054937813 = score(doc=2408,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.36482072 = fieldWeight in 2408, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2408)
          0.5 = coord(1/2)
        0.027962593 = product of:
          0.055925187 = sum of:
            0.055925187 = weight(_text_:22 in 2408) [ClassicSimilarity], result of:
              0.055925187 = score(doc=2408,freq=4.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.4377287 = fieldWeight in 2408, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2408)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper discusses the goals and outcome of the OCLC/NCSA Metadata Workshop held March 1-3, 1995, in Dublin, Ohio. The resulting proposed "Dublin Core" Metadata Elements Set is described briefly. An attempt is made to map the Dublin Core data elements to USMARC; problems and outstanding questions are noted.
    Date
    13. 1.2007 18:31:22
    Source
    Cataloging and classification quarterly. 22(1996) nos.3/4, S.43-58
  5. Marchiori, M.: ¬The limits of Web metadata, and beyond (1998) 0.02
    0.020516803 = product of:
      0.05129201 = sum of:
        0.03399104 = product of:
          0.06798208 = sum of:
            0.06798208 = weight(_text_:problems in 3383) [ClassicSimilarity], result of:
              0.06798208 = score(doc=3383,freq=4.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.4514426 = fieldWeight in 3383, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3383)
          0.5 = coord(1/2)
        0.017300973 = product of:
          0.034601945 = sum of:
            0.034601945 = weight(_text_:22 in 3383) [ClassicSimilarity], result of:
              0.034601945 = score(doc=3383,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.2708308 = fieldWeight in 3383, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3383)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Highlights two major problems of Web metadata: it will take some time before a reasonable number of people start using metadata to provide better Web classification, and no one can guarantee that a majority of Web objects will ever be properly classified via metadata. Addresses the problem of how to cope with the intrinsic limits of Web metadata, proposes a method to solve these problems, and shows evidence of its effectiveness. Examines the important problem of the critical mass of metadata required in the WWW for it to be really useful.
    Date
    1. 8.1996 22:08:06
  6. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part I : achieving interoperability at the schema level (2006) 0.02
    0.018693518 = product of:
      0.046733793 = sum of:
        0.017168067 = product of:
          0.034336135 = sum of:
            0.034336135 = weight(_text_:problems in 1176) [ClassicSimilarity], result of:
              0.034336135 = score(doc=1176,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.22801295 = fieldWeight in 1176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1176)
          0.5 = coord(1/2)
        0.029565725 = product of:
          0.05913145 = sum of:
            0.05913145 = weight(_text_:etc in 1176) [ClassicSimilarity], result of:
              0.05913145 = score(doc=1176,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.2992217 = fieldWeight in 1176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1176)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The rapid growth of Internet resources and digital collections has been accompanied by a proliferation of metadata schemas, each of which has been designed based on the requirements of particular user communities, intended users, types of materials, subject domains, project needs, etc. Problems arise when building large digital libraries or repositories with metadata records that were prepared according to diverse schemas. This article (published in two parts) contains an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and applications, for the purposes of facilitating conversion and exchange of metadata and enabling cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level, record level, and repository level. Part I of the article intends to explain possible situations in which metadata schemas may be created or implemented, whether in individual projects or in integrated repositories. It also discusses approaches used at the schema level. Part II of the article will discuss metadata interoperability efforts at the record and repository levels.
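A minimal sketch of what "interoperability at the schema level" can look like in practice: a crosswalk table mapping one schema's elements onto another's, applied record by record. The element names are illustrative (loosely Dublin Core mapped to MODS-style paths) rather than any published crosswalk.

```python
# A schema-level crosswalk: a mapping table from one metadata schema's
# elements to another's, applied to each record. Mappings are illustrative.
DC_TO_MODS = {
    "title": "titleInfo/title",
    "creator": "name/namePart",
    "subject": "subject/topic",
    "date": "originInfo/dateIssued",
    "identifier": "identifier",
}

def crosswalk(record: dict, mapping: dict) -> dict:
    """Convert a flat source record into target-schema element paths."""
    out = {}
    for src_field, value in record.items():
        target = mapping.get(src_field)
        if target is not None:          # drop elements with no equivalent
            out.setdefault(target, []).append(value)
    return out

dc_record = {"title": "Understanding metadata", "date": "2004"}
print(crosswalk(dc_record, DC_TO_MODS))
```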
  7. Caplan, P.; Guenther, R.: Metadata for Internet resources : the Dublin Core Metadata Elements Set and its mapping to USMARC (1996) 0.02
    0.016534507 = product of:
      0.04133627 = sum of:
        0.024035294 = product of:
          0.048070587 = sum of:
            0.048070587 = weight(_text_:problems in 6128) [ClassicSimilarity], result of:
              0.048070587 = score(doc=6128,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.31921813 = fieldWeight in 6128, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6128)
          0.5 = coord(1/2)
        0.017300973 = product of:
          0.034601945 = sum of:
            0.034601945 = weight(_text_:22 in 6128) [ClassicSimilarity], result of:
              0.034601945 = score(doc=6128,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.2708308 = fieldWeight in 6128, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6128)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Discusses the goals and outcome of the OCLC/NCSA Metadata Workshop, held in Dublin, Ohio, 1-3 Mar 95, which resulted in the proposed 'Dublin Core' Metadata Elements. Describes an attempt to map the Dublin Core data elements to the USMARC format (with particular reference to USMARC field 856 for electronic locations), noting problems and outstanding questions. The USMARC format elements considered include: subject, title, author, other-agent, publisher, publication date, identifier, object-type, form, relation, language, source, coverage, and other issues
    Series
    Cataloging and classification quarterly; vol.22, nos.3/4
  8. Hakala, J.: Dublin core in 1997 : a report from Dublin Core metadata workshops 4 & 5 (1998) 0.02
    0.016534507 = product of:
      0.04133627 = sum of:
        0.024035294 = product of:
          0.048070587 = sum of:
            0.048070587 = weight(_text_:problems in 2220) [ClassicSimilarity], result of:
              0.048070587 = score(doc=2220,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.31921813 = fieldWeight in 2220, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2220)
          0.5 = coord(1/2)
        0.017300973 = product of:
          0.034601945 = sum of:
            0.034601945 = weight(_text_:22 in 2220) [ClassicSimilarity], result of:
              0.034601945 = score(doc=2220,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.2708308 = fieldWeight in 2220, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2220)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Creation of more and better metadata, or resource descriptions, is the best means to solve the problems of massive recall and lack of precision associated with Internet information retrieval. The Dublin Core Metadata workshops aim to develop such resource descriptions. Describes the 4th workshop, held in Canberra in March 1997, and the 5th, held in Helsinki in October 1997. DC-4 dealt with element structure and the qualifiers language, scheme and type; extensibility issues; and element refinement. DC-5 dealt with element refinement and stability; definition of sub-elements and resource types; and sharing of Dublin Core implementation experiences, one of which is the Nordic Metadata project. The Nordic countries are now well prepared to implement useful new tools built by the Internet metadata community.
    Source
    Nordinfo Nytt. 1997, nos.3/4, S.10-22
  9. Wisser, K.M.; O'Brien Roper, J.: Maximizing metadata : exploring the EAD-MARC relationship (2003) 0.01
    0.011810362 = product of:
      0.029525906 = sum of:
        0.017168067 = product of:
          0.034336135 = sum of:
            0.034336135 = weight(_text_:problems in 154) [ClassicSimilarity], result of:
              0.034336135 = score(doc=154,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.22801295 = fieldWeight in 154, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=154)
          0.5 = coord(1/2)
        0.0123578375 = product of:
          0.024715675 = sum of:
            0.024715675 = weight(_text_:22 in 154) [ClassicSimilarity], result of:
              0.024715675 = score(doc=154,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.19345059 = fieldWeight in 154, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=154)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Encoded Archival Description (EAD) has provided a new way to approach manuscript and archival collection representation. A review of previous representational practices and problems highlights the benefits of using EAD. This new approach should be considered a partner rather than an adversary in the process of providing access. Technological capabilities now allow multiple metadata schemas to be employed in the creation of the finding aid. Crosswalks allow MARC records to be generated from the detailed encoding of an EAD finding aid. In the process of creating these crosswalks and detailed encoding, EAD has generated more changes in traditional processes and procedures than originally imagined. The North Carolina State University (NCSU) Libraries sought to test the process of crosswalking EAD to MARC, investigating how this process used technology as well as changed physical procedures. By creating a complex and in-depth EAD template for finding aids, with accompanying related encoding analogs embedded within the element structure, MARC records were generated that required minor editing and revision for inclusion in the NCSU Libraries OPAC. The creation of this bridge between EAD and MARC has stimulated theoretical discussions about the role of collaboration, technology, and expertise in the ongoing struggle to maximize access to our collections. While this study is only a first attempt at harnessing this potential, a presentation of the tensions, struggles, and successes illuminates some of the larger issues facing special collections today.
    Date
    10. 9.2000 17:38:22
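The workflow described above relies on EAD's encodinganalog attribute, which ties an EAD element to the MARC field it corresponds to. Below is a minimal sketch, using an invented finding-aid fragment, of how such analogs can be harvested to seed a MARC record; it is illustrative only, not the NCSU implementation.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of an EAD finding aid whose elements carry
# "encodinganalog" attributes pointing at MARC fields.
EAD_FRAGMENT = """
<ead>
  <archdesc>
    <did>
      <unittitle encodinganalog="245$a">Jane Doe papers</unittitle>
      <unitdate encodinganalog="245$f">1900-1950</unitdate>
      <origination encodinganalog="100">Doe, Jane</origination>
    </did>
  </archdesc>
</ead>
"""

def ead_to_marc_fields(ead_xml: str) -> dict:
    """Collect element text keyed by the MARC encoding analog it declares."""
    marc = {}
    for elem in ET.fromstring(ead_xml).iter():
        analog = elem.get("encodinganalog")
        if analog and elem.text and elem.text.strip():
            marc.setdefault(analog, []).append(elem.text.strip())
    return marc

print(ead_to_marc_fields(EAD_FRAGMENT))
# {'245$a': ['Jane Doe papers'], '245$f': ['1900-1950'], '100': ['Doe, Jane']}
```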
  10. Kim, H.L.; Scerri, S.; Breslin, J.G.; Decker, S.; Kim, H.G.: ¬The state of the art in tag ontologies : a semantic model for tagging and folksonomies (2008) 0.01
    0.011810362 = product of:
      0.029525906 = sum of:
        0.017168067 = product of:
          0.034336135 = sum of:
            0.034336135 = weight(_text_:problems in 2650) [ClassicSimilarity], result of:
              0.034336135 = score(doc=2650,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.22801295 = fieldWeight in 2650, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2650)
          0.5 = coord(1/2)
        0.0123578375 = product of:
          0.024715675 = sum of:
            0.024715675 = weight(_text_:22 in 2650) [ClassicSimilarity], result of:
              0.024715675 = score(doc=2650,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.19345059 = fieldWeight in 2650, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2650)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    There is growing interest in how we represent and share tagging data in collaborative tagging systems. Conventional tags, meaning freely created tags that are not associated with a structured ontology, are not naturally suited for collaborative processes, due to linguistic and grammatical variations as well as human typing errors. Additionally, tags reflect individual users' personal views of the world and are not normalised for synonymy, morphology or any other mapping. Our view is that the conventional approach provides very limited semantic value for collaboration. Moreover, in cases where there is some semantic value, automatically sharing semantics via computer manipulation is extremely problematic. This paper explores these problems by discussing approaches to collaborative tagging activities at a semantic level, and presenting conceptual models for collaborative tagging activities and folksonomies. We present criteria for the comparison of existing tag ontologies and discuss their strengths and weaknesses in relation to these criteria.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
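The tag ontologies compared in this paper generally share a tripartite model in which a tagging event links a tagger, a resource, a tag and a date. Here is a minimal sketch of that conceptual model as RDF, using an invented namespace rather than any of the surveyed vocabularies.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# TAGS is a hypothetical namespace standing in for the surveyed tag
# ontologies; the tagger/resource/tag/date pattern is the shared idea.
TAGS = Namespace("http://example.org/tags#")

g = Graph()
g.bind("tags", TAGS)

tagging = URIRef("http://example.org/tagging/1")
g.add((tagging, RDF.type, TAGS.Tagging))
g.add((tagging, TAGS.taggedBy, URIRef("http://example.org/user/alice")))
g.add((tagging, TAGS.taggedResource, URIRef("http://example.org/doc/42")))
g.add((tagging, TAGS.associatedTag, Literal("metadata")))
g.add((tagging, TAGS.taggedOn, Literal("2008-09-22", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```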
  11. Ecker, R.: ¬Das digitale Buch im Internet : Methoden der Erfassung, Aufbereitung und Bereitstellung (1998) 0.01
    0.009461033 = product of:
      0.047305163 = sum of:
        0.047305163 = product of:
          0.094610326 = sum of:
            0.094610326 = weight(_text_:etc in 1511) [ClassicSimilarity], result of:
              0.094610326 = score(doc=1511,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.47875473 = fieldWeight in 1511, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1511)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    Describes the process of digitally capturing, preparing and making available text, images, etc. by scanning.
  12. Jimenez, V.O.R.: Nuevas perspectivas para la catalogacion : metadatos ver MARC (1999) 0.01
    0.008388779 = product of:
      0.041943893 = sum of:
        0.041943893 = product of:
          0.083887786 = sum of:
            0.083887786 = weight(_text_:22 in 5743) [ClassicSimilarity], result of:
              0.083887786 = score(doc=5743,freq=4.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.6565931 = fieldWeight in 5743, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5743)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    30. 3.2002 19:45:22
    Source
    Revista Española de Documentación Científica. 22(1999) no.2, S.198-219
  13. Tammaro, A.M.: Catalogando, catalogando ... metacatalogando (1997) 0.01
    0.008278403 = product of:
      0.041392017 = sum of:
        0.041392017 = product of:
          0.082784034 = sum of:
            0.082784034 = weight(_text_:etc in 902) [ClassicSimilarity], result of:
              0.082784034 = score(doc=902,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.41891038 = fieldWeight in 902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=902)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    A crucial question for librarians is whether to catalogue Internet information sources, and electronic sources in general, which may contain metainformation about the texts of articles. Librarians can help researchers with data identification and access in four ways: making the OPAC available on the Internet; providing a complete selection of Gopher, FTP, WWW, etc. site lists; maintaining a Web site, coordinated by the library, that functions as an Internet access point; and organising access to existing search engines that do automatic indexing. Briefly reviews several metadata formats, including USMARC field 856, IAFA templates, SOIF (Harvest), TEI headers, Capcas Head and URC.
  14. Jul, E.: MARC and mark-up : different metadata containers for different purposes (2003) 0.01
    0.008278403 = product of:
      0.041392017 = sum of:
        0.041392017 = product of:
          0.082784034 = sum of:
            0.082784034 = weight(_text_:etc in 5509) [ClassicSimilarity], result of:
              0.082784034 = score(doc=5509,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.41891038 = fieldWeight in 5509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5509)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Discusses the development and implications of electronic resource description systems, including the familiar library standard, the MARC format, and the newly developing Resource Description Framework (RDF), as well as other non-library markup languages such as XML, HTML and SGML. Explains the differences between content and container, and the kinds of rules needed for describing each. Closes by outlining clearly why it is important for librarians to reach out beyond the library community and participate in the development of metadata standards.
  15. Kaparova, N.; Shwartsman, M.: Creation of the electronic resources metadatabase in Russia : problems and prospects (2000) 0.01
    0.008240673 = product of:
      0.04120336 = sum of:
        0.04120336 = product of:
          0.08240672 = sum of:
            0.08240672 = weight(_text_:problems in 5405) [ClassicSimilarity], result of:
              0.08240672 = score(doc=5405,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.5472311 = fieldWeight in 5405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5405)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
  16. Miller, S.J.: Metadata for digital collections : a how-to-do-it manual (2011) 0.01
    0.008155417 = product of:
      0.040777083 = sum of:
        0.040777083 = product of:
          0.08155417 = sum of:
            0.08155417 = weight(_text_:exercises in 4911) [ClassicSimilarity], result of:
              0.08155417 = score(doc=4911,freq=2.0), product of:
                0.25947425 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.036484417 = queryNorm
                0.31430542 = fieldWeight in 4911, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4911)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    More and more libraries, archives, and museums are creating online collections of digitized resources. Where can those charged with organizing these new collections turn for guidance on the actual practice of metadata design and creation? "Metadata for Digital Collections: A How-to-do-it Manual" is suitable for libraries, archives, and museums. This practical, hands-on volume will make it easy for readers to acquire the knowledge and skills they need, whether they use the book on the job or in a classroom. Author Steven Miller introduces readers to fundamental concepts and practices in a style accessible to beginners and LIS students, as well as experienced practitioners with little metadata training. He also takes account of the widespread use of digital collection management systems such as CONTENTdm. Rather than surveying a large number of metadata schemes, Miller covers only three of the schemes most commonly used in general digital resource description, namely, Dublin Core, MODS, and VRA. By limiting himself, Miller is able to address the chosen schemes in greater depth. He is also able to include numerous practical examples that clarify common application issues and challenges. He provides practical guidance on applying each of the Dublin Core elements, taking special care to clarify those most commonly misunderstood. The book includes a step-by-step guide on how to design and document a metadata scheme for local institutional needs and for specific digital collection projects. The text also serves well as an introduction to broader metadata topics, including XML encoding, mapping between different schemes, metadata interoperability and record sharing, OAI harvesting, and the emerging environment of Linked Data and the Semantic Web, explaining their relevance to current practitioners and students. Each chapter offers a set of exercises, with suggestions for instructors. A companion website includes additional practical and reference resources.
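For readers unfamiliar with the record formats the manual covers, here is a minimal sketch of a simple Dublin Core record serialized as oai_dc XML, the container commonly used in OAI harvesting; the element values are invented and the snippet is illustrative, not taken from the book.

```python
import xml.etree.ElementTree as ET

# Standard namespaces for oai_dc and the Dublin Core element set.
OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("oai_dc", OAI_DC)
ET.register_namespace("dc", DC)

# Invented item description (simple Dublin Core, unqualified).
record = {
    "title": "Photograph of the university library, 1922",
    "creator": "Unknown photographer",
    "date": "1922",
    "type": "Image",
    "identifier": "http://example.org/collection/item/17",
}

root = ET.Element(f"{{{OAI_DC}}}dc")
for element, value in record.items():
    ET.SubElement(root, f"{{{DC}}}{element}").text = value

print(ET.tostring(root, encoding="unicode"))
```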
  17. Hooland, S. van; Verborgh, R.: Linked data for libraries, archives and museums : how to clean, link, and publish your metadata (2014) 0.01
    0.008155417 = product of:
      0.040777083 = sum of:
        0.040777083 = product of:
          0.08155417 = sum of:
            0.08155417 = weight(_text_:exercises in 5153) [ClassicSimilarity], result of:
              0.08155417 = score(doc=5153,freq=2.0), product of:
                0.25947425 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.036484417 = queryNorm
                0.31430542 = fieldWeight in 5153, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5153)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Libraries, archives and museums are facing up to the challenge of providing access to fast-growing collections whilst managing cuts to budgets. Key to this is the creation, linking and publishing of good-quality metadata as Linked Data that will allow their collections to be discovered, accessed and disseminated in a sustainable manner. This highly practical handbook teaches you how to unlock the value of your existing metadata through cleaning, reconciliation, enrichment and linking, and how to streamline the process of new metadata creation. Metadata experts Seth van Hooland and Ruben Verborgh introduce the key concepts of metadata standards and Linked Data and how they can be practically applied to existing metadata, giving readers the tools and understanding to achieve maximum results with limited resources. Readers will learn how to critically assess and use (semi-)automated methods of managing metadata through hands-on exercises within the book and on the accompanying website. Each chapter is built around a case study from institutions around the world, demonstrating how freely available tools are being successfully used in different metadata contexts. This handbook delivers the conceptual and practical understanding practitioners need to make the right decisions when making their organisation's resources accessible on the Web. Key topics include: the value of metadata; metadata creation (architecture, data models and standards); metadata cleaning; metadata reconciliation; metadata enrichment through Linked Data and named-entity recognition; importing and exporting metadata; and ensuring a sustainable publishing model. This will be an invaluable guide for metadata practitioners and researchers within all cultural heritage contexts, from library cataloguers and archivists to museum curatorial staff. It will also be of interest to students and academics within information science and digital humanities fields. IT managers with responsibility for information systems, as well as strategy heads and budget holders at cultural heritage organisations, will find it a valuable decision-making aid.
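A minimal sketch of the clean-then-reconcile step described above: normalize messy creator strings, then match them against a small authority list. The authority entries and raw strings are invented, and the standard-library difflib stands in for the more capable reconciliation tools the handbook covers.

```python
import difflib

# Invented authority list (e.g., preferred name forms).
AUTHORITY = ["Rembrandt van Rijn", "Vincent van Gogh", "Johannes Vermeer"]

def clean(name: str) -> str:
    """Collapse whitespace and trim stray punctuation left by data entry."""
    return " ".join(name.replace(";", " ").split()).strip(" .,")

def reconcile(name: str, authority=AUTHORITY, cutoff=0.75):
    """Return the closest authority form, or None if nothing is close enough."""
    matches = difflib.get_close_matches(clean(name), authority, n=1, cutoff=cutoff)
    return matches[0] if matches else None

raw_creators = ["Rembrandt van  Rijn.", "Vincent van Gogh,", "Johannes  Vermeer ", "Pieter de Hooch"]
for raw in raw_creators:
    print(f"{raw!r:24} -> {reconcile(raw)}")
```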
  18. Andresen, L.: Metadata in Denmark (2000) 0.01
    0.007909016 = product of:
      0.039545078 = sum of:
        0.039545078 = product of:
          0.079090156 = sum of:
            0.079090156 = weight(_text_:22 in 4899) [ClassicSimilarity], result of:
              0.079090156 = score(doc=4899,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.61904186 = fieldWeight in 4899, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4899)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    16. 7.2000 20:58:22
  19. MARC and metadata : METS, MODS, and MARCXML: current and future implications (2004) 0.01
    0.007909016 = product of:
      0.039545078 = sum of:
        0.039545078 = product of:
          0.079090156 = sum of:
            0.079090156 = weight(_text_:22 in 2840) [ClassicSimilarity], result of:
              0.079090156 = score(doc=2840,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.61904186 = fieldWeight in 2840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=2840)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Library hi tech. 22(2004) no.1
  20. Bueno-de-la-Fuente, G.; Hernández-Pérez, T.; Rodríguez-Mateos, D.; Méndez-Rodríguez, E.M.; Martín-Galán, B.: Study on the use of metadata for digital learning objects in University Institutional Repositories (MODERI) (2009) 0.01
    0.007095774 = product of:
      0.03547887 = sum of:
        0.03547887 = product of:
          0.07095774 = sum of:
            0.07095774 = weight(_text_:etc in 2981) [ClassicSimilarity], result of:
              0.07095774 = score(doc=2981,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.35906604 = fieldWeight in 2981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2981)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Metadata is a core issue for the creation of repositories. Different institutional repositories have chosen and use different metadata models, elements and values for describing the range of digital objects they store. This paper therefore analyzes the current use of metadata to describe the Learning Objects that some open higher-education institutions' repositories include in their collections. The goal of this work is to identify and analyze the different metadata models being used to describe the educational features of these digital educational objects (such as audience, type of educational material, learning objectives, etc.). Also discussed are the concept and typology of Learning Objects (LOs) and their use in university repositories. We also examine the usefulness of specifically describing these learning objects, setting them apart from other kinds of documents included in the repository, mainly scholarly publications and research results of the higher-education institution.
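A minimal sketch of the kind of educational metadata the study examines for a learning object; the field names loosely follow the educational category of IEEE LOM but are simplified, and all values are invented.

```python
# Illustrative learning-object record with educational descriptors.
learning_object = {
    "title": "Introduction to Descriptive Metadata",
    "format": "application/pdf",
    "educational": {
        "intended_audience": "undergraduate students",
        "learning_resource_type": "lecture notes",
        "learning_objectives": [
            "Distinguish descriptive, structural and administrative metadata",
            "Create a simple Dublin Core record",
        ],
        "typical_learning_time": "PT2H",   # ISO 8601 duration: two hours
    },
}

# Repository-side check: which educational descriptors are actually filled in?
filled = [k for k, v in learning_object["educational"].items() if v]
print(f"{len(filled)} of {len(learning_object['educational'])} educational fields populated")
```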

Languages

  • e 105
  • d 12
  • i 1
  • nl 1
  • sp 1

Types

  • a 108
  • el 12
  • m 6
  • s 6
  • b 2
  • p 1