Search (205 results, page 1 of 11)

  • theme_ss:"Metadaten"
  1. Hooland, S. van; Bontemps, Y.; Kaufman, S.: Answering the call for more accountability : applying data profiling to museum metadata (2008) 0.05
    0.052644387 = product of:
      0.1228369 = sum of:
        0.031131983 = weight(_text_:management in 2644) [ClassicSimilarity], result of:
          0.031131983 = score(doc=2644,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.22344214 = fieldWeight in 2644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2644)
        0.07490338 = weight(_text_:case in 2644) [ClassicSimilarity], result of:
          0.07490338 = score(doc=2644,freq=4.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.41216385 = fieldWeight in 2644, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=2644)
        0.016801544 = product of:
          0.033603087 = sum of:
            0.033603087 = weight(_text_:22 in 2644) [ClassicSimilarity], result of:
              0.033603087 = score(doc=2644,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.23214069 = fieldWeight in 2644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2644)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
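    The tree above is Lucene's ClassicSimilarity "explain" output: for each matching term, tf = sqrt(freq), queryWeight = idf * queryNorm, and fieldWeight = tf * idf * fieldNorm, with coord factors rewarding the share of query clauses matched. A minimal Python sketch (values copied from the explanation above) reproduces the 0.052644387 score up to float rounding:

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

      QUERY_NORM, FIELD_NORM = 0.041336425, 0.046875
      partial = (
          term_score(2.0, 3.3706124, QUERY_NORM, FIELD_NORM)          # "management"
          + term_score(4.0, 4.3964143, QUERY_NORM, FIELD_NORM)        # "case"
          + term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM) * 0.5  # "22", coord(1/2)
      )
      print(partial * 3 / 7)  # coord(3/7) -> ~0.0526444, matching the score above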
    
    Abstract
    Although the issue of metadata quality is recognized as an important topic within the metadata research community, the cultural heritage sector has been slow to develop methodologies, guidelines, and tools for addressing this topic in practice. This paper concentrates on metadata quality specifically within the museum sector and describes the potential of data-profiling techniques for metadata quality evaluation. A case study illustrates the application of a general-purpose data-profiling tool to a large collection of metadata records from an ethnographic collection. After an analysis of the results of the case study, the paper reviews further steps in our research and presents the implementation of a metadata quality tool within open-source collection management software.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
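    The data-profiling idea in the abstract above can be sketched minimally in Python: profile field completeness and frequent values over an exported set of records. The CSV file name and layout (one row per record) are assumptions for illustration:

      import csv
      from collections import Counter

      def profile(path):
          # Per-field fill rate and most frequent values - the kind of
          # summary a general-purpose data-profiling tool reports.
          with open(path, newline="", encoding="utf-8") as fh:
              rows = list(csv.DictReader(fh))
          for field in rows[0]:
              values = [r[field].strip() for r in rows if r[field] and r[field].strip()]
              print(f"{field}: {len(values) / len(rows):.0%} filled, "
                    f"top values {Counter(values).most_common(3)}")

      profile("museum_records.csv")  # hypothetical export of an ethnographic collection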
  2. Kleeck, D. Van; Langford, G.; Lundgren, J.; Nakano, H.; O'Dell, A.J.; Shelton, T.: Managing bibliographic data quality in a consortial academic library : a case study (2016) 0.04
    0.039643552 = product of:
      0.13875243 = sum of:
        0.051365152 = weight(_text_:management in 5133) [ClassicSimilarity], result of:
          0.051365152 = score(doc=5133,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.36866072 = fieldWeight in 5133, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5133)
        0.08738727 = weight(_text_:case in 5133) [ClassicSimilarity], result of:
          0.08738727 = score(doc=5133,freq=4.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.48085782 = fieldWeight in 5133, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5133)
      0.2857143 = coord(2/7)
    
    Abstract
    This article presents a case study of quality management for print and electronic resource metadata, summarizing problems and solutions encountered by the Cataloging and Discovery Services Department in the George A. Smathers Libraries at the University of Florida. The authors discuss national, state, and local standards for cataloging, automated and manual record enhancements for data, user feedback, and statewide consortial factors. Findings show that adherence to standards, proactive cleanup of data via manual processes and automated tools, collaboration with vendors and stakeholders, and continual assessment of workflows are key to the management of bibliographic data quality in consortial academic libraries.
  3. Chen, Y.N.; Chen, S.J.: ¬A metadata practice of the IFLA FRBR model : a case study for the National Palace Museum in Taipei (2004) 0.04
    0.035105575 = product of:
      0.12286951 = sum of:
        0.031131983 = weight(_text_:management in 3384) [ClassicSimilarity], result of:
          0.031131983 = score(doc=3384,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.22344214 = fieldWeight in 3384, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=3384)
        0.09173752 = weight(_text_:case in 3384) [ClassicSimilarity], result of:
          0.09173752 = score(doc=3384,freq=6.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.50479555 = fieldWeight in 3384, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=3384)
      0.2857143 = coord(2/7)
    
    Abstract
    In 1998, the Functional Requirements for Bibliographic Records (FRBR) model, which is composed of four entities (work, expression, manifestation, and item) and their associative relationships (primary, responsibility, and subject), was proposed by the International Federation of Library Associations and Institutions (IFLA). The FRBR model can be deployed as a logical framework for conducting metadata analysis and developing metadata formats. This paper presents a case study of the National Palace Museum (NPM) in Taipei to examine the feasibility of the FRBR model. Based on the examination of the case study at the NPM, the FRBR model proves to be a useful and fundamental framework for metadata analysis and implementation. Findings show that the FRBR model is helpful in identifying the proper organization of metadata elements and their distribution over the FRBR entities. The model is most suitable for media-centric and association-rich contents. However, refining the FRBR model into a common framework for metadata would also require supportive mechanisms for managing responsibility relationships in workflows, as well as a sharper distinction between the work and expression entities.
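    To make the four-entity structure concrete, here is a minimal sketch of the FRBR entities and their primary relationships as Python dataclasses; the attribute names and the sample work are illustrative, not taken from the paper:

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Item:                  # a single exemplar of a manifestation
          location: str

      @dataclass
      class Manifestation:         # a physical embodiment of an expression
          carrier: str
          items: List[Item] = field(default_factory=list)

      @dataclass
      class Expression:            # a realization of a work
          form: str
          manifestations: List[Manifestation] = field(default_factory=list)

      @dataclass
      class Work:                  # a distinct intellectual or artistic creation
          title: str
          creator: str             # "responsibility" relationship
          subjects: List[str] = field(default_factory=list)            # "subject"
          expressions: List[Expression] = field(default_factory=list)  # "primary"

      painting = Work("Travelers Among Mountains and Streams", "Fan Kuan",
                      subjects=["landscape painting"],
                      expressions=[Expression("still image",
                          [Manifestation("digital scan", [Item("NPM image archive")])])])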
  4. Chivers, A.; Feather, J.: ¬The management of digital data : a metadata approach (1998) 0.03
    0.031353693 = product of:
      0.10973792 = sum of:
        0.07337879 = weight(_text_:management in 2363) [ClassicSimilarity], result of:
          0.07337879 = score(doc=2363,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.5266582 = fieldWeight in 2363, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.078125 = fieldNorm(doc=2363)
        0.03635913 = product of:
          0.07271826 = sum of:
            0.07271826 = weight(_text_:studies in 2363) [ClassicSimilarity], result of:
              0.07271826 = score(doc=2363,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.44086722 = fieldWeight in 2363, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2363)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Reports on a research study, conducted at the Department of Information and Library Studies, Loughborough University, to investigate the potential of metadata for universal data management and to explore the attitudes of UK information professionals towards these issues.
  5. Tallerås, K.; Massey, D.; Husevåg, A.-S.R.; Preminger, M.; Pharo, N.: Evaluating (linked) metadata transformations across cultural heritage domains (2014) 0.03
    0.03021575 = product of:
      0.10575512 = sum of:
        0.07490338 = weight(_text_:case in 1588) [ClassicSimilarity], result of:
          0.07490338 = score(doc=1588,freq=4.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.41216385 = fieldWeight in 1588, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=1588)
        0.030851744 = product of:
          0.06170349 = sum of:
            0.06170349 = weight(_text_:studies in 1588) [ClassicSimilarity], result of:
              0.06170349 = score(doc=1588,freq=4.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.37408823 = fieldWeight in 1588, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1588)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    This paper describes an approach to the evaluation of different aspects of the transformation of existing metadata into Linked Data-compliant knowledge bases. At Oslo and Akershus University College of Applied Sciences, in the TORCH project, we are working on three different experimental case studies on the extraction and mapping of broadcasting data and the interlinking of these data with transformed library data. The case studies investigate problems of heterogeneity and ambiguity in and between the domains, as well as problems arising in the interlinking process. The proposed approach makes it possible to collaborate on evaluation across different experiments, and to rationalize and streamline the process.
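    A minimal sketch of the kind of transformation under evaluation, using Python's rdflib to turn a flat record into Linked Data triples and to interlink it with a library-side entity; all URIs and field names here are invented for illustration:

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DCTERMS, OWL

      EX = Namespace("http://example.org/broadcast/")
      record = {"id": "prog42", "title": "Evening News", "contributor": "NRK"}

      g = Graph()
      prog = EX[record["id"]]
      g.add((prog, DCTERMS.title, Literal(record["title"])))
      g.add((prog, DCTERMS.contributor, Literal(record["contributor"])))
      # Interlinking step: assert identity with a transformed library record
      g.add((prog, OWL.sameAs, URIRef("http://example.org/library/work/99")))
      print(g.serialize(format="turtle"))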
  6. Chen, Y.-n.; Chen, S.-j.: ¬A metadata practice of the IFLA FRBR model : a case study for the National Palace Museum in Taipei (2004) 0.03
    0.029254649 = product of:
      0.102391265 = sum of:
        0.025943318 = weight(_text_:management in 4436) [ClassicSimilarity], result of:
          0.025943318 = score(doc=4436,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.18620178 = fieldWeight in 4436, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4436)
        0.07644795 = weight(_text_:case in 4436) [ClassicSimilarity], result of:
          0.07644795 = score(doc=4436,freq=6.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.420663 = fieldWeight in 4436, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4436)
      0.2857143 = coord(2/7)
    
    Abstract
    In 1998, the Functional Requirements for Bibliographic Records (FRBR) model, which is composed of four entities (work, expression, manifestation, and item) and their associative relationships (primary, responsibility, and subject), was proposed by the International Federation of Library Associations and Institutions (IFLA). The FRBR model can be deployed as a logical framework for conducting metadata analysis and developing metadata formats. This paper presents a case study of the National Palace Museum (NPM) in Taipei to examine the feasibility of the FRBR model. Based on the examination of the case study at the NPM, the FRBR model proves to be a useful and fundamental framework for metadata analysis and implementation. Findings show that the FRBR model is helpful in identifying the proper organization of metadata elements and their distribution over the FRBR entities. The model is most suitable for media-centric and association-rich contents. However, refining the FRBR model into a common framework for metadata would also require supportive mechanisms for managing responsibility relationships in workflows, as well as a sharper distinction between the work and expression entities.
  7. Kleeck, D. Van; Nakano, H.; Langford, G.; Shelton, T.; Lundgren, J.; O'Dell, A.J.: Managing bibliographic data quality for electronic resources (2017) 0.03
    0.028032223 = product of:
      0.09811278 = sum of:
        0.036320645 = weight(_text_:management in 5160) [ClassicSimilarity], result of:
          0.036320645 = score(doc=5160,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.2606825 = fieldWeight in 5160, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5160)
        0.061792135 = weight(_text_:case in 5160) [ClassicSimilarity], result of:
          0.061792135 = score(doc=5160,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.34001783 = fieldWeight in 5160, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5160)
      0.2857143 = coord(2/7)
    
    Abstract
    This article presents a case study of quality management issues for electronic resource metadata, assessing the support of user tasks (find, select, and obtain library resources) and the potential for increased efficiencies in acquisitions and cataloging workflows. The authors evaluated the quality of existing bibliographic records (mostly vendor supplied) for e-resource collections as compared with records for the same collections in OCLC's WorldShare Collection Manager (WCM). Findings are that WCM records better support user tasks by containing more summaries and tables of contents; other checkpoints are largely comparable between the two source record groups. The transition to WCM records is discussed.
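    The checkpoint comparison described above can be sketched as a simple tally of how often each record group carries a given MARC field, e.g. 520 (summary) and 505 (table of contents); the sample records are fabricated:

      def checkpoint_rates(records, tags=("520", "505")):
          # Share of records in a group that contain each checkpoint field
          return {t: sum(t in r for r in records) / len(records) for t in tags}

      vendor = [{"245", "856"}, {"245", "520"}, {"245"}]             # vendor-supplied
      wcm = [{"245", "520", "505"}, {"245", "520"}, {"245", "505"}]  # WCM records

      print("vendor:", checkpoint_rates(vendor))
      print("WCM:   ", checkpoint_rates(wcm))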
  8. Heng, G.; Cole, T.W.; Tian, T.(C.); Han, M.-J.: Rethinking authority reconciliation process (2022) 0.03
    0.028032223 = product of:
      0.09811278 = sum of:
        0.036320645 = weight(_text_:management in 727) [ClassicSimilarity], result of:
          0.036320645 = score(doc=727,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.2606825 = fieldWeight in 727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=727)
        0.061792135 = weight(_text_:case in 727) [ClassicSimilarity], result of:
          0.061792135 = score(doc=727,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.34001783 = fieldWeight in 727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=727)
      0.2857143 = coord(2/7)
    
    Abstract
    Entity identity management and name reconciliation are intrinsic to both Linked Open Data (LOD) and traditional library authority control. Does this mean that LOD sources can facilitate authority control? This Emblematica Online case study examines the utility of five LOD sources for name reconciliation, comparing design differences regarding ontologies, linking models, and entity properties. It explores the challenges of name reconciliation in the LOD environment and provides lessons learned during a semi-automated name reconciliation process. It also briefly discusses the potential values and benefits of LOD authorities to the authority reconciliation process itself and library services in general.
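    As a rough sketch of semi-automated name reconciliation against one LOD source - Wikidata here, standing in for the five sources the study compares - using its public entity-search API; a human still reviews the returned candidates:

      import requests

      def reconcile(name):
          resp = requests.get(
              "https://www.wikidata.org/w/api.php",
              params={"action": "wbsearchentities", "search": name,
                      "language": "en", "type": "item", "format": "json"},
              timeout=10,
          )
          resp.raise_for_status()
          # (QID, description) candidate pairs for human review
          return [(hit["id"], hit.get("description", ""))
                  for hit in resp.json().get("search", [])]

      print(reconcile("Andrea Alciato"))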
  9. Dekkers, M.: Dublin Core and the rights management issue (2000) 0.03
    0.027711991 = product of:
      0.09699196 = sum of:
        0.044027276 = weight(_text_:management in 4453) [ClassicSimilarity], result of:
          0.044027276 = score(doc=4453,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.31599492 = fieldWeight in 4453, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=4453)
        0.052964687 = weight(_text_:case in 4453) [ClassicSimilarity], result of:
          0.052964687 = score(doc=4453,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.29144385 = fieldWeight in 4453, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=4453)
      0.2857143 = coord(2/7)
    
    Abstract
    Management of rights in electronic resources on the Internet is a complex issue. This can be considered almost universal knowledge, as paraphrases of this statement can be found in many discussions on the subject. This being the case, it is not surprising that a definitive, operational solution to the problem has yet to be found. In one of the world's leading metadata initiatives, the Dublin Core Metadata Initiative, discussions on this topic over several years have failed to reach a conclusion. Some people think the issue is simply too complex to handle; others believe that the provision of simple shortcuts to more detailed information should be sufficient. It could be argued that a solution to the issue is in fact out of scope for the Dublin Core element set, insofar as it aims only to establish a core set of descriptive metadata for resource discovery.
  10. Sutton, S.A.: Metadata quality, utility and the Semantic Web : the case of learning resources and achievement standards (2008) 0.02
    0.024926724 = product of:
      0.08724353 = sum of:
        0.061792135 = weight(_text_:case in 801) [ClassicSimilarity], result of:
          0.061792135 = score(doc=801,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.34001783 = fieldWeight in 801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=801)
        0.02545139 = product of:
          0.05090278 = sum of:
            0.05090278 = weight(_text_:studies in 801) [ClassicSimilarity], result of:
              0.05090278 = score(doc=801,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.30860704 = fieldWeight in 801, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=801)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    This article explores metadata quality issues in the creation and encoding of mappings or correlations of educational resources to K-12 achievement standards, and the deployment of the metadata generated on the Semantic Web. The discussion is framed in terms of quality indicia derived from empirical studies of metadata in the Web environment. A number of forces at work in determining the quality of correlations metadata are examined, including the nature of the emerging Semantic Web metadata ecosystem itself, the reliance on string values in metadata to identify achievement standards, the growing complexity of the standards environment, and the misalignment in terms of granularity between resources and declared objectives.
  11. Kurth, M.; Ruddy, D.; Rupp, N.: Repurposing MARC metadata : using digital project experience to develop a metadata management design (2004) 0.02
    0.024689937 = product of:
      0.08641478 = sum of:
        0.06961323 = weight(_text_:management in 4748) [ClassicSimilarity], result of:
          0.06961323 = score(doc=4748,freq=10.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.49963182 = fieldWeight in 4748, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=4748)
        0.016801544 = product of:
          0.033603087 = sum of:
            0.033603087 = weight(_text_:22 in 4748) [ClassicSimilarity], result of:
              0.033603087 = score(doc=4748,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.23214069 = fieldWeight in 4748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4748)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Metadata and information technology staff in libraries that are building digital collections typically extract and manipulate MARC metadata sets to provide access to digital content via non-MARC schemes. Metadata processing in these libraries involves defining the relationships between metadata schemes, moving metadata between schemes, and coordinating the intellectual activity and physical resources required to create and manipulate metadata. Actively managing the non-MARC metadata resources used to build digital collections is something most of these libraries have only begun to do. This article proposes strategies for managing MARC metadata repurposing efforts as the first step in a coordinated approach to library metadata management. Guided by lessons learned from Cornell University library mapping and transformation activities, the authors apply the literature of data resource management to library metadata management and propose a model for managing MARC metadata repurposing processes through the implementation of a metadata management design.
    Source
    Library hi tech. 22(2004) no.2, S.144-152
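    A minimal sketch of the repurposing step, using the pymarc library to pull a few MARC fields into a Dublin Core-style dictionary; the tag-to-element mapping is a common convention, not the authors' actual design, and the file name is hypothetical:

      from pymarc import MARCReader

      MAP = {"245": "title", "100": "creator", "650": "subject"}

      def marc_to_dc(record):
          dc = {}
          for tag, element in MAP.items():
              for f in record.get_fields(tag):
                  dc.setdefault(element, []).append(" ".join(f.get_subfields("a", "b")))
          return dc

      with open("records.mrc", "rb") as fh:
          for record in MARCReader(fh):
              print(marc_to_dc(record))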
  12. Maron, D.; Feinberg, M.: What does it mean to adopt a metadata standard? : a case study of Omeka and the Dublin Core (2018) 0.02
    0.023403717 = product of:
      0.08191301 = sum of:
        0.020754656 = weight(_text_:management in 4248) [ClassicSimilarity], result of:
          0.020754656 = score(doc=4248,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.14896142 = fieldWeight in 4248, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=4248)
        0.061158355 = weight(_text_:case in 4248) [ClassicSimilarity], result of:
          0.061158355 = score(doc=4248,freq=6.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.3365304 = fieldWeight in 4248, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03125 = fieldNorm(doc=4248)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose: The purpose of this paper is to employ a case study of the Omeka content management system to demonstrate how the adoption and implementation of a metadata standard (in this case, Dublin Core) can result in contrasting rhetorical arguments regarding metadata utility, quality, and reliability. In the Omeka example, the authors illustrate a conceptual disconnect in how two metadata stakeholders - standards creators and standards users - operationalize metadata quality. For standards creators such as the Dublin Core community, metadata quality involves implementing a standard properly, according to established usage principles; in contrast, for standards users like Omeka, metadata quality involves mere adoption of the standard, with little consideration of proper usage and accompanying principles.
    Design/methodology/approach: The paper uses an approach based on rhetorical criticism. It aims to establish whether Omeka's given ends (the position that Omeka claims to take regarding Dublin Core) align with Omeka's guiding ends (Omeka's actual argument regarding Dublin Core). To make this assessment, the paper examines both textual evidence (what Omeka says) and material-discursive evidence (what Omeka does).
    Findings: The evidence shows that, while Omeka appears to argue that adopting the Dublin Core is an integral part of Omeka's mission, the platform's lack of support for Dublin Core implementation makes an opposing argument. Ultimately, Omeka argues that the appearance of adopting a standard is more important than its careful implementation.
    Originality/value: This study contributes to our understanding of how metadata standards are understood and used in practice. The misalignment between Omeka's position and the goals of the Dublin Core community suggests that Omeka, and some portion of its users, do not value metadata interoperability and aggregation in the same way that the Dublin Core community does. This indicates that, although certain values regarding standards adoption may be pervasive in the metadata community, these values are not equally shared amongst all stakeholders in a digital library ecosystem. The way that standards creators (Dublin Core) understand what it means to "adopt a standard" differs from the way that standards users (Omeka) understand it.
  13. Smiraglia, R.P.: Content metadata : an analysis of Etruscan artifacts in a museum of archeology (2005) 0.02
    0.021365764 = product of:
      0.074780166 = sum of:
        0.052964687 = weight(_text_:case in 176) [ClassicSimilarity], result of:
          0.052964687 = score(doc=176,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.29144385 = fieldWeight in 176, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=176)
        0.021815477 = product of:
          0.043630954 = sum of:
            0.043630954 = weight(_text_:studies in 176) [ClassicSimilarity], result of:
              0.043630954 = score(doc=176,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.26452032 = fieldWeight in 176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.046875 = fieldNorm(doc=176)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Metadata schemes target resources as information-packages, without attention to the distinction between content and carrier. Most schemas are derived without empirical understanding of the concepts that need to be represented, the ways in which terms representing the central concepts might best be derived, and how metadata descriptions will be used for retrieval. Research is required to resolve this dilemma, and much research will be required if the plethora of schemes that already exist are to be made efficacious for resource description and retrieval. Here I report the results of a preliminary study, which was designed to see whether the bibliographic concept of "the work" could be of any relevance among artifacts held by a museum. I extend the "works metaphor" from the bibliographic to the artifactual domain by altering the terms of the definition slightly, thus: 1) instantiation is understood as content genealogy. Case studies of Etruscan artifacts from the University of Pennsylvania Museum of Archaeology and Anthropology are used to demonstrate the inherence of the work in non-documentary artifacts.
  14. Rousidis, D.; Garoufallou, E.; Balatsoukas, P.; Sicilia, M.-A.: Evaluation of metadata in research data repositories : the case of the DC.Subject Element (2015) 0.02
    0.020023016 = product of:
      0.070080556 = sum of:
        0.025943318 = weight(_text_:management in 2392) [ClassicSimilarity], result of:
          0.025943318 = score(doc=2392,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.18620178 = fieldWeight in 2392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2392)
        0.04413724 = weight(_text_:case in 2392) [ClassicSimilarity], result of:
          0.04413724 = score(doc=2392,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.24286987 = fieldWeight in 2392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2392)
      0.2857143 = coord(2/7)
    
    Abstract
    Research data repositories are growing rapidly and exponentially in volume. Their main goal is to provide scientists with the essential mechanisms to store, share, and re-use datasets generated at various stages of the research process. Despite the fact that metadata play an important role for research data management in the context of these repositories, several factors - such as the large volume of data and its complex lifecycles, as well as operational constraints related to financial resources and human factors - may impede the effectiveness of several metadata elements. The aim of the research reported in this paper was to perform a descriptive analysis of the DC.Subject metadata element and to identify its data quality problems in the context of the Dryad research data repository. In order to address this aim, a total of 4,557 packages and 13,638 data files were analysed following a data-preprocessing method. The findings showed emerging trends in the subject coverage of the repository (e.g. the most popular subjects and the authors who contributed the most to these subjects). Quality problems related to the lack of a controlled vocabulary and of standardisation were also very common. This study has implications for the evaluation of metadata and the improvement of the quality of the research data annotation process.
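    A minimal sketch of the descriptive analysis reported above: tally DC.Subject values and flag near-duplicate forms that betray the lack of a controlled vocabulary; the sample terms are invented:

      from collections import Counter, defaultdict

      subjects = ["Ecology", "ecology", "Ecology.", "population genetics",
                  "Population Genetics", "phylogenetics"]

      print(Counter(subjects).most_common(3))

      variants = defaultdict(set)
      for term in subjects:
          variants[term.lower().rstrip(". ")].add(term)  # crude normalization key
      for key, forms in variants.items():
          if len(forms) > 1:
              print(f"uncontrolled variants of '{key}': {sorted(forms)}")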
  15. Social tagging in a linked data environment. Edited by Diane Rasmussen Pennington and Louise F. Spiteri. London, UK: Facet Publishing, 2018. 240 pp. £74.95 (paperback). (ISBN 9781783303380) (2019) 0.02
    0.019956294 = product of:
      0.069847025 = sum of:
        0.04413724 = weight(_text_:case in 101) [ClassicSimilarity], result of:
          0.04413724 = score(doc=101,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.24286987 = fieldWeight in 101, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=101)
        0.025709787 = product of:
          0.051419575 = sum of:
            0.051419575 = weight(_text_:studies in 101) [ClassicSimilarity], result of:
              0.051419575 = score(doc=101,freq=4.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.3117402 = fieldWeight in 101, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=101)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Social tagging, hashtags, and geotags are used across a variety of platforms (Twitter, Facebook, Tumblr, WordPress, Instagram) in different countries and cultures. This book, representing researchers and practitioners across different information professions, explores how social tags can link content across a variety of environments. Most studies of social tagging have tended to focus on applications like library catalogs, blogs, and social bookmarking sites. By setting out a theoretical background and a series of case studies, this book explores the role of hashtags as a form of linked data, without the complex implementation of RDF and other Semantic Web technologies.
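    The book's central idea can be sketched in a few lines: shared hashtags act as lightweight links between content on different platforms, with no RDF required; the posts are invented:

      import re
      from collections import defaultdict

      posts = [
          ("twitter", "Opening day at the archive #localhistory #glam"),
          ("instagram", "Conservation lab tour #glam"),
          ("wordpress", "New finding aid published #localhistory"),
      ]

      linked = defaultdict(list)
      for platform, text in posts:
          for tag in re.findall(r"#(\w+)", text.lower()):
              linked[tag].append(platform)  # the tag itself is the link

      for tag, platforms in linked.items():
          print(f"#{tag} links content on: {platforms}")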
  16. Hert, C.A.; Denn, S.O.; Gillman, D.W.; Oh, J.S.; Pattuelli, M.C.; Hernandez, N.: Investigating and modeling metadata use to support information architecture development in the statistical knowledge network (2007) 0.02
    0.019690715 = product of:
      0.0689175 = sum of:
        0.031131983 = weight(_text_:management in 422) [ClassicSimilarity], result of:
          0.031131983 = score(doc=422,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.22344214 = fieldWeight in 422, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=422)
        0.03778552 = product of:
          0.07557104 = sum of:
            0.07557104 = weight(_text_:studies in 422) [ClassicSimilarity], result of:
              0.07557104 = score(doc=422,freq=6.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.45816267 = fieldWeight in 422, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.046875 = fieldNorm(doc=422)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Metadata and an appropriate metadata model are nontrivial components of information architecture conceptualization and implementation, particularly when disparate and dispersed systems are integrated. Metadata availability can enhance retrieval processes, improve information organization and navigation, and support management of digital objects. To support these activities efficiently, metadata need to be modeled appropriately for the tasks. The authors' work focuses on how to understand and model metadata requirements to support the work of end users of an integrative statistical knowledge network (SKN). They report on a series of user studies. These studies provide an understanding of metadata elements necessary for a variety of user-oriented tasks, related business rules associated with the use of these elements, and their relationship to other perspectives on metadata model development. This work demonstrates the importance of the user perspective in this type of design activity and provides a set of strategies by which the results of user studies can be systematically utilized to support that design.
  17. Patton, M.; Reynolds, D.; Choudhury, G.S.; DiLauro, T.: Toward a metadata generation framework : a case study at Johns Hopkins University (2004) 0.02
    0.01847466 = product of:
      0.06466131 = sum of:
        0.029351516 = weight(_text_:management in 1192) [ClassicSimilarity], result of:
          0.029351516 = score(doc=1192,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.21066327 = fieldWeight in 1192, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=1192)
        0.03530979 = weight(_text_:case in 1192) [ClassicSimilarity], result of:
          0.03530979 = score(doc=1192,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.1942959 = fieldWeight in 1192, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03125 = fieldNorm(doc=1192)
      0.2857143 = coord(2/7)
    
    Abstract
    In the June 2003 issue of D-Lib Magazine, Kenney et al. (2003) discuss a comparative study between Cornell's email reference staff and Google's Answers service. This interesting study provided insights into the potential impact of "computing and simple algorithms combined with human intelligence" for library reference services. As mentioned in the Kenney et al. article, Bill Arms (2000) had discussed the possibilities of automated digital libraries in an even earlier D-Lib article. Arms discusses not only automating reference services, but also another library function that seems to inspire lively debates about automation: metadata creation. While intended to illuminate, these debates sometimes generate more heat than light. In an effort to explore the potential for automating metadata generation, the Digital Knowledge Center (DKC) of the Sheridan Libraries at The Johns Hopkins University developed and tested an automated name authority control (ANAC) tool. ANAC represents a component of a digital workflow management system developed in connection with the digital Lester S. Levy Collection of Sheet Music. The evaluation of ANAC followed the spirit of the Kenney et al. study that was, as they stated, "more exploratory than scientific." These ANAC evaluation results are shared with the hope of fostering constructive dialogue and discussions about the potential for semi-automated techniques or frameworks for library functions and services such as metadata creation. The DKC's research agenda emphasizes the development of tools that combine automated processes and human intervention, with the overall goal of involving humans at higher levels of analysis and decision-making. Others have looked at issues regarding the automated generation of metadata. A session at the 2003 Joint Conference on Digital Libraries was devoted to automatic metadata creation, and a session at the 2004 conference addressed automated name disambiguation. Commercial vendors such as OCLC, Marcive, and LTI have long used automated techniques for matching names to Library of Congress authority records. We began developing ANAC as a component of a larger suite of open-source tools to support workflow management for digital projects. This article describes the goals for the ANAC tool, provides an overview of the metadata records used for testing, describes the architecture for ANAC, and concludes with discussions of the methodology and evaluation of the experiment comparing human cataloging and ANAC-generated results.
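    A minimal sketch of the kind of semi-automated matching an ANAC-style tool performs, here reduced to fuzzy string matching against a toy authority list rather than the DKC's actual algorithm:

      import difflib

      AUTHORITY = ["Levy, Lester S.", "Sousa, John Philip, 1854-1932",
                   "Joplin, Scott, 1868-1917"]

      def match_name(name, cutoff=0.6):
          # Candidates above the cutoff go to a cataloger for confirmation;
          # an empty list means the name needs fully manual authority work.
          return difflib.get_close_matches(name, AUTHORITY, n=3, cutoff=cutoff)

      print(match_name("Joplin, Scott"))  # ['Joplin, Scott, 1868-1917']
      print(match_name("Smith, Q."))      # []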
  18. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.02
    0.016611008 = product of:
      0.058138527 = sum of:
        0.04413724 = weight(_text_:case in 4550) [ClassicSimilarity], result of:
          0.04413724 = score(doc=4550,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.24286987 = fieldWeight in 4550, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4550)
        0.0140012875 = product of:
          0.028002575 = sum of:
            0.028002575 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
              0.028002575 = score(doc=4550,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.19345059 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4550)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
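    In the spirit of the template script described above, a minimal sketch using the Habanero client for the CrossRef API to populate a CSV for batch import; the DOI, file name, and column headings are placeholders, not the authors' actual template:

      import csv
      from habanero import Crossref

      dois = ["10.5555/12345678"]  # placeholder list of reviewed DOIs

      cr = Crossref()
      with open("batch_import.csv", "w", newline="", encoding="utf-8") as fh:
          writer = csv.writer(fh)
          writer.writerow(["dc.title", "dc.contributor.author", "dc.date.issued"])
          for doi in dois:
              msg = cr.works(ids=doi)["message"]
              authors = "; ".join(f"{a.get('family', '')}, {a.get('given', '')}"
                                  for a in msg.get("author", []))
              writer.writerow([msg.get("title", [""])[0], authors,
                               msg["issued"]["date-parts"][0][0]])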
  19. Gömpel, R.; Altenhöner, R.; Kunz, M.; Oehlschläger, S.; Werner, C.: Weltkongress Bibliothek und Information, 70. IFLA-Generalkonferenz in Buenos Aires : Aus den Veranstaltungen der Division IV Bibliographic Control, der Core Activities ICABS und UNIMARC sowie der Information Technology Section (2004) 0.02
    0.016256217 = product of:
      0.03793117 = sum of:
        0.014675758 = weight(_text_:management in 2874) [ClassicSimilarity], result of:
          0.014675758 = score(doc=2874,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.10533164 = fieldWeight in 2874, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.015625 = fieldNorm(doc=2874)
        0.017654896 = weight(_text_:case in 2874) [ClassicSimilarity], result of:
          0.017654896 = score(doc=2874,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.09714795 = fieldWeight in 2874, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.015625 = fieldNorm(doc=2874)
        0.005600515 = product of:
          0.01120103 = sum of:
            0.01120103 = weight(_text_:22 in 2874) [ClassicSimilarity], result of:
              0.01120103 = score(doc=2874,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.07738023 = fieldWeight in 2874, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2874)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    "Libraries: Tools for Education and Development" war das Motto der 70. IFLA-Generalkonferenz, dem Weltkongress Bibliothek und Information, der vom 22.-27. August 2004 in Buenos Aires, Argentinien, und damit erstmals in Lateinamerika stattfand. Rund 3.000 Teilnehmerinnen und Teilnehmer, davon ein Drittel aus spanischsprachigen Ländern, allein 600 aus Argentinien, besuchten die von der IFLA und dem nationalen Organisationskomitee gut organisierte Tagung mit mehr als 200 Sitzungen und Veranstaltungen. Aus Deutschland waren laut Teilnehmerverzeichnis leider nur 45 Kolleginnen und Kollegen angereist, womit ihre Zahl wieder auf das Niveau von Boston gesunken ist. Erfreulicherweise gab es nunmehr bereits im dritten Jahr eine deutschsprachige Ausgabe des IFLA-Express. Auch in diesem Jahr soll hier über die Veranstaltungen der Division IV Bibliographic Control berichtet werden. Die Arbeit der Division mit ihren Sektionen Bibliography, Cataloguing, Classification and Indexing sowie der neuen Sektion Knowledge Management bildet einen der Schwerpunkte der IFLA-Arbeit, die dabei erzielten konkreten Ergebnisse und Empfehlungen haben maßgeblichen Einfluss auf die tägliche Arbeit der Bibliothekarinnen und Bibliothekare. Erstmals wird auch ausführlich über die Arbeit der Core Activities ICABS und UNIMARC und der Information Technology Section berichtet.
    Content
    Classification and Indexing Section: The Working Group on Guidelines for Multilingual Thesauri has completed its work; the guidelines will be available on IFLAnet at the end of 2004. The working group on minimum standards for subject indexing in national bibliographies, established in 2003, has agreed, in consultation with the members of the Standing Committee, on the name "Guidelines for minimal requirements for subject access by national bibliographic agencies". The "Survey on Subject Heading Languages Used in National Libraries and Bibliographies" by Magda Heiner-Freiling is to serve as the basis for the future work. Building on it, the group will determine which types of works are indexed, with which instruments, and in what depth. A further working group of the section is concerned with subject access to web publications (Working Group on Subject Access to Web Resources). The session "Implementation and adaption of global tools for subject access to local needs" was well attended. Three speakers showed how the Library of Congress Subject Headings (LoC) are adopted in their language areas (Development of a Spanish subject heading list; Subject indexing in Sweden) and how cooperation with the LoC can be shaped to accommodate the particular terminological needs of a language and cultural area outside the USA (The SACO Program in Latin America). From a German point of view, the paper "Subject indexing between international standards and local context - the Italian case" deserved special attention: the development of a code for verbal subject indexing and the compilation of an Italian subject headings authority file explicitly follow the German approach with RSWK and SWD.
    Knowledge Management Section: The goal of the new section is to promote the development and implementation of knowledge management in libraries and information centres. To this end, the section intends to offer an international platform for professional communication, making the topic better known and more widely understood and thereby underlining its significance for libraries and the institutions that work with them. IFLA-CDNL Alliance for Bibliographic Standards (ICABS): One year after its founding in Berlin, the IFLA Core Activity "IFLA-CDNL Alliance for Bibliographic Standards (ICABS)" presented the range of its fields of work to a large professional audience in Buenos Aires for the first time. The IFLA Core Activity UNIMARC, one of the partners in the alliance, hosted a session on Thursday morning entitled "The holdings record as a bibliographic control tool". On the afternoon of the same day, the comprehensive ICABS session "The new IFLA-CDNL Alliance for Bibliographic Standards - umbrella for multifaceted activities: strategies and practical ways to improve international coordination" took place, moderated by the Director General of Die Deutsche Bibliothek, Dr. Elisabeth Niggemann. After recounting how the alliance came into being, the chair of the Advisory Board gave a brief overview of the organization and work of ICABS as the umbrella for the manifold activities in the field of bibliographic standards. Representatives of all the libraries united in ICABS then presented their fields of work and results.
  20. Stubley, P.: Cataloguing standards and metadata for e-commerce (1999) 0.01
    0.014675759 = product of:
      0.102730304 = sum of:
        0.102730304 = weight(_text_:management in 1915) [ClassicSimilarity], result of:
          0.102730304 = score(doc=1915,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.73732144 = fieldWeight in 1915, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.109375 = fieldNorm(doc=1915)
      0.14285715 = coord(1/7)
    
    Source
    Information management report. 1999, Dec., S.16-18
    Theme
    Information Resources Management


Types

  • a 180
  • el 19
  • m 15
  • s 13
  • b 2
  • x 2
