Search (196 results, page 1 of 10)

  • theme_ss:"Metadaten"
  • type_ss:"a"
  1. Preminger, M.; Rype, I.; Ådland, M.K.; Massey, D.; Tallerås, K.: The public library metadata landscape : the case of Norway 2017-2018 (2020) 0.04
    Score 0.0412 = coord(2/8) × (0.0810 [libraries, tf=12] + 0.0838 [case, tf=4])
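    The per-result score breakdowns (compressed here and throughout) are Lucene ClassicSimilarity explain output; reconstructed from their components, each score follows the classic TF-IDF form

      score(q,d) = \mathrm{coord}(q,d) \cdot \sum_{t \in q} \underbrace{\mathrm{idf}(t) \cdot \mathrm{queryNorm}}_{\mathrm{queryWeight}} \cdot \underbrace{\sqrt{\mathrm{tf}(t,d)} \cdot \mathrm{idf}(t) \cdot \mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}},
      \qquad \mathrm{idf}(t) = 1 + \ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}(t) + 1}

    The numbers above check out: idf(libraries) = 1 + ln(44218/4500) ≈ 3.285 and tf = √12 ≈ 3.464, so fieldWeight ≈ 3.464 · 3.285 · 0.0547 ≈ 0.622 and queryWeight ≈ 3.285 · 0.0396 ≈ 0.130, whose product is the 0.0810 term in the breakdown.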
    
    Abstract
    The aim of this paper is to gauge cataloging practices within the public library sector, as seen from the catalog, with Norway as a case and a sample of records from public libraries and cataloging agencies as the data. Findings suggest that libraries make few changes to the records they import from central agencies, and that larger libraries make more changes than smaller ones. Findings also suggest that libraries catalog and modify records with their patrons in mind, and that, though its extent is not large, cataloging proficiency is still required in the public library domain, at least in larger libraries, to ensure correct and consistent metadata.
  2. Tallerås, K.; Massey, D.; Husevåg, A.-S.R.; Preminger, M.; Pharo, N.: Evaluating (linked) metadata transformations across cultural heritage domains (2014) 0.03
    Score 0.0327 = coord(2/8) × (0.0718 [case, tf=4] + 0.0592 [studies, tf=4])
    
    Abstract
    This paper describes an approach to evaluating different aspects of the transformation of existing metadata into Linked Data-compliant knowledge bases. At Oslo and Akershus University College of Applied Sciences, in the TORCH project, we are working on three experimental case studies on the extraction and mapping of broadcasting data and its interlinking with transformed library data. The case studies investigate problems of heterogeneity and ambiguity within and between the domains, as well as problems arising in the interlinking process. The proposed approach makes it possible to collaborate on evaluation across different experiments, and to rationalize and streamline the process.
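    The abstract does not spell out how the interlinking is expressed; as a minimal sketch of the usual RDF idiom (owl:sameAs for confident identity, skos:closeMatch where the ambiguity problems mentioned above apply), with all URIs hypothetical:

      from rdflib import Graph, URIRef
      from rdflib.namespace import OWL, SKOS

      g = Graph()
      # Hypothetical URIs for the same work in two transformed datasets.
      library_work = URIRef("http://example.org/library/work/peer-gynt")
      broadcast_work = URIRef("http://example.org/broadcast/programme/peer-gynt-1993")

      # Confident identity between the library and broadcasting records.
      g.add((library_work, OWL.sameAs, broadcast_work))
      # For ambiguous matches, a weaker link is safer:
      # g.add((library_work, SKOS.closeMatch, broadcast_work))

      print(g.serialize(format="turtle"))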
  3. Van Kleeck, D.; Langford, G.; Lundgren, J.; Nakano, H.; O'Dell, A.J.; Shelton, T.: Managing bibliographic data quality in a consortial academic library : a case study (2016) 0.03
    Score 0.0326 = coord(2/8) × (0.0468 [libraries, tf=4] + 0.0838 [case, tf=4])
    
    Abstract
    This article presents a case study of quality management for print and electronic resource metadata, summarizing problems and solutions encountered by the Cataloging and Discovery Services Department in the George A. Smathers Libraries at the University of Florida. The authors discuss national, state, and local standards for cataloging, automated and manual record enhancements for data, user feedback, and statewide consortial factors. Findings show that adherence to standards, proactive cleanup of data via manual processes and automated tools, collaboration with vendors and stakeholders, and continual assessment of workflows are key to the management of bibliographic data quality in consortial academic libraries.
  4. Lorenzo, L.; Mak, L.; Smeltekop, N.: FAST Headings in MODS : Michigan State University Libraries digital repository case study (2023) 0.03
    Score 0.0326 = coord(2/8) × (0.0468 [libraries, tf=4] + 0.0838 [case, tf=4])
    
    Abstract
    The Michigan State University Libraries (MSUL) digital repository contains numerous collections of openly available material. Since 2016, the digital repository has been using Faceted Application of Subject Terminology (FAST) subject headings as its primary subject vocabulary in order to streamline faceting, display, and search. The MSUL FAST use case presents some challenges that are not addressed by existing MARC-focused FAST tools. This paper outlines the MSUL digital repository team's justification for including FAST headings in the digital repository, as well as workflows for adding FAST headings to Metadata Object Description Schema (MODS) metadata, maintaining them, and using them for discovery.
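    MODS records a FAST term in a <subject> element whose authority attribute names the vocabulary; a sketch using only the standard library (the heading, FAST identifier, and attribute usage are illustrative and should be checked against the local MODS profile):

      import xml.etree.ElementTree as ET

      MODS_NS = "http://www.loc.gov/mods/v3"
      ET.register_namespace("mods", MODS_NS)

      def fast_subject(heading: str, fast_id: str) -> ET.Element:
          # Build a MODS <subject> carrying a FAST heading; verify the
          # authority/valueURI usage against your local MODS profile.
          subject = ET.Element(f"{{{MODS_NS}}}subject", attrib={
              "authority": "fast",
              "valueURI": f"http://id.worldcat.org/fast/{fast_id}",
          })
          ET.SubElement(subject, f"{{{MODS_NS}}}topic").text = heading
          return subject

      # Hypothetical heading and FAST identifier.
      print(ET.tostring(fast_subject("Metadata", "1017519"), encoding="unicode"))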
  5. Sutton, S.A.: Metadata quality, utility and the Semantic Web : the case of learning resources and achievement standards (2008) 0.03
    Score 0.0270 = coord(2/8) × (0.0592 [case, tf=2] + 0.0488 [studies, tf=2])
    
    Abstract
    This article explores metadata quality issues in the creation and encoding of mappings or correlations of educational resources to K-12 achievement standards, and the deployment of the metadata generated on the Semantic Web. The discussion is framed in terms of quality indicia derived from empirical studies of metadata in the Web environment. A number of forces at work in determining the quality of correlations metadata are examined, including the nature of the emerging Semantic Web metadata ecosystem itself, the reliance on string values in metadata to identify achievement standards, the growing complexity of the standards environment, and the misalignment in granularity between resources and declared objectives.
  6. Smiraglia, R.P.: Content metadata : an analysis of Etruscan artifacts in a museum of archeology (2005) 0.02
    Score 0.0232 = coord(2/8) × (0.0508 [case, tf=2] + 0.0418 [studies, tf=2])
    
    Abstract
    Metadata schemes target resources as information packages, without attention to the distinction between content and carrier. Most schemas are derived without an empirical understanding of the concepts that need to be represented, the ways in which terms representing the central concepts might best be derived, and how metadata descriptions will be used for retrieval. Research is required to resolve this dilemma, and much research will be required if the plethora of schemes that already exist is to be made efficacious for resource description and retrieval. Here I report the results of a preliminary study, which was designed to see whether the bibliographic concept of "the work" could be of any relevance among artifacts held by a museum. I extend the "works metaphor" from the bibliographic to the artifactual domain by altering the terms of the definition slightly, thus: 1) instantiation is understood as content genealogy. Case studies of Etruscan artifacts from the University of Pennsylvania Museum of Archaeology and Anthropology are used to demonstrate the inherence of the work in non-documentary artifacts.
  7. Boydston, J.M.K.; Leysen, J.M.: Observations on the catalogers' role in descriptive metadata creation in academic libraries (2006) 0.02
    Score 0.0231 = coord(2/8) × (0.0331 [libraries, tf=2] + 0.0592 [case, tf=2])
    
    Abstract
    This article examines the case for the participation of catalogers in the creation of descriptive metadata. Metadata creation is an extension of catalogers' existing skills, abilities, and knowledge, and as such it should be encouraged and supported. However, issues in this process, such as cost, the supply of catalogers, and the need for further training, will also be examined. The authors use examples from the literature and their own experiences in descriptive metadata creation. Suggestions for future research on the topic are included.
  8. Ashton, J.; Kent, C.: New approaches to subject indexing at the British Library (2017) 0.02
    Score 0.0231 = coord(2/8) × (0.0331 [libraries, tf=2] + 0.0592 [case, tf=2])
    
    Abstract
    The constantly changing metadata landscape means that libraries need to rethink their approach to standards and subject analysis to enable discovery across vast areas of both print and digital content. This article presents a case study from the British Library that assesses the feasibility of adopting FAST (Faceted Application of Subject Terminology), either to selectively extend the scope of subject indexing of current and legacy content or to implement FAST as a replacement for all LCSH in current cataloging workflows.
  9. Mi, X.M.; Pollock, B.M.: Metadata schema to facilitate linked data for 3D digital models of cultural heritage collections : a University of South Florida Libraries case study (2018) 0.02
    Score 0.0227 = coord(2/8) × (0.0401 [libraries, tf=4] + 0.0508 [case, tf=2])
    
    Abstract
    The University of South Florida Libraries house and provide access to a collection of cultural heritage materials and 3D digital models. In an effort to provide greater access to these collections, a linked data project has been implemented. A metadata schema for 3D cultural heritage objects that uses linked data is an excellent way to share these collections with other repositories, gaining global exposure for, and access to, these valuable resources. This article shares the process of building the 3D cultural heritage metadata model, along with an assessment of the model and recommendations for future linked data projects.
  10. Hooland, S. van; Bontemps, Y.; Kaufman, S.: Answering the call for more accountability : applying data profiling to museum metadata (2008) 0.02
    Score 0.0220 = coord(2/8) × (0.0718 [case, tf=4] + 0.0161 ["22", tf=2, coord(1/2)])
    
    Abstract
    Although the issue of metadata quality is recognized as an important topic within the metadata research community, the cultural heritage sector has been slow to develop methodologies, guidelines, and tools for addressing this topic in practice. This paper concentrates on metadata quality specifically within the museum sector and describes the potential of data-profiling techniques for metadata quality evaluation. A case study illustrates the application of a general-purpose data-profiling tool to a large collection of metadata records from an ethnographic collection. After an analysis of the results of the case study, the paper reviews further steps in our research and presents the implementation of a metadata quality tool within an open-source collection management software.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
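    The data-profiling tool used in the paper is not reimplemented here; as a rough sketch of what basic profiling over metadata records involves (per-field completeness and value distributions), assuming records as plain dicts:

      from collections import Counter

      def profile(records: list[dict]) -> None:
          # Report per-field completeness and the most common values -
          # a crude stand-in for a general-purpose profiling tool.
          n = len(records)
          fields = {f for r in records for f in r}
          for field in sorted(fields):
              values = [r[field] for r in records if r.get(field) not in (None, "")]
              print(f"{field}: {len(values)}/{n} filled, "
                    f"top values: {Counter(values).most_common(3)}")

      # Hypothetical museum records showing typical quality problems.
      profile([
          {"title": "Mask", "material": "wood"},
          {"title": "Mask", "material": ""},      # missing value
          {"title": "Drum", "material": "Wood"},  # inconsistent casing
      ])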
  11. Farney, T.: Using Google Tag Manager to share code : designing shareable tags (2019) 0.02
    Score 0.0208 = coord(2/8) × (0.0409 [libraries, tf=6] + 0.0423 [case, tf=2])
    
    Abstract
    Sharing code between libraries is not a new phenomenon, and neither is Google Tag Manager (GTM). GTM launched in 2012 as a JavaScript and HTML manager intended to ease the implementation of different analytics trackers and marketing scripts on a website, but its tag system can also be used to load other code onto a site. Exporting and importing tags is a simple process, which makes code sharing possible without requiring much coding experience. The entire process involves creating the script tag in GTM, exporting the GTM content into a sharable export file for someone else to import into their library's GTM container, and finally publishing that imported file to push the code to the website it was designed for. This case study provides an example of designing and sharing a GTM container loaded with advanced Google Analytics configurations, such as event tracking and custom dimensions, for other libraries using the Summon discovery service. It also discusses processes for designing GTM tags for export and best practices for importing and testing GTM content created by other libraries, and concludes by evaluating the pros and cons of encouraging GTM use.
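    The sharable export file mentioned above is a JSON document; the field names below (exportFormatVersion, containerVersion, tag) are assumptions recalled from GTM's v2 export format and should be verified against a real export. A sketch that lists the tags in a shared container before importing it:

      import json

      def list_shared_tags(path: str) -> None:
          # Print the tags in a GTM container export so recipients can
          # review what they are importing. The field names are assumed
          # from the v2 export format - check them against a real file.
          with open(path, encoding="utf-8") as f:
              container = json.load(f)
          for tag in container.get("containerVersion", {}).get("tag", []):
              print(tag.get("name"), "-", tag.get("type"))

      # list_shared_tags("summon-analytics-container.json")  # hypothetical file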
  12. Lubas, R.L.; Wolfe, R.H.W.; Fleischman, M.: Creating metadata practices for MIT's OpenCourseWare Project (2004) 0.02
    Score 0.0190 = coord(2/8) × (0.0573 [libraries, tf=6] + 0.0188 ["22", tf=2, coord(1/2)])
    
    Abstract
    The MIT Libraries were called upon to recommend a metadata scheme for the resources contained in MIT's OpenCourseWare (OCW) project. The resources in OCW needed descriptive, structural, and technical metadata. The SCORM standard, which uses IEEE Learning Object Metadata as its descriptive standard, was selected for its focus on educational objects. However, it was clear that the Libraries would need to recommend how the standard should be applied and adapted to accommodate needs not addressed in the standard's specifications. The newly formed MIT Libraries Metadata Unit adapted established practices from AACR2 and MARC traditions when facing situations in which there were no precedents to follow.
    Source
    Library hi tech. 22(2004) no.2, S.138-143
  13. Brugger, J.M.: Cataloging for digital libraries (1996) 0.02
    Score 0.0187 = coord(2/8) × (0.0535 [libraries, tf=4] + 0.0215 ["22", tf=2, coord(1/2)])
    
    Abstract
    Using grant funding, some prominent creators of digital libraries have promised users of networked resources certain kinds of access. Some of this access finds a ready-made vehicle in USMARC, some in the TEI header, and some has yet to find the most appropriate vehicle. In its quest to provide access to what users need, the cataloging community can show leadership by exploring the strength inherent in a metadata-providing system like the TEI header.
    Source
    Cataloging and classification quarterly. 22(1996) nos.3/4, S.59-73
  14. Tennant, R.: A bibliographic metadata infrastructure for the twenty-first century (2004) 0.02
    Score 0.0170 = coord(2/8) × (0.0378 [libraries, tf=2] + 0.0304 ["22", tf=4, coord(1/2)])
    
    Abstract
    The current library bibliographic infrastructure was constructed in the early days of computers - before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
    Date
    9.12.2005 19:22:38
    Source
    Library hi tech. 22(2004) no.2, S.175-181
  15. Patton, M.; Reynolds, D.; Choudhury, G.S.; DiLauro, T.: Toward a metadata generation framework : a case study at Johns Hopkins University (2004) 0.02
    Score 0.0166 = coord(2/8) × (0.0327 [libraries, tf=6] + 0.0339 [case, tf=2])
    
    Abstract
    In the June 2003 issue of D-Lib Magazine, Kenney et al. (2003) discuss a comparative study between Cornell's email reference staff and Google's Answers service. This interesting study provided insights on the potential impact of "computing and simple algorithms combined with human intelligence" for library reference services. As mentioned in the Kenney et al. article, Bill Arms (2000) had discussed the possibilities of automated digital libraries in an even earlier D-Lib article. Arms discusses not only automating reference services, but also another library function that seems to inspire lively debates about automation: metadata creation. While intended to illuminate, these debates sometimes generate more heat than light. In an effort to explore the potential for automating metadata generation, the Digital Knowledge Center (DKC) of the Sheridan Libraries at The Johns Hopkins University developed and tested an automated name authority control (ANAC) tool. ANAC represents a component of a digital workflow management system developed in connection with the digital Lester S. Levy Collection of Sheet Music. The evaluation of ANAC followed the spirit of the Kenney et al. study, which was, as they stated, "more exploratory than scientific." These ANAC evaluation results are shared with the hope of fostering constructive dialogue and discussion about the potential of semi-automated techniques or frameworks for library functions and services such as metadata creation. The DKC's research agenda emphasizes the development of tools that combine automated processes and human intervention, with the overall goal of involving humans at higher levels of analysis and decision-making. Others have looked at issues regarding the automated generation of metadata: a session at the 2003 Joint Conference on Digital Libraries was devoted to automatic metadata creation, and a session at the 2004 conference addressed automated name disambiguation. Commercial vendors such as OCLC, Marcive, and LTI have long used automated techniques for matching names to Library of Congress authority records. We began developing ANAC as a component of a larger suite of open-source tools to support workflow management for digital projects. This article describes the goals for the ANAC tool, provides an overview of the metadata records used for testing, describes the architecture of ANAC, and concludes with a discussion of the methodology and evaluation of the experiment comparing human cataloging with ANAC-generated results.
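    ANAC's matching algorithm is not described in this overview; purely as a hedged illustration of the underlying task (matching transcribed names against authority headings, with low-confidence cases left for human review), a minimal sketch:

      from difflib import SequenceMatcher

      def best_authority_match(name, headings, threshold=0.7):
          # Return the closest authority heading, or None so a human
          # can review the record when no candidate clears the threshold.
          def norm(s):
              return " ".join(s.lower().replace(",", " ").split())
          score, heading = max(
              (SequenceMatcher(None, norm(name), norm(h)).ratio(), h)
              for h in headings)
          return heading if score >= threshold else None

      # Hypothetical headings from an authority file.
      print(best_authority_match("Levy, Lester S.",
                                 ["Levy, Lester S., 1896-1989", "Levy, L. E."]))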
  16. Husevåg, A.-S.R.: Named entities in indexing : a case study of TV subtitles and metadata records (2016) 0.02
    Score 0.0165 = coord(2/8) × (0.0236 [libraries, tf=2] + 0.0423 [case, tf=2])
    
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  17. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.02
    Score 0.0165 = coord(2/8) × (0.0236 [libraries, tf=2] + 0.0423 [case, tf=2])
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes, or funding agencies. Data curation is typically done by people with a library and information science background who are proficient with XML technologies but are not full-stack programmers. We therefore present OXPath, an open-source extension of XPath that lets the user declare what data to extract from a website, so that digital library curators do not have to program custom web scrapers from scratch. Taking one of our own use cases as an example, we walk through the process of creating an OXPath wrapper for metadata harvesting and point out some practical things to consider when creating a web scraper (with OXPath). We also present a syntax-highlighting plugin for the popular text editor Atom, developed to further support OXPath users and simplify the authoring process.
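    OXPath itself supplies the declarative syntax; purely to illustrate the flavor of XPath-driven extraction it builds on, a rough Python/lxml analogue over an invented page structure (this is not OXPath, which adds actions and extraction markers on top of XPath):

      from lxml import html

      # Invented publisher page structure, standing in for a real provider.
      page = html.fromstring("""
      <div class="pub"><h2>Metadata quality revisited</h2>
        <span class="year">2017</span></div>
      <div class="pub"><h2>Linked data in practice</h2>
        <span class="year">2016</span></div>
      """)

      # Declarative extraction: one XPath per field, no parsing code.
      for pub in page.xpath('//div[@class="pub"]'):
          title = pub.xpath('string(.//h2)')
          year = pub.xpath('string(.//span[@class="year"])')
          print(f"{title} ({year})")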
  18. Khoo, M.J.; Ahn, J.-w.; Binding, C.; Jones, H.J.; Lin, X.; Massam, D.; Tudhope, D.: Augmenting Dublin Core digital library metadata with Dewey Decimal Classification (2015) 0.02
    Score 0.0164 = coord(2/8) × (0.0378 [libraries, tf=8] + 0.0279 [studies, tf=2])
    
    Abstract
    Purpose - The purpose of this paper is to describe a new approach to a well-known problem for digital libraries: how to search across multiple unrelated libraries with a single query.
    Design/methodology/approach - The approach involves creating new Dewey Decimal Classification terms and numbers from existing Dublin Core records. In total, 263,550 records were harvested from three digital libraries. Weighted key terms were extracted from the title, description, and subject fields of each record. Ranked DDC classes were automatically generated from these key terms by considering DDC hierarchies via a series of filtering and aggregation stages. A mean reciprocal ranking evaluation compared a sample of 49 generated classes against DDC classes created by a trained librarian for the same records.
    Findings - The best results combined weighted key terms from the title, description, and subject fields. Performance declines with increased specificity of DDC level. The results compare favorably with similar studies.
    Research limitations/implications - The metadata harvest required manual intervention and the evaluation was resource intensive. Future research will look at evaluation methodologies that take account of issues of consistency and ecological validity.
    Practical implications - The method does not require training data and is easily scalable. The pipeline can be customized for individual use cases, for example, recall or precision enhancing.
    Social implications - The approach can provide centralized access to information from multiple domains currently provided by individual digital libraries.
    Originality/value - The approach addresses metadata normalization in the context of web resources. The automatic classification approach accounts for matches within hierarchies, aggregating lower-level matches to broader parents, and thus approximates the practices of a human cataloger.
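    The mean reciprocal ranking evaluation mentioned above averages, over the sample, the reciprocal of the rank at which the librarian-assigned class appears in the generated ranking; a minimal sketch:

      def mean_reciprocal_rank(ranked_lists, gold):
          # MRR over a sample: 1/rank of the correct class per record,
          # 0 when the correct class is absent from the ranking.
          total = 0.0
          for ranking, correct in zip(ranked_lists, gold):
              if correct in ranking:
                  total += 1.0 / (ranking.index(correct) + 1)
          return total / len(gold)

      # Hypothetical generated DDC rankings vs. librarian-assigned classes.
      print(mean_reciprocal_rank(
          [["020", "025", "028"], ["940", "943"]],  # generated, best first
          ["025", "943"],                           # librarian's classes
      ))  # -> (1/2 + 1/2) / 2 = 0.5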
  19. Kurth, M.; Ruddy, D.; Rupp, N.: Repurposing MARC metadata : using digital project experience to develop a metadata management design (2004) 0.02
    Score 0.0163 = coord(2/8) × (0.0491 [libraries, tf=6] + 0.0161 ["22", tf=2, coord(1/2)])
    
    Abstract
    Metadata and information technology staff in libraries that are building digital collections typically extract and manipulate MARC metadata sets to provide access to digital content via non-MARC schemes. Metadata processing in these libraries involves defining the relationships between metadata schemes, moving metadata between schemes, and coordinating the intellectual activity and physical resources required to create and manipulate metadata. Actively managing the non-MARC metadata resources used to build digital collections is something most of these libraries have only begun to do. This article proposes strategies for managing MARC metadata repurposing efforts as the first step in a coordinated approach to library metadata management. Guided by lessons learned from Cornell University library mapping and transformation activities, the authors apply the literature of data resource management to library metadata management and propose a model for managing MARC metadata repurposing processes through the implementation of a metadata management design.
    Source
    Library hi tech. 22(2004) no.2, S.144-152
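    Moving metadata between schemes, as described above, typically starts from a crosswalk table; a minimal sketch of repurposing MARC fields into Dublin Core with a plain mapping (the rows shown follow the conventional MARC-to-DC crosswalk, but any real project would define its own, keyed on subfields):

      # Illustrative subset of the conventional MARC-to-DC crosswalk.
      CROSSWALK = {
          "245": "title",
          "100": "creator",
          "260": "date",     # imprint; real crosswalks key on subfield $c
          "650": "subject",
      }

      def marc_to_dc(marc_fields):
          # Repurpose a flat list of (tag, value) MARC fields into DC.
          dc = {}
          for tag, value in marc_fields:
              element = CROSSWALK.get(tag)
              if element:
                  dc.setdefault(element, []).append(value)
          return dc

      print(marc_to_dc([("245", "Repurposing MARC metadata"),
                        ("650", "Metadata"), ("650", "Cataloging")]))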
  20. White, H.: Examining scientific vocabulary : mapping controlled vocabularies with free text keywords (2013) 0.02
    Score 0.0160 = coord(1/8) × (0.0850 [area, tf=2] + 0.0430 ["22", tf=2])
    
    Abstract
    Scientific repositories create a new environment for studying traditional information science issues. The interaction between indexing terms provided by users and controlled vocabularies continues to be an area of debate and study. This article reports and analyzes findings from a study that mapped the relationships between free-text keywords and controlled vocabulary terms used in the sciences. Based on this study's findings, recommendations are made about which vocabularies may be better to use in scientific data repositories.
    Date
    29. 5.2015 19:09:22
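    The study's mapping method is not detailed in the abstract; as a hedged sketch of the most basic operation involved (checking how many free-text keywords resolve to a controlled vocabulary after simple normalization):

      def match_rate(keywords, vocabulary):
          # Share of free-text keywords that exactly match a controlled
          # term after simple normalization (lowercase, trimmed).
          def norm(s):
              return s.strip().lower()
          controlled = {norm(t) for t in vocabulary}
          return sum(norm(k) in controlled for k in keywords) / len(keywords)

      # Hypothetical keywords vs. a tiny controlled vocabulary.
      print(match_rate(["climate change", "CO2", "Oceanography"],
                       {"Climate change", "Oceanography", "Carbon dioxide"}))
      # -> 2/3: "CO2" needs more than string normalization to map.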
