Search (16 results, page 1 of 1)

  • theme_ss:"Metadaten"
  • type_ss:"el"
  1. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.00
    0.0042065145 = product of:
      0.0294456 = sum of:
        0.0294456 = product of:
          0.0588912 = sum of:
            0.0588912 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.0588912 = score(doc=6048,freq=2.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 9.2007 15:41:14
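The nested breakdown under result 1 is Lucene ClassicSimilarity (TF-IDF) explain output. As a minimal sketch of how the innermost term weight (0.0588912) is assembled from the listed components:

```python
import math

def classic_similarity_term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """Recompute one term's score from Lucene ClassicSimilarity explain components."""
    tf = math.sqrt(freq)                              # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                   # queryWeight
    field_weight = tf * idf * field_norm              # fieldWeight
    return query_weight * field_weight                # score(doc, freq)

# Components taken from the explain tree above (term "22" in doc 6048):
score = classic_similarity_term_score(freq=2.0, doc_freq=3622, max_docs=44218,
                                      field_norm=0.09375, query_norm=0.03622214)
# score ~ 0.0588912; the outer coord(1/2) and coord(1/7) factors then
# scale this down to the final 0.0042065145.
```

The coord factors penalize queries whose clauses match in only some of the searched fields.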
  2. Mehler, A.; Waltinger, U.: Automatic enrichment of metadata (2009) 0.00
    
    Abstract
    In this talk we present a retrieval model based on social ontologies. More specifically, we utilize the Wikipedia category system in order to perform semantic searches. That is, textual input is used to build queries that retrieve documents which do not necessarily contain any query term but are semantically related to the input text by virtue of their content. We present a desktop application which utilizes this search facility in a web-based environment - the so-called eHumanities Desktop.
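The category-based matching described above can be sketched as follows (an illustration only, with hypothetical category assignments; the authors' system operates over the full Wikipedia category graph): documents are ranked by the categories they share with the input text rather than by literal term overlap.

```python
def category_overlap_rank(input_categories, docs):
    """Rank documents by Wikipedia categories shared with the input text.
    A document can rank highly without containing any literal query term."""
    scored = []
    for doc_id, doc_categories in docs.items():
        shared = input_categories & doc_categories
        if shared:
            # Jaccard similarity over the two category sets
            scored.append((doc_id, len(shared) / len(input_categories | doc_categories)))
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Hypothetical category assignments:
query_cats = {"Metadata", "Digital libraries"}
docs = {
    "d1": {"Metadata", "Cataloging"},
    "d2": {"Semantic Web", "Digital libraries", "Metadata"},
    "d3": {"Astronomy"},
}
ranking = category_overlap_rank(query_cats, docs)  # d2 ranks above d1; d3 is filtered out
```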
  3. Understanding metadata (2004) 0.00
    
    Date
    10. 9.2004 10:22:40
  4. Sewing, S.: Bestandserhaltung und Archivierung : Koordinierung auf der Basis eines gemeinsamen Metadatenformates in den deutschen und österreichischen Bibliotheksverbünden (2021) 0.00
    
    Date
    22. 5.2021 12:43:05
  5. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.00
    
    Date
    10.11.2018 16:27:22
  6. Krause, J.; Schwänzl, R.: Inhaltserschließung : Formale Beschreibung, Identifikation und Retrieval, MetaDaten, Vernetzung (1998) 0.00
    
  7. CARMEN : Content Analysis, Retrieval und Metadata: Effective Networking (1999) 0.00
    
  8. Baker, T.: ¬A grammar of Dublin Core (2000) 0.00
    
    Date
    26.12.2011 14:01:22
  9. Hardesty, J.L.; Young, J.B.: ¬The semantics of metadata : Avalon Media System and the move to RDF (2017) 0.00
    
    Abstract
    The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
  10. Blanchi, C.; Petrone, J.: Distributed interoperable metadata registry (2001) 0.00
    
    Abstract
    Interoperability between digital libraries depends on effective sharing of metadata. Successful sharing of metadata requires common standards for metadata exchange. Previous efforts have focused on either defining a single metadata standard, such as Dublin Core, or building digital library middleware, such as Z39.50 or Stanford's Digital Library Interoperability Protocol. In this article, we propose a distributed architecture for managing metadata and metadata schema. Instead of normalizing all metadata and schema to a single format, we have focused on building a middleware framework that tolerates heterogeneity. By providing facilities for typing and dynamic conversion of metadata, our system permits continual introduction of new forms of metadata with minimal impact on compatibility.
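The "typing and dynamic conversion of metadata" idea can be sketched as a registry of converter functions keyed by (source, target) format pair, so that a newly introduced metadata type only needs its own converters rather than forcing every record into one normalized schema. The format names and fields below are hypothetical, not the authors' actual middleware API:

```python
# Converters are registered per (source, target) format pair.
converters = {}

def register(src, dst):
    def wrap(fn):
        converters[(src, dst)] = fn
        return fn
    return wrap

def convert(record, src, dst):
    if src == dst:
        return record
    if (src, dst) not in converters:
        raise ValueError(f"no converter registered from {src!r} to {dst!r}")
    return converters[(src, dst)](record)

@register("local", "dc")  # hypothetical local schema -> simple Dublin Core
def local_to_dc(rec):
    return {"dc:title": rec["name"], "dc:creator": rec["author"]}

dc_record = convert({"name": "Registry design", "author": "Doe"}, "local", "dc")
```

New formats can be introduced at runtime by registering further converter pairs, which is what lets such a framework "tolerate heterogeneity".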
  11. Golub, K.; Moon, J.; Nielsen, M.L.; Tudhope, D.: EnTag: Enhanced Tagging for Discovery (2008) 0.00
    
    Abstract
    Purpose: Investigate the combination of controlled and folksonomy approaches to support resource discovery in repositories and digital collections. Aim: Investigate whether use of an established controlled vocabulary can help improve social tagging for better resource discovery. Objectives: (1) Investigate indexing aspects when using only social tagging versus when using social tagging with suggestions from a controlled vocabulary; (2) Investigate the above in two different contexts: tagging by readers and tagging by authors; (3) Investigate the influence of social tagging alone versus social tagging with a controlled vocabulary on retrieval. - Cf.: http://www.ukoln.ac.uk/projects/enhanced-tagging/.
  12. Farney, T.: Using Google Tag Manager to share code : Designing shareable tags (2019) 0.00
    
    Abstract
    Sharing code between libraries is not a new phenomenon and neither is Google Tag Manager (GTM). GTM launched in 2012 as a JavaScript and HTML manager with the intent of easing the implementation of different analytics trackers and marketing scripts on a website. However, it can be used to load other code using its tag system onto a website. It's a simple process to export and import tags facilitating the code sharing process without requiring a high degree of coding experience. The entire process involves creating the script tag in GTM, exporting the GTM content into a sharable export file for someone else to import into their library's GTM container, and finally publishing that imported file to push the code to the website it was designed for. This case study provides an example of designing and sharing a GTM container loaded with advanced Google Analytics configurations such as event tracking and custom dimensions for other libraries using the Summon discovery service. It also discusses processes for designing GTM tags for export, best practices on importing and testing GTM content created by other libraries and concludes with evaluating the pros and cons of encouraging GTM use.
  13. Hodges, D.W.; Schlottmann, K.: Better archival migration outcomes with Python and the Google Sheets API : Reporting from the archives (2019) 0.00
    
    Abstract
    Columbia University Libraries recently embarked on a multi-phase project to migrate nearly 4,000 records describing over 70,000 linear feet of archival material from disparate sources and formats into ArchivesSpace. This paper discusses tools and methods brought to bear in Phase 2 of this project, which required us to look closely at how to integrate a large number of legacy finding aids into the new system and merge descriptive data that had diverged in myriad ways. Using Python, XSLT, and a widely available if underappreciated resource, the Google Sheets API, archival and technical library staff devised ways to efficiently report data from different sources and present it in an accessible, user-friendly way. Responses were then fed back into automated data remediation processes to keep the migration project on track and minimize manual intervention. The scripts and processes developed proved very effective and, moreover, show promise well beyond the ArchivesSpace migration. This paper describes the Python/XSLT/Sheets API processes developed and how they opened a path to move beyond CSV-based reporting with flexible, ad-hoc data interfaces easily adaptable to meet a variety of purposes.
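The reporting step can be sketched as a pure data-shaping function that flattens heterogeneous record dicts into the row-major `{"values": [[...], ...]}` body that the Sheets API `spreadsheets.values.update` call accepts. The column names below are hypothetical, not the project's actual schema:

```python
# Hypothetical report columns for migrated finding-aid records.
COLUMNS = ["bib_id", "title", "extent", "source_format"]

def to_sheet_payload(records):
    """Flatten record dicts into a row-major ValueRange body for the
    Sheets API spreadsheets.values.update call; missing fields become ""."""
    rows = [COLUMNS]  # header row
    for rec in records:
        rows.append([str(rec.get(col, "")) for col in COLUMNS])
    return {"majorDimension": "ROWS", "values": rows}

payload = to_sheet_payload([
    {"bib_id": "4078882", "title": "Office files", "extent": "12 linear ft.",
     "source_format": "EAD"},
    {"bib_id": "4079001", "title": "Correspondence", "source_format": "MARC"},  # no extent
])
```

In the real workflow the payload would be passed to an authenticated Sheets API client; the shaping logic itself needs no network access, which keeps it easy to test.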
  14. Patton, M.; Reynolds, D.; Choudhury, G.S.; DiLauro, T.: Toward a metadata generation framework : a case study at Johns Hopkins University (2004) 0.00
    
    Abstract
    In the June 2003 issue of D-Lib Magazine, Kenney et al. (2003) discuss a comparative study between Cornell's email reference staff and Google's Answers service. This interesting study provided insights on the potential impact of "computing and simple algorithms combined with human intelligence" for library reference services. As mentioned in the Kenney et al. article, Bill Arms (2000) had discussed the possibilities of automated digital libraries in an even earlier D-Lib article. Arms discusses not only automating reference services, but also another library function that seems to inspire lively debates about automation: metadata creation. While intended to illuminate, these debates sometimes generate more heat than light. In an effort to explore the potential for automating metadata generation, the Digital Knowledge Center (DKC) of the Sheridan Libraries at The Johns Hopkins University developed and tested an automated name authority control (ANAC) tool. ANAC represents a component of a digital workflow management system developed in connection with the digital Lester S. Levy Collection of Sheet Music. The evaluation of ANAC followed the spirit of the Kenney et al. study that was, as they stated, "more exploratory than scientific." These ANAC evaluation results are shared with the hope of fostering constructive dialogue and discussions about the potential for semi-automated techniques or frameworks for library functions and services such as metadata creation. The DKC's research agenda emphasizes the development of tools that combine automated processes and human intervention, with the overall goal of involving humans at higher levels of analysis and decision-making. Others have looked at issues regarding the automated generation of metadata. A session at the 2003 Joint Conference on Digital Libraries was devoted to automatic metadata creation, and a session at the 2004 conference addressed automated name disambiguation.
Commercial vendors such as OCLC, Marcive, and LTI have long used automated techniques for matching names to Library of Congress authority records. We began developing ANAC as a component of a larger suite of open source tools to support workflow management for digital projects. This article describes the goals for the ANAC tool, provides an overview of the metadata records used for testing, describes the architecture for ANAC, and concludes with discussions of the methodology and evaluation of the experiment comparing human cataloging and ANAC-generated results.
  15. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.00
    
    Abstract
    This paper describes recent initiatives to make standard library metadata models and structures available to the Semantic Web, including IFLA standards such as Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and International Standard Bibliographic Description (ISBD) along with the infrastructure that supports them. The FRBR Review Group is currently developing representations of FRAD and the entity-relationship model of FRBR in resource description framework (RDF) applications, using a combination of RDF, RDF Schema (RDFS), Simple Knowledge Organisation System (SKOS) and Web Ontology Language (OWL), cross-relating both models where appropriate. The ISBD/XML Task Group is investigating the representation of ISBD in RDF. The IFLA Namespaces project is developing an administrative and technical infrastructure to support such initiatives and encourage uptake of standards by other agencies. The paper describes similar initiatives with related external standards such as RDA - resource description and access, REICAT (the new Italian cataloguing rules) and CIDOC Conceptual Reference Model (CRM). The DCMI RDA Task Group is working with the Joint Steering Committee for RDA to develop Semantic Web representations of RDA structural elements, which are aligned with FRBR and FRAD, and controlled metadata content vocabularies. REICAT is also based on FRBR, and an object-oriented version of FRBR has been integrated with CRM, which itself has an RDF representation. CRM was initially based on the metadata needs of the museum community, and is now seeking extension to the archives community with the eventual aim of developing a model common to the main cultural information domains of archives, libraries and museums. The Vocabulary Mapping Framework (VMF) project has developed a Semantic Web tool to automatically generate mappings between metadata models from the information communities, including publishers.
The tool is based on several standards, including CRM, FRAD, FRBR, MARC21 and RDA.
  16. Lagoze, C.: Keeping Dublin Core simple : Cross-domain discovery or resource description? (2001) 0.00
    
    Abstract
    Reality is messy. Individuals perceive or define objects differently. Objects may change over time, morphing into new versions of their former selves or into things altogether different. A book can give rise to a translation, derivation, or edition, and these resulting objects are related in complex ways to each other and to the people and contexts in which they were created or transformed. Providing a normalized view of such a messy reality is a precondition for managing information. From the first library catalogs, through Melvil Dewey's Decimal Classification system in the nineteenth century, to today's MARC encoding of AACR2 cataloging rules, libraries have epitomized the process of what David Levy calls "order making", whereby catalogers impose a veneer of regularity on the natural disorder of the artifacts they encounter. The pre-digital library within which the Catalog and its standards evolved was relatively self-contained and controlled. Creating and maintaining catalog records was, and still is, the task of professionals. Today's Web, in contrast, has brought together a diversity of information management communities, with a variety of order-making standards, into what Stuart Weibel has called the Internet Commons. The sheer scale of this context has motivated a search for new ways to describe and index information. Second-generation search engines such as Google can yield astonishingly good search results, while tools such as ResearchIndex for automatic citation indexing and techniques for inferring "Web communities" from constellations of hyperlinks promise even better methods for focusing queries on information from authoritative sources. Such "automated digital libraries," according to Bill Arms, promise to radically reduce the cost of managing information. Alongside the development of such automated methods, there is increasing interest in metadata as a means of imposing pre-defined order on Web content. 
While the size and changeability of the Web make professional cataloging impractical, a minimal amount of information ordering, such as that represented by the Dublin Core (DC), may vastly improve the quality of an automatic index at low cost; indeed, recent work suggests that some types of simple description may be generated with little or no human intervention.
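A "minimal amount of information ordering" in the simple DC sense amounts to a record with a handful of unqualified elements. The helper below (a sketch using only Python's standard library) emits such a simple Dublin Core description:

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"

def simple_dc(title, creator, date):
    """Build a minimal unqualified Dublin Core description as XML."""
    ET.register_namespace("dc", DC_NS)
    root = ET.Element("metadata")
    for tag, value in [("title", title), ("creator", creator), ("date", date)]:
        ET.SubElement(root, f"{{{DC_NS}}}{tag}").text = value
    return ET.tostring(root, encoding="unicode")

record_xml = simple_dc("Keeping Dublin Core simple", "Lagoze, C.", "2001")
```

Because every element is optional and repeatable in simple DC, even a sparsely populated record of this shape remains a valid cross-domain description.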