Search (70 results, page 1 of 4)

  • language_ss:"e"
  • theme_ss:"Metadaten"
  • type_ss:"el"
  1. Understanding metadata (2004) 0.03
    0.025516426 = product of:
      0.059538327 = sum of:
        0.035108197 = weight(_text_:u in 2686) [ClassicSimilarity], result of:
          0.035108197 = score(doc=2686,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.28942272 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
        0.004353387 = weight(_text_:a in 2686) [ClassicSimilarity], result of:
          0.004353387 = score(doc=2686,freq=2.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.10191591 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
        0.020076746 = product of:
          0.040153492 = sum of:
            0.040153492 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
              0.040153492 = score(doc=2686,freq=2.0), product of:
                0.12972787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03704574 = queryNorm
                0.30952093 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control and controlled vocabularies), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10.09.2004 10:22:40
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
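The score breakdown above is Lucene ClassicSimilarity "explain" output: each matching term contributes queryWeight × fieldWeight, partially matched clauses are scaled by coord factors, and the pieces sum to the headline score. A minimal sketch reproducing the first result's score from the figures reported in the listing (formula names follow Lucene's TFIDFSimilarity; only constants shown above are used):

```python
import math

# ClassicSimilarity building blocks (Lucene TFIDFSimilarity):
#   tf    = sqrt(termFreq)
#   idf   = 1 + ln(maxDocs / (docFreq + 1))
#   score = queryWeight * fieldWeight
#         = (idf * queryNorm) * (tf * idf * fieldNorm)
# All constants below are taken from the explain tree of result 1 (doc 2686).

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(term_freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(term_freq)
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = tf * i * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.03704574
MAX_DOCS = 44218

w_u  = term_score(2.0, 4547,  MAX_DOCS, QUERY_NORM, 0.0625)  # _text_:u
w_a  = term_score(2.0, 37942, MAX_DOCS, QUERY_NORM, 0.0625)  # _text_:a
w_22 = term_score(2.0, 3622,  MAX_DOCS, QUERY_NORM, 0.0625)  # _text_:22

# "22" sits one clause deeper, so it carries its own coord(1/2) = 0.5;
# the outer coord(3/7) scales the sum (3 of 7 query clauses matched).
total = (w_u + w_a + w_22 * 0.5) * (3.0 / 7.0)
print(round(total, 6))  # matches the reported 0.025516 (to rounding)
```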
  2. Apps, A.; MacIntyre, R.; Heery, R.; Patel, M.; Salokhe, G.: Zetoc : a Dublin Core Based Current Awareness Service (2002) 0.02
    
    Type
    a
  3. Baker, T.; Dekkers, M.; Heery, R.; Patel, M.; Salokhe, G.: What Terms Does Your Metadata Use? : Application Profiles as Machine-Understandable Narratives (2002) 0.02
    
    Type
    a
  4. McClelland, M.; McArthur, D.; Giersch, S.; Geisler, G.: Challenges for service providers when importing metadata in digital libraries (2002) 0.01
    
    Abstract
    Much of the usefulness of digital libraries lies in their ability to provide services for data from distributed repositories, and many research projects are investigating frameworks for interoperability. In this paper, we report on the experiences and lessons learned by iLumina after importing IMS metadata. iLumina utilizes the IMS metadata specification, which allows for a rich set of metadata (Dublin Core has a simpler metadata scheme that can be mapped onto a subset of the IMS metadata). Our experiences identify questions regarding intellectual property rights for metadata, protocols for enriched metadata, and tips for designing metadata services.
    Type
    a
  5. Mehler, A.; Waltinger, U.: Automatic enrichment of metadata (2009) 0.01
    
    Abstract
    In this talk we present a retrieval model based on social ontologies. More specifically, we utilize the Wikipedia category system in order to perform semantic searches. That is, textual input is used to build queries by means of which documents are retrieved which do not necessarily contain any query term but are semantically related to the input text by virtue of their content. We present a desktop which utilizes this search facility in a web-based environment - the so-called eHumanities Desktop.
  6. Stevens, G.: New metadata recipes for old cookbooks : creating and analyzing a digital collection using the HathiTrust Research Center Portal (2017) 0.01
    
    Abstract
    The Early American Cookbooks digital project is a case study in analyzing collections as data using HathiTrust and the HathiTrust Research Center (HTRC) Portal. The purposes of the project are to create a freely available, searchable collection of full-text early American cookbooks within the HathiTrust Digital Library, to offer an overview of the scope and contents of the collection, and to analyze trends and patterns in the metadata and the full text of the collection. The digital project has two basic components: a collection of 1450 full-text cookbooks published in the United States between 1800 and 1920 and a website to present a guide to the collection and the results of the analysis. This article will focus on the workflow for analyzing the metadata and the full-text of the collection. The workflow will cover: 1) creating a searchable public collection of full-text titles within the HathiTrust Digital Library and uploading it to the HTRC Portal, 2) analyzing and visualizing legacy MARC data for the collection using MarcEdit, OpenRefine and Tableau, and 3) using the text analysis tools in the HTRC Portal to look for trends and patterns in the full text of the collection.
    Type
    a
  7. Bearman, D.; Miller, E.; Rust, G.; Trant, J.; Weibel, S.: ¬A common model to support interoperable metadata : progress report on reconciling metadata requirements from the Dublin Core and INDECS/DOI communities (1999) 0.01
    
    Abstract
    The Dublin Core metadata community and the INDECS/DOI community of authors, rights holders, and publishers are seeking common ground in the expression of metadata for information resources. Recent meetings at the 6th Dublin Core Workshop in Washington DC sketched out common models for semantics (informed by the requirements articulated in the IFLA Functional Requirements for the Bibliographic Record) and conventions for knowledge representation (based on the Resource Description Framework under development by the W3C). Further development of detailed requirements is planned by both communities in the coming months with the aim of fully representing the metadata needs of each. An open "Schema Harmonization" working group has been established to identify a common framework to support interoperability among these communities. The present document represents a starting point identifying historical developments and common requirements of these perspectives on metadata and charts a path for harmonizing their respective conceptual models. It is hoped that collaboration over the coming year will result in agreed semantic and syntactic conventions that will support a high degree of interoperability among these communities, ideally expressed in a single data model and using common, standard tools.
    Type
    a
  8. Siripan, P.: Metadata and trends of cataloging in Thai libraries (1999) 0.01
    
    Abstract
    The status of cataloging in Thailand shows a movement toward the use of information technology. International standards for cataloging are being used and modified to organize information resources effectively. The scope of resources needing cataloging has expanded and now covers Web resources. The paper mentions Thailand's participation in the international working group on the use of metadata for libraries.
  9. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.01
    
    Abstract
    This paper describes recent initiatives to make standard library metadata models and structures available to the Semantic Web, including IFLA standards such as Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and International Standard Bibliographic Description (ISBD) along with the infrastructure that supports them. The FRBR Review Group is currently developing representations of FRAD and the entityrelationship model of FRBR in resource description framework (RDF) applications, using a combination of RDF, RDF Schema (RDFS), Simple Knowledge Organisation System (SKOS) and Web Ontology Language (OWL), cross-relating both models where appropriate. The ISBD/XML Task Group is investigating the representation of ISBD in RDF. The IFLA Namespaces project is developing an administrative and technical infrastructure to support such initiatives and encourage uptake of standards by other agencies. The paper describes similar initiatives with related external standards such as RDA - resource description and access, REICAT (the new Italian cataloguing rules) and CIDOC Conceptual Reference Model (CRM). The DCMI RDA Task Group is working with the Joint Steering Committee for RDA to develop Semantic Web representations of RDA structural elements, which are aligned with FRBR and FRAD, and controlled metadata content vocabularies. REICAT is also based on FRBR, and an object-oriented version of FRBR has been integrated with CRM, which itself has an RDF representation. CRM was initially based on the metadata needs of the museum community, and is now seeking extension to the archives community with the eventual aim of developing a model common to the main cultural information domains of archives, libraries and museums. The Vocabulary Mapping Framework (VMF) project has developed a Semantic Web tool to automatically generate mappings between metadata models from the information communities, including publishers. 
    The tool is based on several standards, including CRM, FRAD, FRBR, MARC21 and RDA.
  10. Riley, J.: Understanding metadata : what is metadata, and what is it for? (2017) 0.01
    
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  11. Miller, P.: Metadata for the masses (1995) 0.01
    
  12. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.01
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted which is usually done with the help of software developers as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes or funding agencies. As data curation is a typical task that is done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore we would like to present a web scraping tool that does not demand the digital library curators to program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath, that allows the user to define data to be extracted from websites in a declarative way. By taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
    Type
    a
  13. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.01
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
    Type
    a
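Entry 13's abstract describes populating batch-import CSV files with metadata retrieved from the CrossRef API via the Habanero Python library. A minimal sketch of the mapping step (the chosen CSV columns and the sample record are illustrative assumptions, not the authors' actual template; a real run would fetch the record with habanero rather than hard-coding it):

```python
import csv
import io

# Sketch of the CSV-population step: flatten a CrossRef "work" record into
# one CSV row. With the habanero library the record would come from:
#   from habanero import Crossref
#   message = Crossref().works(ids=doi)["message"]
# Here a hand-made sample stands in so the mapping logic is self-contained.

def work_to_row(message):
    """Flatten a CrossRef work message into a CSV-friendly dict."""
    authors = "; ".join(
        f"{a.get('family', '')}, {a.get('given', '')}"
        for a in message.get("author", [])
    )
    date_parts = message.get("issued", {}).get("date-parts", [[None]])[0]
    return {
        "doi": message.get("DOI", ""),
        "title": " ".join(message.get("title", [])),
        "authors": authors,
        "year": date_parts[0] if date_parts else "",
        "journal": " ".join(message.get("container-title", [])),
    }

# Illustrative sample shaped like a CrossRef REST API work record:
sample = {
    "DOI": "10.1000/example",
    "title": ["Preparing existing metadata for repository batch import"],
    "author": [{"family": "Roy", "given": "W."}, {"family": "Gray", "given": "C."}],
    "issued": {"date-parts": [[2018]]},
    "container-title": ["Example Journal"],
}

row = work_to_row(sample)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(row.keys()))
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```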
  14. Baker, T.: ¬A grammar of Dublin Core (2000) 0.01
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. 
    It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22
    Type
    a
  15. DC-2013: International Conference on Dublin Core and Metadata Applications : Online Proceedings (2013) 0.00
    
    Abstract
    The collocated conferences for DC-2013 and iPRES-2013 in Lisbon attracted 392 participants from over 37 countries. In addition to the Tuesday through Thursday conference days comprised of peer-reviewed paper and special sessions, 223 participants attended pre-conference tutorials and 246 participated in post-conference workshops for the collocated events. The peer-reviewed papers and presentations are available on the conference website Presentation page (URLs above). In sum, it was a great conference. In addition to links to PDFs of papers, project reports and posters (and their associated presentations), the published proceedings include presentation PDFs for the following: KEYNOTES Darling, we need to talk - Gildas Illien TUTORIALS -- Ivan Herman: "Introduction to Linked Open Data (LOD)" -- Steven Miller: "Introduction to Ontology Concepts and Terminology" -- Kai Eckert: "Metadata Provenance" -- Daniel Garjio: "The W3C Provenance Ontology" SPECIAL SESSIONS -- "Application Profiles as an Alternative to OWL Ontologies" -- "Long-term Preservation and Governance of RDF Vocabularies (W3C Sponsored)" -- "Data Enrichment and Transformation in the LOD Context: Poor & Popular vs Rich & Lonely--Can't we achieve both?" -- "Why Schema.org?"
    Content
    FULL PAPERS -- Provenance and Annotations for Linked Data - Kai Eckert -- How Portable Are the Metadata Standards for Scientific Data? A Proposal for a Metadata Infrastructure - Jian Qin, Kai Li -- Lessons Learned in Implementing the Extended Date/Time Format in a Large Digital Library - Hannah Tarver, Mark Phillips -- Towards the Representation of Chinese Traditional Music: A State of the Art Review of Music Metadata Standards - Mi Tian, György Fazekas, Dawn Black, Mark Sandler -- Maps and Gaps: Strategies for Vocabulary Design and Development - Diane Ileana Hillmann, Gordon Dunsire, Jon Phipps -- A Method for the Development of Dublin Core Application Profiles (Me4DCAP V0.1): A Description - Mariana Curado Malta, Ana Alice Baptista -- Find and Combine Vocabularies to Design Metadata Application Profiles using Schema Registries and LOD Resources - Tsunagu Honma, Mitsuharu Nagamori, Shigeo Sugimoto -- Achieving Interoperability between the CARARE Schema for Monuments and Sites and the Europeana Data Model - Antoine Isaac, Valentine Charles, Kate Fernie, Costis Dallas, Dimitris Gavrilis, Stavros Angelis -- With a Focused Intent: Evolution of DCMI as a Research Community - Jihee Beak, Richard P. Smiraglia -- Metadata Capital in a Data Repository - Jane Greenberg, Shea Swauger, Elena Feinstein -- DC Metadata is Alive and Well - A New Standard for Education - Liddy Nevile -- Representation of the UNIMARC Bibliographic Data Format in Resource Description Framework - Gordon Dunsire, Mirna Willer, Predrag Perozic
  16. Kaparova, N.; Shwartsman, M.: Creation of the electronic resources metadatabase in Russia : problems and prospects (2000)
    Type
    p
  17. Caplan, P.: International metadata initiatives : lessons in bibliographic control (2000)
  18. Broughton, V.: Automatic metadata generation : digital resource description without human intervention (2007)
    Date
    22. 9.2007 15:41:14
  19. Weibel, S.L.: Border crossings : reflections on a decade of metadata consensus building (2005)
    Abstract
    In June of this year, I performed my final official duties as part of the Dublin Core Metadata Initiative management team. It is a happy irony to affix a seal on that service in this journal, as both D-Lib Magazine and the Dublin Core celebrate their tenth anniversaries. This essay is a personal reflection on some of the achievements and lessons of that decade. The OCLC-NCSA Metadata Workshop took place in March of 1995, and as we tried to understand what it meant and who would care, D-Lib magazine came into being and offered a natural venue for sharing our work. I recall a certain skepticism when Bill Arms said "We want D-Lib to be the first place people look for the latest developments in digital library research." These were the early days in the evolution of electronic publishing, and the goal was ambitious. By any measure, a decade of high-quality electronic publishing is an auspicious accomplishment, and D-Lib (and its host, CNRI) deserve congratulations for having achieved their goal. I am grateful to have been a contributor. That first DC workshop led to further workshops, a community, a variety of standards in several countries, an ISO standard, a conference series, and an international consortium. Looking back on this evolution is both satisfying and wistful. While I am pleased that the achievements are substantial, the unmet challenges also provide a rich till in which to cultivate insights on the development of digital infrastructure.
    Type
    a
  20. Godby, C.J.; Young, J.A.; Childress, E.: A repository of metadata crosswalks (2004)
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
    Type
    a

Types

  • a 48
  • n 2
  • m 1
  • p 1
  • s 1