Search (80 results, page 1 of 4)

  • theme_ss:"Metadaten"
  • type_ss:"el"
  1. Understanding metadata (2004) 0.02
    Abstract
     Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control and controlled vocabularies), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  2. Heery, R.; Patel, M.: Application profiles : mixing and matching metadata schemas (2000) 0.02
    Type
    a
  3. Kuzma, M.: Are you able to find the maps you need? (2019) 0.02
    Type
    a
  4. Baker, T.; Dekkers, M.; Heery, R.; Patel, M.; Salokhe, G.: What Terms Does Your Metadata Use? : Application Profiles as Machine-Understandable Narratives (2002) 0.02
    Type
    a
  5. Apps, A.; MacIntyre, R.; Heery, R.; Patel, M.; Salokhe, G.: Zetoc : a Dublin Core Based Current Awareness Service (2002) 0.01
    Type
    a
  6. Sewing, S.: Bestandserhaltung und Archivierung : Koordinierung auf der Basis eines gemeinsamen Metadatenformates in den deutschen und österreichischen Bibliotheksverbünden (2021) 0.01
    Date
    22. 5.2021 12:43:05
    Location
    A
    Type
    a
  7. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.01
    Abstract
     In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository, UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods has been proposed for harvesting publication metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library to populate CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user to be able to access Web of Science and run the Python program, while still remaining flexible enough for local customizations. [See the sketch after this entry.]
    Date
    10.11.2018 16:27:22
    Type
    a
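The CrossRef-to-CSV step described in entry 7 can be pictured with a short, hedged sketch. It uses the Habanero client named in the abstract, but the DOI list, column names, and output file below are illustrative rather than the authors' actual template script.

```python
# Minimal sketch of populating a CSV with CrossRef metadata via Habanero
# (pip install habanero). DOIs, columns, and the file name are placeholders.
import csv
from habanero import Crossref

dois = ["10.1234/example-doi"]  # placeholder list of DOIs to look up
cr = Crossref()

with open("batch_import.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh)
    writer.writerow(["doi", "title", "authors", "year", "journal"])
    for doi in dois:
        msg = cr.works(ids=doi)["message"]  # one CrossRef record per DOI
        authors = "; ".join(
            f"{a.get('family', '')}, {a.get('given', '')}"
            for a in msg.get("author", [])
        )
        writer.writerow([
            msg.get("DOI", doi),
            (msg.get("title") or [""])[0],
            authors,
            msg.get("issued", {}).get("date-parts", [[None]])[0][0],
            (msg.get("container-title") or [""])[0],
        ])
```

Each row could then be enriched from a Web of Science TSV export before batch upload, as the abstract describes.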
  8. Greenberg, J.; Pattuelli, M.; Parsia, B.; Robertson, W.: Author-generated Dublin Core Metadata for Web Resources : A Baseline Study in an Organization (2002) 0.01
    Type
    a
  9. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.01
    Date
    22. 9.2007 15:41:14
  10. Baker, T.: ¬A grammar of Dublin Core (2000) 0.01
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22
    Type
    a
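To make the grammar metaphor in entry 10 concrete, here is a toy set of Dublin Core "statements" rendered in Python. The element and qualifier names follow common DC usage; the values are invented for illustration.

```python
# Toy Dublin Core description as a list of (element, qualifier, value) statements,
# echoing the "elements as nouns, qualifiers as adjectives" reading above.
statements = [
    ("dc:title",    None,       "A grammar of Dublin Core"),
    ("dc:creator",  None,       "Baker, Thomas"),
    ("dc:date",     "W3CDTF",   "2000-10"),
    ("dc:subject",  "DDC",      "025.3"),
    ("dc:language", "ISO639-2", "eng"),
]

# Every element is optional and repeatable, so a description is just however
# many statements happen to be present.
for element, qualifier, value in statements:
    label = f"{element} ({qualifier})" if qualifier else element
    print(f"{label}: {value}")
```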
  11. McClelland, M.; McArthur, D.; Giersch, S.; Geisler, G.: Challenges for service providers when importing metadata in digital libraries (2002) 0.01
    Abstract
     Much of the usefulness of digital libraries lies in their ability to provide services for data from distributed repositories, and many research projects are investigating frameworks for interoperability. In this paper, we report on the experiences and lessons learned by iLumina after importing IMS metadata. iLumina utilizes the IMS metadata specification, which allows for a rich set of metadata (Dublin Core has a simpler metadata scheme that can be mapped onto a subset of the IMS metadata). Our experiences identify questions regarding intellectual property rights for metadata, protocols for enriched metadata, and tips for designing metadata services. [See the crosswalk sketch after this entry.]
    Type
    a
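The relationship between IMS metadata and simple Dublin Core mentioned in entry 11 can be sketched as a crosswalk. The dotted IMS field paths below are rough approximations for illustration, not the exact IMS binding used by iLumina.

```python
# Illustrative crosswalk from simplified IMS/LOM-style fields to simple Dublin
# Core; only fields with an obvious DC counterpart survive the mapping.
IMS_TO_DC = {
    "general.title":        "dc:title",
    "general.description":  "dc:description",
    "general.keyword":      "dc:subject",
    "lifecycle.contribute": "dc:creator",
    "technical.format":     "dc:format",
    "rights.description":   "dc:rights",
}

def to_dublin_core(ims_record: dict) -> dict:
    """Keep only the IMS fields that map onto a simple DC element."""
    return {dc: ims_record[ims] for ims, dc in IMS_TO_DC.items() if ims in ims_record}

print(to_dublin_core({"general.title": "Intro to optics", "technical.format": "text/html"}))
```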
  12. Lynch, J.D.; Gibson, J.; Han, M.-J.: Analyzing and normalizing type metadata for a large aggregated digital library (2020) 0.01
    Abstract
     The Illinois Digital Heritage Hub (IDHH) gathers and enhances metadata from contributing institutions around the state of Illinois and provides this metadata to the Digital Public Library of America (DPLA) for greater access. The IDHH helps contributors shape their metadata to the standards recommended and required by the DPLA, in part by analyzing and enhancing aggregated metadata. In late 2018, the IDHH undertook a project to address a particularly problematic field, Type metadata. This paper walks through the project, detailing the process of gathering and analyzing metadata using the DPLA API and OpenRefine, data remediation through XSL transformations in conjunction with local improvements by contributing institutions, and the DPLA ingestion system's quality controls. [See the API sketch after this entry.]
    Type
    a
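A hedged sketch of the "gather and analyze Type metadata" step from entry 12, assuming the DPLA API v2 items endpoint, a valid API key, and illustrative query parameters; the OpenRefine and XSLT remediation stages mentioned in the abstract are not shown.

```python
# Sketch: pull aggregated records from the DPLA API and tally raw Type values.
# The api_key and query parameters are placeholders/assumptions.
from collections import Counter
import requests

API_KEY = "YOUR_DPLA_API_KEY"  # placeholder
resp = requests.get(
    "https://api.dp.la/v2/items",
    params={
        "api_key": API_KEY,
        "provider.name": "Illinois Digital Heritage Hub",  # illustrative filter
        "page_size": 500,
    },
    timeout=30,
)
resp.raise_for_status()

type_counts = Counter()
for doc in resp.json().get("docs", []):
    value = doc.get("sourceResource", {}).get("type", "MISSING")
    # Type may arrive as a string or a list in aggregated records.
    for v in (value if isinstance(value, list) else [value]):
        type_counts[v] += 1

print(type_counts.most_common())  # inconsistent spellings and gaps show up here
```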
  13. Suranofsky, M.; McColl, L.: A Google Sheets add-on that uses the WorldCat Search API : MatchMarc (2019) 0.01
    Abstract
     Lehigh University Libraries has developed a new tool for querying WorldCat using the WorldCat Search API. The tool is a Google Sheets add-on and is available now via the Google Sheets Add-ons menu under the name "MatchMarc." The add-on is easily customizable, with no knowledge of coding needed. The tool will return a single "best" OCLC record number and its bibliographic information for a given ISBN or LCCN, allowing the user to set up and define "best." Because all of the information, the input, the criteria, and the results exist in the Google Sheets environment, efficient workflows can be developed from this flexible starting point. This article will discuss the development of the add-on, how it works, and future plans for development. [See the lookup sketch after this entry.]
    Type
    a
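MatchMarc itself is a Google Sheets add-on, so the Python below is only a rough stand-in for the lookup behind entry 13. The classic WorldCat Search API endpoint and its wskey parameter are assumptions here, and the ISBN is illustrative.

```python
# Rough sketch of an ISBN lookup against the (classic) WorldCat Search API,
# pulling an OCLC number and title from the MARCXML response. The endpoint,
# wskey, and ISBN are assumptions/placeholders.
import xml.etree.ElementTree as ET
import requests

WSKEY = "YOUR_WSKEY"     # placeholder API key
isbn = "9780000000002"   # illustrative ISBN

resp = requests.get(
    f"https://www.worldcat.org/webservices/catalog/content/isbn/{isbn}",
    params={"wskey": WSKEY},
    timeout=30,
)
resp.raise_for_status()

ns = {"marc": "http://www.loc.gov/MARC21/slim"}
record = ET.fromstring(resp.content)
oclc_number = record.findtext(".//marc:controlfield[@tag='001']", namespaces=ns)
title = record.findtext(
    ".//marc:datafield[@tag='245']/marc:subfield[@code='a']", namespaces=ns
)
print(oclc_number, title)
```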
  14. Dekkers, M.; Weibel, S.L.: State of the Dublin Core Metadata Initiative April 2003 (2003) 0.01
    Abstract
    The Dublin Core Metadata Initiative continues to grow in participation and recognition as the predominant resource discovery metadata standard on the Internet. With its approval as ISO 15836, DC is firmly established as a foundation block of modular, interoperable metadata for distributed resources. This report summarizes developments in DCMI over the past year, including the annual conference, progress of working groups, new developments in encoding methods, and advances in documentation and dissemination. New developments in broadening the community to commercial users of metadata are discussed, and plans for an international network of national affiliates are described.
    Type
    a
  15. Dekkers, M.; Weibel, S.: Dublin Core Metadata Initiative Progress Report and Workplan for 2002 (2002) 0.01
    Abstract
    The Dublin Core Metadata Initiative (DCMI) progressed on many fronts in 2001, including launching important organizational changes, achievement of major objectives identified in the previous year, completion of ANSI standardization, and increased community participation and uptake of the standard. The annual workshop, held in Asia for the first time this past October, was broadened in scope to include a tutorial track and conference. This report summarizes the accomplishments and changes that have taken place in the Initiative during the past year and outlines the workplan for the coming year.
    Type
    a
  16. Dogtas, G.; Ibitz, M.-P.; Jonitz, F.; Kocher, V.; Poyer, A.; Stapf, L.: Kritik an rassifizierenden und diskriminierenden Titeln und Metadaten : Praxisorientierte Lösungsansätze (2022) 0.01
    Type
    a
  17. Baca, M.; O'Keefe, E.: Sharing standards and expertise in the early 21st century : Moving toward a collaborative, "cross-community" model for metadata creation (2008) 0.01
    Abstract
     This paper provides a brief overview of the evolving descriptive metadata landscape, one phenomenon of which can be characterized as "cross-community" metadata as manifested in records that are the result of a combination of carefully considered data value and data content standards. The online catalog of the Morgan Library & Museum provides a real-life illustration of how diverse data content standards and vocabulary tools can be integrated within the classic data structure/technical interchange format of MARC21 to better describe unique, museum-type objects, and to provide better end-user access and understanding. The Morgan experience also shows the value of developing a collaborative model for metadata creation that combines the subject expertise of curators and scholars with the cataloging expertise and knowledge of standards possessed by librarians.
  18. Buckland, M.; Chen, A.; Chen, H.M.; Kim, Y.; Lam, B.; Larson, R.; Norgard, B.; Purat, J.; Gey, F.: Mapping entry vocabulary to unfamiliar metadata vocabularies (1999) 0.01
    Abstract
     The emerging network environment brings access to an increasing population of heterogeneous repositories. Inevitably, these have quite diverse metadata vocabularies (categorization codes, classification numbers, index and thesaurus terms). So, necessarily, the number of metadata vocabularies that are accessible but unfamiliar to any individual searcher is increasing steeply. When an unfamiliar metadata vocabulary is encountered, how is a searcher to know which codes or terms will lead to what is wanted? This paper reports work at the University of California, Berkeley, on the design and development of English-language indexes to metadata vocabularies. Further details and the current status of the work can be found at the project website http://www.sims.berkeley.edu/research/metadata/ [See the sketch after this entry.]
    Type
    a
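The entry-vocabulary indexes described in entry 18 boil down to an association table that leads from familiar English search words to codes in an unfamiliar metadata vocabulary. A minimal sketch, with invented codes and weights:

```python
# Toy entry-vocabulary index: familiar words point to candidate codes of an
# unfamiliar vocabulary. Codes and association weights are invented.
ENTRY_INDEX = {
    "cotton":  [("0111", 0.82), ("2631", 0.41)],
    "textile": [("2631", 0.77), ("0111", 0.30)],
}

def suggest_codes(query: str, top_n: int = 3):
    """Rank candidate codes for every query word the index knows about."""
    scores: dict[str, float] = {}
    for word in query.lower().split():
        for code, weight in ENTRY_INDEX.get(word, []):
            scores[code] = scores.get(code, 0.0) + weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(suggest_codes("cotton textile imports"))  # ranks code '2631' ahead of '0111'
```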
  19. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.01
    Abstract
     Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes or funding agencies. As data curation is a typical task that is done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore we would like to present a web scraping tool that does not demand that digital library curators program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath, that allows the user to define the data to be extracted from websites in a declarative way. By taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax-highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process. [See the sketch after this entry.]
    Type
    a
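OXPath, the subject of entry 19, is its own XPath-based extraction language and is not reproduced here. The Python/lxml sketch below only mimics the declarative idea (XPath expressions in, records out) against an invented page layout.

```python
# Declarative-style scraping sketch with lxml as a stand-in for OXPath:
# XPath expressions select nodes, a comprehension emits records.
# The URL and class names are invented.
import requests
from lxml import html

PAGE = "https://repository.example.org/listing"  # hypothetical listing page
tree = html.fromstring(requests.get(PAGE, timeout=30).content)

records = [
    {
        "title":   pub.xpath("string(.//h2)").strip(),
        "authors": [a.strip() for a in pub.xpath(".//span[@class='author']/text()")],
        "year":    pub.xpath("string(.//span[@class='year'])").strip(),
    }
    for pub in tree.xpath("//div[@class='publication']")
]
print(records[:3])
```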
  20. Ruhl, M.: Do we need metadata? : an on-line survey in German archives (2012) 0.01
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschik et al

Languages

  • e 69
  • d 11

Types

  • a 56
  • n 2
  • m 1
  • p 1
  • s 1