Search (4 results, page 1 of 1)

  • theme_ss:"Formalerschließung"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
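  The three active filters above are ordinary Solr field queries: two string facets (the _ss suffix) and one integer range, where [2010 TO 2020} includes 2010 but excludes 2020. A minimal sketch of how such a filtered search might be issued against the index, assuming a hypothetical Solr endpoint and core name (everything except the three filter expressions is invented for illustration):

    import requests  # assumes the index is a reachable Solr instance

    SOLR_URL = "http://localhost:8983/solr/literature/select"  # hypothetical host and core name

    params = {
        "q": "*:*",
        # Each active facet becomes its own filter query (fq) restricting the result set.
        "fq": [
            'theme_ss:"Formalerschließung"',
            'type_ss:"el"',
            "year_i:[2010 TO 2020}",
        ],
        "rows": 10,
        "wt": "json",
        "debugQuery": "true",  # asks Solr for the per-document score explanations shown below
    }

    response = requests.get(SOLR_URL, params=params)
    for doc in response.json()["response"]["docs"]:
        print(doc.get("id"), doc.get("title"))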
  1. Behrens, R.; Aliverti, C.; Schaffner, V.: RDA in Germany, Austria and German-speaking Switzerland : a new standard not only for libraries (2016) 0.01
    0.0112056285 = product of:
      0.033616886 = sum of:
        0.033616886 = product of:
          0.06723377 = sum of:
            0.06723377 = weight(_text_:reports in 2954) [ClassicSimilarity], result of:
              0.06723377 = score(doc=2954,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.29857272 = fieldWeight in 2954, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2954)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The library community in Germany, Austria and German-speaking Switzerland achieved a common goal at the end of 2015. After more than two years of intensive preparation, the international standard RDA was implemented and the practical work has now started. The article describes the project in terms of the political and organizational situation in the three countries, and points out the objectives which have been achieved as well as the work which is still outstanding. An overview is given of the initial efforts to align special materials with RDA in the German-speaking countries, and the tasks associated with the specific requirements arising from the multilingual nature of Switzerland are described. Furthermore, the article reports on the current strategic developments in the international RDA committees like the RDA Steering Committee (RSC) and the European RDA Interest Group (EURIG).
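    The 0.01 displayed for each hit is the rounded result of Lucene's ClassicSimilarity (TF-IDF) scoring, whose full breakdown for the matching term "reports" is printed above. A minimal sketch that recomputes the score for result 1 purely from the factors listed in that explain tree (no other values are assumed):

      import math

      # Values copied from the explain tree for doc 2954, term "reports" in _text_
      idf = 4.503953           # idf = 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 1330)
      query_norm = 0.04999695  # query normalization factor
      field_norm = 0.046875    # field length normalization stored at index time
      tf = math.sqrt(2.0)      # tf = sqrt(termFreq) = sqrt(2.0) = 1.4142135

      query_weight = idf * query_norm            # 0.2251839
      field_weight = tf * idf * field_norm       # 0.29857272
      term_score = query_weight * field_weight   # 0.06723377

      # coord(1/2) and coord(1/3): only 1 of 2 clauses and 1 of 3 query parts matched
      score = term_score * (1 / 2) * (1 / 3)
      print(score)                               # ~0.0112056, displayed rounded to 0.01

    The other three hits follow the same pattern; only the term frequency, fieldNorm and, for result 4, the matched term ("22") differ.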
  2. Harlow, C.: Data munging tools in Preparation for RDF : Catmandu and LODRefine (2015) 0.01
    0.009338023 = product of:
      0.02801407 = sum of:
        0.02801407 = product of:
          0.05602814 = sum of:
            0.05602814 = weight(_text_:reports in 2277) [ClassicSimilarity], result of:
              0.05602814 = score(doc=2277,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.24881059 = fieldWeight in 2277, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2277)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Data munging, or the work of remediating, enhancing and transforming library datasets for new or improved uses, has become more important and staff-inclusive in many library technology discussions and projects. Often we know how we want our data to look, and how we want it to behave in discovery interfaces or when exposed, but we are uncertain how to turn the data we have into the data we want. This article introduces and compares two library data munging tools that can help: LODRefine (OpenRefine with the DERI RDF Extension) and Catmandu. The strengths and best practices of each tool are discussed in the context of metadata munging use cases for an institution's metadata migration workflow. The focus is on Linked Open Data modeling and transformation applications of each tool, in particular how metadata specialists, catalogers, and programmers can create metadata quality reports, enhance existing data with LOD sets, and transform that data into an RDF model. Integration of these tools with other systems and projects, the use of domain-specific transformation languages, and the expansion of vocabulary reconciliation services are also mentioned.
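    As an illustration of the last step described above, the sketch below maps a single catalog record to Dublin Core RDF triples and produces a trivial field-completeness report. It uses Python with rdflib instead of Catmandu or LODRefine, purely for brevity; the record, field names and base URI are invented and do not come from the article:

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DC, DCTERMS

      # Invented example record, as it might arrive from a catalog export
      record = {
          "id": "2277",
          "title": "Data munging tools in preparation for RDF",
          "creator": "Harlow, C.",
          "date": "2015",
          "subject": None,   # missing value, should surface in the quality report
      }

      # 1. Minimal metadata quality report: which fields are empty?
      missing = [field for field, value in record.items() if not value]
      print(f"record {record['id']}: missing fields -> {missing or 'none'}")

      # 2. Transform the record into an RDF graph using Dublin Core properties
      g = Graph()
      subj = URIRef(f"https://example.org/record/{record['id']}")  # invented base URI
      g.add((subj, DC.title, Literal(record["title"])))
      g.add((subj, DC.creator, Literal(record["creator"])))
      g.add((subj, DCTERMS.date, Literal(record["date"])))

      print(g.serialize(format="turtle"))

    Catmandu expresses this kind of mapping as a declarative "fix" script, and LODRefine as interactive transformations plus an RDF skeleton, but the underlying record-to-triples step is the same.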
  3. Eversberg, B.: Zum Thema "Migration" - Beispiel USA (2018) 0.01
    0.009338023 = product of:
      0.02801407 = sum of:
        0.02801407 = product of:
          0.05602814 = sum of:
            0.05602814 = weight(_text_:reports in 4386) [ClassicSimilarity], result of:
              0.05602814 = score(doc=4386,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.24881059 = fieldWeight in 4386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4386)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    We suffer from a regrettable scarcity of information about the market situation and about practical experience with library automation systems. On the other side of the "pond" things are different: every year the consultant Marshall Breeding publishes an extensive market survey and keeps a wealth of further reports and interesting overviews on his website, e.g. the "Migration Reports": https://librarytechnology.org/ (Library Technology Guides). There you can see in tables, for Koha for example, how many users of which systems have already migrated to Koha; the total is 3,547! To Alma, by contrast, only 1,151, to WMS 464, to Aleph 1,036. ("allegro-C" is not included, but then it never had any users in the USA apart from the Goethe-Instituts, and those are probably not counted.) Breeding's latest market survey, for 2018, has been published in the journal "American Libraries": https://americanlibrariesmagazine.org/2018/05/01/library-systems-report-2018/
  4. Delsey, T.: ¬The Making of RDA (2016) 0.01
    0.0067738965 = product of:
      0.02032169 = sum of:
        0.02032169 = product of:
          0.04064338 = sum of:
            0.04064338 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
              0.04064338 = score(doc=2946,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.23214069 = fieldWeight in 2946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2946)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    17. 5.2016 19:22:40