Search (11 results, page 1 of 1)

  • theme_ss:"Formalerschließung"
  • type_ss:"el"
  1. Eversberg, B.: KatalogRegeln
    Content
    See also the text of an email dated 4 February 2015: "Now that, after lengthy efforts, consortial access to the RDA Toolkit is at our disposal, we have once again thoroughly revised our introductory paper: http://www.allegro-c.de/regeln/rda/Vorwort_und_Einleitung.pdf. It consists of a preface of our own (the original has none) and our own translation of the introduction (the original is somewhat lacking in linguistic quality, because for presumably well-considered reasons its translators felt obliged to stay extremely close to the wording and phrasing of the English original). At the end there follows an afterword (likewise absent from the original, though afterwords are not customary in cataloging codes anyway) and a page with the most important links to significant resources. The German edition also has neither a title of its own nor a title-page equivalent. Presumably the intention was to provide, right away, an example of a somewhat difficult-to-catalog "integrating resource". Frankfurt has not managed it yet either; there one finds only the 2013 translation, by now outdated: http://d-nb.info/1021548286. Amazon still has 2 copies and lists 57 copies from other sellers, at a uniform price of 129.95 euros. A single-user Toolkit license costs 161 euros per year. The "Wiesenmüller-Horny" is due in March, currently quoted at 39.95 euros, plus 600 euros for the PDF. (De Gruyter holds the German distribution rights to everything concerning RDA, the publisher ALA Publishing's cash cow without alternative.) At least Haller-Popst has not yet landed on the remainder tables and is still in the trade at 59.95, used from 6.13, while RAK-WB is available only used, at 2.99." See also: http://www.basiswissen-rda.de/blog/
  2. Delsey, T.: ¬The Making of RDA (2016)
    Date
    17.5.2016 19:22:40
  3. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017)
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900 MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, address the technical challenges, analysis, solutions, and decisions, and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archives, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
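    The bulk-preparation step described here (checking roughly 900 MiB of variably encoded XML before ingest) is the kind of job that gets scripted. Below is a minimal pre-flight sketch, assuming a directory of EAD files and using lxml; it illustrates the general approach and is not the Harvard team's actual tooling.

```python
# Hypothetical pre-flight check for an EAD corpus before ingest:
# parse every file, record well-formedness failures, and tally
# element names so idiosyncratic encodings stand out.
import collections
import pathlib

from lxml import etree

def survey_corpus(root_dir: str) -> None:
    errors: list[tuple[str, str]] = []
    tag_counts: collections.Counter = collections.Counter()
    for path in pathlib.Path(root_dir).rglob("*.xml"):
        try:
            tree = etree.parse(str(path))
        except etree.XMLSyntaxError as exc:
            errors.append((path.name, str(exc)))
            continue
        for el in tree.iter():
            if isinstance(el.tag, str):  # skip comments and PIs
                tag_counts[etree.QName(el.tag).localname] += 1
    print(f"{len(errors)} files failed to parse")
    for name, msg in errors[:10]:
        print(f"  {name}: {msg}")
    print("Most common elements:", tag_counts.most_common(10))

if __name__ == "__main__":
    survey_corpus("ead_corpus")  # directory name is hypothetical
```

    A survey like this surfaces the two problem classes the authors name, outright parse failures and idiosyncratic element usage, before any migration logic is written.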
  4. Eversberg, B.: Zum Thema "Migration" - Beispiel USA (2018)
    Abstract
    For the systems KOHA and FOLIO there are the following current demos, which can be tried out with all functions. KOHA: complete demo application from Bywater Solutions: https://bywatersolutions.com/koha-demo (user = bywater / password = bywater). Recommended: Cataloguing, with the MARC forms and direct record retrieval via Z39.50. FOLIO (GBV: "The Next-Generation Library System"), demo: https://folio-demo.gbv.de/ (user = diku_admin / password = admin). Recommended: "Inventory", then the "New" button to start cataloging, then "Title Data" for a new record. This, however, still appears to be in a beta state. Also: FOLIO presentation, Göttingen, April 2018: https://www.zbw-mediatalk.eu/de/2018/05/folio-info-day-a-look-at-the-next-generation-library-system/
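    For readers who would rather script against the FOLIO demo than click through it, here is a minimal login sketch against FOLIO's Okapi gateway. The Okapi base URL and the tenant id "diku" are assumptions for illustration; the GBV demo may expose different values.

```python
# A minimal sketch of logging in to a FOLIO backend (Okapi) with the
# demo credentials above, then querying the Inventory module.
import requests

OKAPI_URL = "https://folio-demo.gbv.de/okapi"  # assumed endpoint
TENANT = "diku"                                # assumed tenant id

def okapi_login(username: str, password: str) -> str:
    """Return an X-Okapi-Token for subsequent API calls."""
    resp = requests.post(
        f"{OKAPI_URL}/authn/login",
        json={"username": username, "password": password},
        headers={"X-Okapi-Tenant": TENANT},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.headers["x-okapi-token"]

token = okapi_login("diku_admin", "admin")
# e.g. list a few instance records from Inventory:
inst = requests.get(
    f"{OKAPI_URL}/inventory/instances?limit=5",
    headers={"X-Okapi-Tenant": TENANT, "X-Okapi-Token": token},
    timeout=30,
)
print(inst.json())
```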
  5. Teal, W.: Alma enumerator : automating repetitive cataloging tasks with Python (2018)
    Abstract
    In June 2016, the Wartburg College library migrated to a new integrated library system, Alma. In the process, we lost the enumeration and chronology data for roughly 79,000 print serial item records. Re-entering all this data by hand seemed an unthinkable task. Fortunately, the information was recorded as free text in each item's description field. By using Python, Alma's API, and much trial and error, the Wartburg College library was able to parse the serial item descriptions into enumeration and chronology data that was uploaded back into Alma. This paper discusses the design and feasibility considerations addressed in trying to solve this problem, the complications encountered during development, and the highlights and shortcomings of the collection of Python scripts that became Alma Enumerator.
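    The core of the approach is pattern matching over the free-text description field. A minimal sketch of that parsing step follows; the regular expression and the output field names are hypothetical, and the real Alma Enumerator scripts handle many more description styles discovered by trial and error.

```python
# Sketch: pull enumeration (volume, issue) and chronology (year,
# month) out of a free-text serial item description.
import re

DESC = re.compile(
    r"v\.?\s*(?P<vol>\d+)"                 # enumeration A: volume
    r"(?:\s*no\.?\s*(?P<issue>\d+))?"      # enumeration B: issue
    r"(?:\s*\(\s*(?:(?P<month>[A-Za-z]+)\s+)?(?P<year>\d{4})\s*\))?",
    re.IGNORECASE,
)

def parse_description(desc: str):
    m = DESC.search(desc)
    if not m:
        return None  # leave unparseable records for manual review
    return {
        "enum_a": m["vol"],
        "enum_b": m["issue"],
        "chron_i": m["year"],
        "chron_ii": m["month"],
    }

print(parse_description("v.12 no.3 (Mar 1998)"))
# {'enum_a': '12', 'enum_b': '3', 'chron_i': '1998', 'chron_ii': 'Mar'}
```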
  6. Marcum, D.B.: ¬The future of cataloging (2005)
    Abstract
    This thought piece on the future of cataloging is long on musings and short on predictions. But that isn't to denigrate it, only to clarify its role given the possible connotations of the title. Rather than coming up with solutions or predictions, Marcum ponders the proper role of cataloging in a Google age. Marcum cites the Google project to digitize much or all of the contents of a selected set of major research libraries as evidence that the world of cataloging is changing dramatically, and she briefly identifies ways in which the Library of Congress is responding to this new environment. But, Marcum cautions, "the future of cataloging is not something that the Library of Congress, or even the small library group with which we will meet, can or expects to resolve alone." She then poses some specific questions that should be considered, including how we can massively change our current MARC/AACR2 system without creating chaos.
  7. Gatenby, J.; Thornburg, G.; Weitz, J.: Collected work clustering in WorldCat : three techniques for maintaining records (2015)
    Abstract
    WorldCat records are clustered into works, and within works, into content and manifestation clusters. A recent project revisited the clustering of collected works that had been previously sidelined because of the challenges posed by their complexity. Attention was given to both the identification of collected works and to the determination of the component works within them. By extensively analysing cast-list information, performance notes, contents notes, titles, uniform titles and added entries, the contents of collected works could be identified and differentiated so that correct clustering was achieved. Further work is envisaged in the form of refining the tests and weights and also in the creation and use of name/title authority records and other knowledge cards in clustering. There is a requirement to link collected works with their component works for use in search and retrieval.
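    The "tests and weights" can be pictured as a weighted similarity score computed over normalized record fields, with clustering decided by a tuned threshold. The sketch below is illustrative only: the fields, weights, and normalization are invented for the example and are not OCLC's actual algorithm.

```python
# Illustrative weighted-match score between two bibliographic records.
import re

WEIGHTS = {"uniform_title": 3.0, "title": 2.0, "contents_note": 1.0}

def normalize(text: str) -> set:
    """Lowercase, strip punctuation, return a bag of words."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def match_score(rec_a: dict, rec_b: dict) -> float:
    score = 0.0
    for field, weight in WEIGHTS.items():
        a = normalize(rec_a.get(field, ""))
        b = normalize(rec_b.get(field, ""))
        if a and b:
            overlap = len(a & b) / len(a | b)  # Jaccard similarity
            score += weight * overlap
    return score

a = {"title": "Romeo and Juliet; Othello",
     "contents_note": "Romeo and Juliet -- Othello"}
b = {"title": "Two tragedies",
     "contents_note": "Romeo and Juliet -- Othello"}
print(match_score(a, b))  # cluster the pair if above a tuned threshold
```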
  8. Heuvelmann, R.: FRBR-Strukturierung von MAB-Daten, oder : Wieviel MAB passt in FRBR? (2005)
    Abstract
    In the course of 2004, the expert group MAB-Ausschuss (since 2005: Expertengruppe Datenformate) examined FRBR and its relationship to the MAB format. A mapping table FRBR => MAB was produced (published at http://www.ddb.de/professionell/pdf/frbr_mab.pdf), and the key results were summarized in the article "Maschinelles Austauschformat für Bibliotheken und die Functional Requirements for Bibliographic Records : Oder: Wieviel FRBR verträgt MAB?" in "Bibliotheksdienst" 39 (2005), no. 10. In addition, the Arbeitsstelle Datenformate of Die Deutsche Bibliothek attempted to "FRBRize" MAB data, i.e. to differentiate individual MAB records into the four Group 1 entities (work / expression / manifestation / item). The goal was not to produce a finished OPAC building block for indexing, user guidance, or presentation. Rather, the goal was to make the layers visible on the basis of concrete data structured in MAB. The system selected for this purpose was BISMAS, the "Bibliographisches Informations-System zur Maschinellen Ausgabe und Suche" of BIS Oldenburg (www.bismas.de). In BISMAS it is possible, with relatively little effort, to freely define the presentation of a record based on the internally stored record structure, e.g. MAB. Indexes and output formats are designed in BISMAS using the programming language LM. The results are presented here by way of examples.
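    The differentiation step can be pictured as routing the fields of a single flat record into the four Group 1 entity layers. The toy sketch below uses hypothetical field names in place of the published FRBR => MAB mapping table.

```python
# Toy "FRBRization": assign each field of a flat record to one of the
# four Group 1 entity layers. Field names and assignments are
# hypothetical stand-ins for the FRBR => MAB table.
FIELD_TO_ENTITY = {
    "author": "work",
    "original_title": "work",
    "language": "expression",
    "translator": "expression",
    "edition": "manifestation",
    "publisher": "manifestation",
    "year": "manifestation",
    "shelfmark": "item",
}

def frbrize(flat_record: dict) -> dict:
    layers = {"work": {}, "expression": {}, "manifestation": {}, "item": {}}
    for field, value in flat_record.items():
        entity = FIELD_TO_ENTITY.get(field, "manifestation")  # default layer
        layers[entity][field] = value
    return layers

record = {"author": "Homer", "original_title": "Ilias", "language": "ger",
          "publisher": "Reclam", "year": "1986", "shelfmark": "A 123"}
for entity, fields in frbrize(record).items():
    print(entity, fields)
```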
  9. Gonzalez, L.: What is FRBR? (2005)
    Content
    What are these two Beowulf translations "expressions" of? I used the term work above, an even more abstract concept in the FRBR model. In this case, the "work" is Beowulf, that ancient intellectual creation or effort that over time has been expressed in multiple ways, each itself manifested in several different ways, with one or more items in each manifestation. This is a pretty gross oversimplification of FRBR, which also details other relationships: among these entities; between these entities and various persons (such as creators, publishers, and owners); and between these entities and their subjects. It also specifies characteristics, or "attributes," of the different types of entities (such as title, physical media, date, availability, and more). But it should be enough to grasp the possibilities. Now apply it. Imagine that you have a patron who needs a copy of Heaney's translation of Beowulf. She doesn't care who published it or when, only that it's Heaney's translation. What if you (or your patron) could place an interlibrary loan call on that expression, instead of looking through multiple bibliographic records (as of March, OCLC's WorldCat had nine regular print editions) for multiple manifestations and then judging which record is the best bet on which to place a request? Combine that with functionality that lets you specify "not Braille, not large print," and it could save you time. Now imagine a patron in want of a copy, any copy, in English, of Romeo and Juliet. Saving staff time means saving money. Whether or not this actually happens depends upon what the library community decides to do with FRBR. It is not a set of cataloging rules or a system design, but it can influence both. Several library system vendors are working with FRBR ideas; VTLS's current integrated library system product Virtua incorporates FRBR concepts in its design. More vendors may follow. How the Joint Steering Committee for Revision of Anglo-American Cataloging Rules develops the Anglo-American Cataloging Rules (AACR) to incorporate FRBR will necessarily be a strong determinant of how records work in a "FRBR-ized" bibliographic database.
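    The patron scenario above can be made concrete with a toy model of the Group 1 entities; the attribute names and sample manifestations below are illustrative only, not FRBR's normative attribute set.

```python
# Toy WEMI model: find Heaney's expression of Beowulf, then filter
# its manifestations by format ("not Braille, not large print").
from dataclasses import dataclass, field

@dataclass
class Manifestation:
    publisher: str
    year: int
    media: str  # e.g. "regular print", "braille", "large print"

@dataclass
class Expression:
    translator: str
    language: str
    manifestations: list = field(default_factory=list)

@dataclass
class Work:
    title: str
    expressions: list = field(default_factory=list)

beowulf = Work("Beowulf", [
    Expression("Seamus Heaney", "eng", [
        Manifestation("Farrar, Straus and Giroux", 2000, "regular print"),
        Manifestation("Thorndike Press", 2000, "large print"),
    ]),
])

heaney = next(e for e in beowulf.expressions
              if e.translator == "Seamus Heaney")
ok = [m for m in heaney.manifestations
      if m.media not in ("braille", "large print")]
print(ok)  # any of these manifestations satisfies the ILL request
```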
    National FRBR experiments: The larger the bibliographic database, the greater the effect of "FRBR-like" design in reducing the appearance of duplicate records. LC, RLG, and OCLC, all influenced by FRBR, are experimenting with the redesign of their databases. LC's Network Development and MARC Standards Office has posted at its web site the results of some of its investigations into FRBR and MARC, including possible display options for bibliographic information. The design of RLG's public catalog, RedLightGreen, has been described as "FRBR-ish" by Merrilee Proffitt, RLG's program officer. If you try a search for a prolific author or much-published title in RedLightGreen, you'll probably find that the display of search results is much different than what you would expect. OCLC Research has developed a prototype "FRBRized" database for fiction, OCLC FictionFinder. Try a title search for a classic title like Romeo and Juliet and observe that OCLC includes, in the initial display of results (described as "works"), a graphic indicator (stars, ranging from one to five). These show in rough terms how many libraries own the work; Romeo and Juliet clearly gets a five. Indicators like this are something resource sharing staff can consider an "ILL quality rating." If you're intrigued by FRBR's possibilities and what they could mean to resource sharing workflow, start talking. Now is the time to connect with colleagues, your local and/or consortial system vendor, RLG, OCLC, and your professional organizations. Have input into how systems develop in the FRBR world.
  10. Mimno, D.; Crane, G.; Jones, A.: Hierarchical catalog records : implementing a FRBR catalog (2005)
    Abstract
    IFLA's Functional Requirements for Bibliographic Records (FRBR) lay the foundation for a new generation of cataloging systems that recognize the difference between a particular work (e.g., Moby Dick), diverse expressions of that work (e.g., translations into German, Japanese and other languages), different versions of the same basic text (e.g., the Modern Library Classics vs. Penguin editions), and particular items (a copy of Moby Dick on the shelf). Much work has gone into finding ways to infer FRBR relationships between existing catalog records and modifying catalog interfaces to display those relationships. Relatively little work, however, has gone into exploring the creation of catalog records that are inherently based on the FRBR hierarchy of works, expressions, manifestations, and items. The Perseus Digital Library has created a new catalog that implements such a system for a small collection that includes many works with multiple versions. We have used this catalog to explore some of the implications of hierarchical catalog records for searching and browsing. Current online library catalog interfaces present many problems for searching. One commonly cited failure is the inability to find and collocate all versions of a distinct intellectual work that exist in a collection and the inability to take into account known variations in titles and personal names (Yee 2005). The IFLA Functional Requirements for Bibliographic Records (FRBR) attempts to address some of these failings by introducing the concept of multiple interrelated bibliographic entities (IFLA 1998). In particular, relationships between abstract intellectual works and the various published instances of those works are divided into a four-level hierarchy of works (such as the Aeneid), expressions (Robert Fitzgerald's translation of the Aeneid), manifestations (a particular paperback edition of Robert Fitzgerald's translation of the Aeneid), and items (my copy of a particular paperback edition of Robert Fitzgerald's translation of the Aeneid). In this formulation, each level in the hierarchy "inherits" information from the preceding level. Much of the work on FRBRized catalogs so far has focused on organizing existing records that describe individual physical books. Relatively little work has gone into rethinking what information should be in catalog records, or how the records should relate to each other. It is clear, however, that a more "native" FRBR catalog would include separate records for works, expressions, manifestations, and items. In this way, all information about a work would be centralized in one record. Records for subsequent expressions of that work would add only the information specific to each expression: Samuel Butler's translation of the Iliad does not need to repeat the fact that the work was written by Homer. This approach has certain inherent advantages for collections with many versions of the same works: new publications can be cataloged more quickly, and records can be stored and updated more efficiently.
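    The inheritance the authors describe can be sketched as records that store only their own level's attributes and defer everything else to their parent record; the layout below is illustrative, not the Perseus catalog's actual schema.

```python
# Sketch of hierarchical catalog records: an attribute lookup falls
# back up the WEMI chain, so Butler's Iliad need not repeat "Homer".
from typing import Optional

class HierRecord:
    def __init__(self, level: str, attrs: dict,
                 parent: Optional["HierRecord"] = None):
        self.level = level
        self.attrs = attrs
        self.parent = parent

    def get(self, name: str):
        """Look up an attribute here, then up the hierarchy."""
        if name in self.attrs:
            return self.attrs[name]
        return self.parent.get(name) if self.parent else None

work = HierRecord("work", {"title": "Iliad", "creator": "Homer"})
expr = HierRecord("expression",
                  {"translator": "Samuel Butler", "language": "eng"}, work)
manif = HierRecord("manifestation",
                   {"publisher": "Longmans, Green", "year": 1898}, expr)

print(manif.get("creator"))     # "Homer", inherited from the work record
print(manif.get("translator"))  # "Samuel Butler", from the expression
```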
  11. Report on the future of bibliographic control : draft for public comment (2007)
    Abstract
    The future of bibliographic control will be collaborative, decentralized, international in scope, and Web-based. Its realization will occur in cooperation with the private sector, and with the active collaboration of library users. Data will be gathered from multiple sources; change will happen quickly; and bibliographic control will be dynamic, not static. The underlying technology that makes this future possible and necessary, the World Wide Web, is now almost two decades old. Libraries must continue the transition to this future without delay in order to retain their relevance as information providers. The Working Group on the Future of Bibliographic Control encourages the library community to take a thoughtful and coordinated approach to effecting significant changes in bibliographic control. Such an approach will call for leadership that is neither unitary nor centralized. Nor will the responsibility to provide such leadership fall solely to the Library of Congress (LC). That said, the Working Group recognizes that LC plays a unique role in the library community of the United States, and the directions that LC takes have great impact on all libraries. We also recognize that there are many other institutions and organizations that have the expertise and the capacity to play significant roles in the bibliographic future. Wherever possible, those institutions must step forward and take responsibility for assisting with navigating the transition and for playing appropriate ongoing roles after that transition is complete. To achieve the goals set out in this document, we must look beyond individual libraries to a system-wide deployment of resources. We must realize efficiencies in order to be able to reallocate resources from certain lower-value components of the bibliographic control ecosystem into other higher-value components of that same ecosystem. The recommendations in this report are directed at a number of parties, indicated either by their common initialism (e.g., "LC" for Library of Congress, "PCC" for Program for Cooperative Cataloging) or by their general category (e.g., "Publishers," "National Libraries"). When the recommendation is addressed to "All," it is intended for the library community as a whole and its close collaborators.