Search (16 results, page 1 of 1)

  • theme_ss:"Formalerschließung"
  • type_ss:"el"
  1. Unkhoff-Giske, B.: Umfrage zur RDA-Einführung in der Universitätsbibliothek Trier (2018) 0.01
    0.0050416538 = product of:
      0.045374885 = sum of:
        0.045374885 = product of:
          0.09074977 = sum of:
            0.09074977 = weight(_text_:bewertung in 4343) [ClassicSimilarity], result of:
              0.09074977 = score(doc=4343,freq=2.0), product of:
                0.18575147 = queryWeight, product of:
                  6.31699 = idf(docFreq=216, maxDocs=44218)
                  0.02940506 = queryNorm
                0.48855478 = fieldWeight in 4343, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.31699 = idf(docFreq=216, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4343)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
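    The nested block above is Lucene's "explain" output for the ClassicSimilarity (TF-IDF) ranking. As a minimal sketch, and assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), the arithmetic for this first hit can be reproduced as follows; the constants are simply read off the explanation.

      # Sketch: reproducing the ClassicSimilarity arithmetic shown above for
      # result 1 (doc 4343, query term "bewertung"). Constants come from the
      # explain output; formulas are Lucene's classic TF-IDF similarity.
      import math

      freq       = 2.0          # occurrences of "bewertung" in the field
      doc_freq   = 216          # documents containing the term
      max_docs   = 44218        # documents in the index
      query_norm = 0.02940506   # query normalisation constant
      field_norm = 0.0546875    # stored length norm for this field

      tf  = math.sqrt(freq)                            # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 6.31699

      query_weight = idf * query_norm                  # 0.18575147 = queryWeight
      field_weight = tf * idf * field_norm             # 0.48855478 = fieldWeight
      term_score   = query_weight * field_weight       # 0.09074977

      # coord(1/2) and coord(1/9): one of two clauses, one of nine query terms matched
      final_score = term_score * (1 / 2) * (1 / 9)     # 0.0050416538, displayed rounded as 0.01
      print(final_score)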
    
    Abstract
    After almost a year of practical experience with the new code, a survey on the introduction of "Resource Description and Access (RDA)" was conducted at the Universitätsbibliothek Trier. It addressed everyday cataloguing work: confidence in handling RDA and the Toolkit, the supply of information, changes in workload, the assessment (Bewertung) of the RDA rules, in particular of the new principle of "cataloguer's judgement", and personal attitudes towards the change of cataloguing code. The results of the 20 questions are presented and analysed in the following article. The survey yielded surprisingly positive findings, but also revealed problem areas that require follow-up work.
  2. Eversberg, B.: KatalogRegeln 0.00
    0.0022649334 = product of:
      0.020384401 = sum of:
        0.020384401 = product of:
          0.040768802 = sum of:
            0.040768802 = weight(_text_:seite in 1657) [ClassicSimilarity], result of:
              0.040768802 = score(doc=1657,freq=2.0), product of:
                0.16469958 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.02940506 = queryNorm
                0.24753433 = fieldWeight in 1657, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1657)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Content
    Cf. also the text of an e-mail of 04.02.2015: "Now that, after lengthy efforts, consortial access to the RDA Toolkit is at our disposal, we have once again thoroughly revised our introductory paper: http://www.allegro-c.de/regeln/rda/Vorwort_und_Einleitung.pdf. It consists of a preface we wrote ourselves (the original has none) and our own translation of the introduction (the original is a little lacking in linguistic quality, because for well-considered reasons it was thought one should keep extremely close to the wording and phrasing of the original). At the end there is an afterword (again absent from the original, though that is not customary in cataloguing codes anyway) and a page (Seite) with the most important links to significant resources. The German edition also has no title of its own and no title-page equivalent. Presumably the intention was to provide, right away, an example of a somewhat difficult-to-catalogue "integrating resource". Frankfurt has not managed it yet either; there one finds only the 2013 translation, now outdated: http://d-nb.info/1021548286. Amazon still has 2 copies in stock and lists 57 copies from other sellers, uniform price 129.95 euros. Toolkit single licence: 161 euros/year. The "Wiesenmüller-Horny" is due in March, currently quoted at 39.95, plus 600 euros for the PDF. (deGruyter holds the German distribution rights to everything concerning RDA, the ALA Publishing cash cow to which there is no alternative.) At least Haller-Popst has not yet ended up on the remainder tables but is still in the trade at 59.95, used from 6.13, while RAK-WB is now only available used, at 2.99." Cf. also: http://www.basiswissen-rda.de/blog/
  3. Leresche, F.; Boulet, V.: RDA as a tool for the bibliographic transition : the French position (2016) 0.00
    0.0016311385 = product of:
      0.014680246 = sum of:
        0.014680246 = product of:
          0.029360492 = sum of:
            0.029360492 = weight(_text_:web in 2953) [ClassicSimilarity], result of:
              0.029360492 = score(doc=2953,freq=4.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.3059541 = fieldWeight in 2953, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2953)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    This article presents the process adopted in France to bring library catalogues to the Web of data, and the role of RDA in this overall strategy. After analysing the limits and inconsistencies of RDA, inherited from the tradition of AACR and MARC21 catalogues, the authors present the French approach to RDA and its positioning in relation to international standards such as ISBD and FRBR. The method adopted in France for FRBRising the catalogues proceeds through technical work on creating alignments between existing data, exploiting the technologies applied in the creation of data.bnf.fr, and through a revision of the French cataloguing rules that allows the creation of FRBRised metadata. This revision is based on RDA and is producing a French RDA application profile, with the analysis focused on the most significant differences. RDA adoption is in fact not a crucial issue in France, nor an end in itself; it is simply a tool for the transition of bibliographic data towards the Web of data.
  4. Forero, D.; Peterson, N.; Hamilton, A.: Building an institutional author search tool (2019) 0.00
    0.0013456206 = product of:
      0.012110585 = sum of:
        0.012110585 = product of:
          0.02422117 = sum of:
            0.02422117 = weight(_text_:web in 5441) [ClassicSimilarity], result of:
              0.02422117 = score(doc=5441,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.25239927 = fieldWeight in 5441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5441)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    The ability to collect time-specific lists of faculty publications has become increasingly important for academic departments. At OHSU, publication lists had been retrieved manually by a librarian who conducted literature searches in bibliographic databases. These searches were complicated and time-consuming, and the results were large and difficult to assess for accuracy. The OHSU library has built an open web page that allows novices to make very sophisticated institution-specific queries. The tool frees up library staff, provides users with an easy way of retrieving reliable local publication information from PubMed, and gives more sophisticated users an opportunity to modify the algorithm or dive into the data to better understand nuances from a strong jumping-off point.
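    The article does not spell out the query logic behind the web page; purely as an illustration, the sketch below shows the kind of institution-specific, date-bounded PubMed query such a tool automates, using NCBI's public E-utilities API. The affiliation string, date range, and function name are placeholders, not the OHSU implementation.

      # Illustrative sketch (not the OHSU tool): an affiliation-based, date-bounded
      # PubMed search via the NCBI E-utilities esearch endpoint.
      import json
      import urllib.parse
      import urllib.request

      ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

      def institutional_pmids(affiliation, start, end, retmax=200):
          """Return PubMed IDs with a matching affiliation within a publication-date range."""
          term = f'{affiliation}[Affiliation] AND ("{start}"[PDAT] : "{end}"[PDAT])'
          params = urllib.parse.urlencode({"db": "pubmed", "term": term,
                                           "retmode": "json", "retmax": retmax})
          with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
              return json.load(resp)["esearchresult"]["idlist"]

      # Example: 2018 publications with an "Oregon Health and Science University" affiliation
      print(institutional_pmids("Oregon Health and Science University",
                                "2018/01/01", "2018/12/31")[:10])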
  5. Delsey, T.: ¬The Making of RDA (2016) 0.00
    0.0013279931 = product of:
      0.011951938 = sum of:
        0.011951938 = product of:
          0.023903877 = sum of:
            0.023903877 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
              0.023903877 = score(doc=2946,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.23214069 = fieldWeight in 2946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2946)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    17. 5.2016 19:22:40
  6. Morris, S.R.; Wiggins, B.: Implementing RDA at the Library of Congress (2016) 0.00
    0.001153389 = product of:
      0.010380501 = sum of:
        0.010380501 = product of:
          0.020761002 = sum of:
            0.020761002 = weight(_text_:web in 2947) [ClassicSimilarity], result of:
              0.020761002 = score(doc=2947,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.21634221 = fieldWeight in 2947, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2947)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    The Toolkit designed by the RDA Steering Committee makes Resource Description and Access available on the web, together with other useful documents (workflows, mappings, etc.). The process of implementation of RDA by the Library of Congress, the National Agricultural Library, and the National Library of Medicine is presented. Each phase of development, testing, decision, preparation for implementation, and RDA training is fully and accurately described and discussed. The benefits of implementing RDA for the Library of Congress are identified and highlighted: more flexibility in cataloguing decisions, easier international sharing of cataloguing data, clearer linking among related works, closer cooperation with other libraries in the North American community, and the production of an online learning platform to deliver RDA training on a large scale and in real time to catalogers.
  7. Bianchini, C.; Guerrini, M.: RDA: a content standard to ensure the quality of data : history of a relationship (2016) 0.00
    0.001153389 = product of:
      0.010380501 = sum of:
        0.010380501 = product of:
          0.020761002 = sum of:
            0.020761002 = weight(_text_:web in 2948) [ClassicSimilarity], result of:
              0.020761002 = score(doc=2948,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.21634221 = fieldWeight in 2948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2948)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    RDA, Resource Description and Access, is a set of guidelines for the description of and access to resources, designed for the digital environment and released in its first version in 2010. RDA is based on FRBR and its derived models, which focus on users' needs and on resources of any kind of content, medium and carrier. The paper discusses the relevance of the main features of RDA for the future role of libraries in the context of the Semantic Web and of metadata creation and exchange. It aims to highlight the many consequences of RDA being a content standard, in particular the shift from record management to data management, the differences between the two functions realized by RDA (to identify and to relate entities) and the functions realized by other standards such as MARC21 (to archive data) and ISBD (to visualize data), and to show how, since all these functions are necessary for the catalogue, RDA needs to be integrated with other rules and standards, and how together these tools allow the fulfilment of the variation principle defined by S.R. Ranganathan.
  8. Escolano Rodrìguez, E.: RDA e ISBD : history of a relationship (2016) 0.00
    0.001153389 = product of:
      0.010380501 = sum of:
        0.010380501 = product of:
          0.020761002 = sum of:
            0.020761002 = weight(_text_:web in 2951) [ClassicSimilarity], result of:
              0.020761002 = score(doc=2951,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.21634221 = fieldWeight in 2951, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2951)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    This article attempts to clarify the nature of the relationship between the RDA and ISBD standards, in order to understand their differences and connections and to remove some misinterpretations about this relationship. With this objective, aspects that can account for their differences are analysed, such as the type of standard, point of view, scope, origin, and the policies of the group or organization in charge of their creation and development, all of which logically justify these differences. These differences have not been an obstacle to a correct relationship with the help of Linked Data technology. The article also gives an account of the work done on mappings and alignments between the standards in order to contribute properly to the Semantic Web. This knowledge is fundamental if current catalogers are to use the standards judiciously, knowledgeably and responsibly.
  9. Belpassi, E.: ¬The application software RIMMF : RDA thinking in action (2016) 0.00
    0.001153389 = product of:
      0.010380501 = sum of:
        0.010380501 = product of:
          0.020761002 = sum of:
            0.020761002 = weight(_text_:web in 2959) [ClassicSimilarity], result of:
              0.020761002 = score(doc=2959,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.21634221 = fieldWeight in 2959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2959)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    The RIMMF software grew out of the need to visualize and create records according to the RDA guidelines. The article describes the structure and features of the software through the creation of an r-ball, that is, a small database populated with bibliographic and authority records enriched by relationships between and among the entities involved. It first introduces the need that led to RIMMF and then moves to a functional analysis of the software, describing the main steps in building the r-ball and emphasizing the issues raised. The results highlight some critical aspects, but above all the wide scope of possible developments that open the horizon of cultural heritage institutions towards the web. The conclusions outline the RDF linked-data development planned for RIMMF in the near future.
  10. Edmunds, J.: Roadmap to nowhere : BIBFLOW, BIBFRAME, and linked data for libraries (2017) 0.00
    0.001153389 = product of:
      0.010380501 = sum of:
        0.010380501 = product of:
          0.020761002 = sum of:
            0.020761002 = weight(_text_:web in 3523) [ClassicSimilarity], result of:
              0.020761002 = score(doc=3523,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.21634221 = fieldWeight in 3523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3523)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    On December 12, 2016, Carl Stahmer and MacKenzie Smith presented at the CNI Members Fall Meeting about the BIBFLOW project, self-described on Twitter as "a two-year project of the UC Davis University Library and Zepheira investigating the future of library technical services." In her opening remarks, Ms. Smith, University Librarian at UC Davis, stated that one of the goals of the project was to devise a roadmap "to get from where we are today, which is kind of the 1970s with a little lipstick on it, to 2020, which is where we're going to be very soon." The notion that where libraries are today is somehow behind the times is one of the commonly heard rationales behind a move to linked data. Stated more precisely:
    - Libraries devote considerable time and resources to producing high-quality bibliographic metadata
    - This metadata is stored in unconnected silos
    - This metadata is in a format (MARC) that is incompatible with technologies of the emerging Semantic Web
    - The visibility of library metadata is diminished as a result of the two points above
    Are these assertions true? If yes, is linked data the solution?
  11. Report on the future of bibliographic control : draft for public comment (2007) 0.00
    9.988644E-4 = product of:
      0.008989779 = sum of:
        0.008989779 = product of:
          0.017979559 = sum of:
            0.017979559 = weight(_text_:web in 1271) [ClassicSimilarity], result of:
              0.017979559 = score(doc=1271,freq=6.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.18735787 = fieldWeight in 1271, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1271)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    The future of bibliographic control will be collaborative, decentralized, international in scope, and Web-based. Its realization will occur in cooperation with the private sector, and with the active collaboration of library users. Data will be gathered from multiple sources; change will happen quickly; and bibliographic control will be dynamic, not static. The underlying technology that makes this future possible and necessary, the World Wide Web, is now almost two decades old. Libraries must continue the transition to this future without delay in order to retain their relevance as information providers. The Working Group on the Future of Bibliographic Control encourages the library community to take a thoughtful and coordinated approach to effecting significant changes in bibliographic control. Such an approach will call for leadership that is neither unitary nor centralized. Nor will the responsibility to provide such leadership fall solely to the Library of Congress (LC). That said, the Working Group recognizes that LC plays a unique role in the library community of the United States, and the directions that LC takes have great impact on all libraries. We also recognize that there are many other institutions and organizations that have the expertise and the capacity to play significant roles in the bibliographic future. Wherever possible, those institutions must step forward and take responsibility for assisting with navigating the transition and for playing appropriate ongoing roles after that transition is complete. To achieve the goals set out in this document, we must look beyond individual libraries to a system-wide deployment of resources. We must realize efficiencies in order to be able to reallocate resources from certain lower-value components of the bibliographic control ecosystem into other higher-value components of that same ecosystem. The recommendations in this report are directed at a number of parties, indicated either by their common initialism (e.g., "LC" for Library of Congress, "PCC" for Program for Cooperative Cataloging) or by their general category (e.g., "Publishers," "National Libraries"). When the recommendation is addressed to "All," it is intended for the library community as a whole and its close collaborators.
    The Library of Congress must begin by prioritizing the recommendations that are directed in whole or in part at LC. Some define tasks that can be achieved immediately and with moderate effort; others will require analysis and planning that will have to be coordinated broadly and carefully. The Working Group has consciously not associated time frames with any of its recommendations. The recommendations fall into five general areas: 1. Increase the efficiency of bibliographic production for all libraries through increased cooperation and increased sharing of bibliographic records, and by maximizing the use of data produced throughout the entire "supply chain" for information resources. 2. Transfer effort into higher-value activity. In particular, expand the possibilities for knowledge creation by "exposing" rare and unique materials held by libraries that are currently hidden from view and, thus, underused. 3. Position our technology for the future by recognizing that the World Wide Web is both our technology platform and the appropriate platform for the delivery of our standards. Recognize that people are not the only users of the data we produce in the name of bibliographic control, but so too are machine applications that interact with those data in a variety of ways. 4. Position our community for the future by facilitating the incorporation of evaluative and other user-supplied information into our resource descriptions. Work to realize the potential of the FRBR framework for revealing and capitalizing on the various relationships that exist among information resources. 5. Strengthen the library profession through education and the development of metrics that will inform decision-making now and in the future. The Working Group intends what follows to serve as a broad blueprint for the Library of Congress and its colleagues in the library and information technology communities for extending and promoting access to information resources.
  12. Harlow, C.: Data munging tools in Preparation for RDF : Catmandu and LODRefine (2015) 0.00
    9.611576E-4 = product of:
      0.008650418 = sum of:
        0.008650418 = product of:
          0.017300837 = sum of:
            0.017300837 = weight(_text_:web in 2277) [ClassicSimilarity], result of:
              0.017300837 = score(doc=2277,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.18028519 = fieldWeight in 2277, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2277)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Theme
    Semantic Web
  13. Galeffi, A.; Sardo, A.L.: Cataloguing, a necessary evil : critical aspects of RDA (2016) 0.00
    9.611576E-4 = product of:
      0.008650418 = sum of:
        0.008650418 = product of:
          0.017300837 = sum of:
            0.017300837 = weight(_text_:web in 2952) [ClassicSimilarity], result of:
              0.017300837 = score(doc=2952,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.18028519 = fieldWeight in 2952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2952)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    The Toolkit designed by the RDA Steering Committee makes Resource Description and Access available on the web, together with other useful documents (workflows, mappings, etc.). Reading, learning and memorizing are interconnected, and a working tool should make these activities faster and easier to perform. Some issues arise, however, when verifying how easy the tool really is to use and to learn from. The practical and formal requirements for a cataloguing code include plain language, ease of memorisation, clarity of instructions, familiarity for users, predictability and reproducibility of solutions, and general usability. From a formal point of view, the RDA text does not appear to be conceived for uninterrupted reading, but rather for reading a few paragraphs at a time to meet immediate cataloguing needs. From a content point of view, getting a syndetic view of the description of a resource is rather difficult: descriptive details are scattered and re-organizing them is not easy. The visualisation and logical organisation of the Toolkit could be improved: the table of contents occupies a sizable portion of the screen, and resizing or hiding it is not easy; the indentation leaves little space for the words; inhomogeneous font styles (italic and bold) and poor contrast between background and text colours make reading difficult; simultaneous visualization of two or more parts of the text is not possible; and the Toolkit's icons are less intuitive than expected. In the conclusion, some suggestions are provided on how to improve the Toolkit's presentation and usability.
  14. Leresche, F.: Libraries and archives : sharing standards to facilitate access to cultural heritage (2008) 0.00
    7.6892605E-4 = product of:
      0.0069203344 = sum of:
        0.0069203344 = product of:
          0.013840669 = sum of:
            0.013840669 = weight(_text_:web in 1425) [ClassicSimilarity], result of:
              0.013840669 = score(doc=1425,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.14422815 = fieldWeight in 1425, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1425)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    This presentation shares the French experience of collaboration between archivists and librarians, led by working groups within the Association française de normalisation (AFNOR). With the arrival of the Web, the various heritage institutions are increasingly aware of their areas of commonality and the need for interoperability between their catalogues. This is particularly true for archives and libraries, which have developed standards to meet their specific needs regarding document description, but which are now seeking to establish a dialogue for defining a coherent set of standards to which professionals in both communities can refer. After discussing the characteristics of the collections held respectively in archives and libraries, this presentation will draw a portrait of the standards established by the two professional communities in the following areas:
    - description of documents
    - access points in descriptions and authority records
    - description of functions
    - identification of conservation institutions and collections
    It is concluded from this study that the standards developed by libraries on the one hand and by archives on the other are most often complementary, and that each professional community is being driven to use the standards developed by the other, or would at least profit from doing so. A dialogue between the two professions is seen today as a necessity for fostering the compatibility and interoperability of standards and documentary tools. Despite this recognition of the need for collaboration, the development of standards is still largely a compartmentalized process, and the fact that normative work is conducted within professional associations is a contributing factor. The French experience shows, however, that it is possible to create working groups where archivists and librarians unite and develop a comprehensive view of the standards and initiatives conducted by each, with the goal of articulating them as best they can for the purpose of interoperability, yet respecting the specific requirements of each.
  15. Babeu, A.: Building a "FRBR-inspired" catalog : the Perseus digital library experience (2008) 0.00
    7.6892605E-4 = product of:
      0.0069203344 = sum of:
        0.0069203344 = product of:
          0.013840669 = sum of:
            0.013840669 = weight(_text_:web in 2429) [ClassicSimilarity], result of:
              0.013840669 = score(doc=2429,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.14422815 = fieldWeight in 2429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2429)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    Our catalog should perhaps not be called a FRBR catalog, but rather a "FRBR-inspired catalog." As such, our main goal has been "practical findability": we are seeking to support the four identified user tasks of the FRBR model, to "Search, Identify, Select, and Obtain," rather than to create a FRBR catalog per se. By encoding as much information as possible in the MODS and MADS records we have created, we believe that useful searching will be supported; that by using unique identifiers for works and authors, users will be able to identify that the entity they have located is the desired one; that by encoding expression-level information (such as the language of the work, the translator, etc.), users will be able to select which expression of a work they are interested in; and that by supplying links to different online manifestations, users will be able to obtain access to a digital copy of a work. This white paper will discuss previous and current efforts by the Perseus Project in creating a FRBRized catalog, including the cataloging workflow and lessons learned during the process, and will also seek to place this work in the larger context of research regarding FRBR, cataloging, Library 2.0 and the Semantic Web, and the growing importance of the FRBR model in the face of growing million-book digital libraries.
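    The white paper describes what is encoded at each FRBR level rather than a concrete schema; purely as an illustration, the sketch below models the work/expression/manifestation grouping and the four user tasks it supports. All identifiers and field names are hypothetical, not the Perseus MODS/MADS encoding.

      # Illustrative sketch only: a minimal FRBR-inspired grouping as described above.
      # Identifiers and field names are hypothetical.
      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class Manifestation:
          url: str                         # link to an online copy ("obtain")

      @dataclass
      class Expression:
          language: str                    # e.g. "eng"; basis for "select"
          translator: Optional[str] = None
          manifestations: List[Manifestation] = field(default_factory=list)

      @dataclass
      class Work:
          work_id: str                     # unique work identifier ("identify")
          title: str                       # searchable title ("search")
          author_id: str                   # unique author identifier
          expressions: List[Expression] = field(default_factory=list)

      iliad = Work(work_id="urn:example:work:iliad", title="Iliad",
                   author_id="urn:example:author:homer",
                   expressions=[Expression(language="eng", translator="Samuel Butler",
                                           manifestations=[Manifestation("https://example.org/iliad-eng")])])
      print(iliad.work_id, iliad.expressions[0].language)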
  16. Gonzalez, L.: What is FRBR? (2005) 0.00
    3.8446303E-4 = product of:
      0.0034601672 = sum of:
        0.0034601672 = product of:
          0.0069203344 = sum of:
            0.0069203344 = weight(_text_:web in 3401) [ClassicSimilarity], result of:
              0.0069203344 = score(doc=3401,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.07211407 = fieldWeight in 3401, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3401)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Content
    National FRBR experiments: The larger the bibliographic database, the greater the effect of "FRBR-like" design in reducing the appearance of duplicate records. LC, RLG, and OCLC, all influenced by FRBR, are experimenting with the redesign of their databases. LC's Network Development and MARC Standards Office has posted at its web site the results of some of its investigations into FRBR and MARC, including possible display options for bibliographic information. The design of RLG's public catalog, RedLightGreen, has been described as "FRBR-ish" by Merrilee Proffitt, RLG's program officer. If you try a search for a prolific author or much-published title in RedLightGreen, you'll probably find that the display of search results is much different from what you would expect. OCLC Research has developed a prototype "frbrized" database for fiction, OCLC FictionFinder. Try a title search for a classic title like Romeo and Juliet and observe that OCLC includes, in the initial display of results (described as "works"), a graphic indicator (stars, ranging from one to five). These show in rough terms how many libraries own the work; Romeo and Juliet clearly gets a five. Indicators like this are something resource sharing staff can consider an "ILL quality rating." If you're intrigued by FRBR's possibilities and what they could mean to resource sharing workflow, start talking. Now is the time to connect with colleagues, your local and/or consortial system vendor, RLG, OCLC, and your professional organizations. Have input into how systems develop in the FRBR world.