Search (12 results, page 1 of 1)

  • × theme_ss:"Formalerschließung"
  • × type_ss:"el"
  • × year_i:[2010 TO 2020}
  1. Bianchini, C.; Guerrini, M.: RDA: a content standard to ensure the quality of data : history of a relationship (2016) 0.03
    0.027754819 = product of:
      0.06938705 = sum of:
        0.05304678 = weight(_text_:semantic in 2948) [ClassicSimilarity], result of:
          0.05304678 = score(doc=2948,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.2756298 = fieldWeight in 2948, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=2948)
        0.01634027 = product of:
          0.03268054 = sum of:
            0.03268054 = weight(_text_:web in 2948) [ClassicSimilarity], result of:
              0.03268054 = score(doc=2948,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.21634221 = fieldWeight in 2948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2948)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     RDA, Resource Description and Access, is a set of guidelines for the description of and access to resources, designed for the digital environment and released in its first version in 2010. RDA is based on FRBR and its derived models, which focus on users' needs and cover resources of any kind of content, medium and carrier. The paper discusses the relevance of RDA's main features for the future role of libraries in the context of the Semantic Web and of metadata creation and exchange. It highlights the many consequences of RDA being a content standard: in particular, the shift from record management to data management, and the differences between the two functions realized by RDA (to identify and to relate entities) and the functions realized by other standards such as MARC21 (to archive data) and ISBD (to visualize data). It shows how, since all these functions are necessary for the catalog, RDA needs to be complemented by other rules and standards, and how these tools together allow the fulfilment of the variation principle defined by S.R. Ranganathan.
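The nested breakdown shown for each result is Lucene's ClassicSimilarity explain output: a tf-idf weight per query term, scaled by queryNorm and fieldNorm and combined with coordination factors. As a quick check, the following Python sketch recomputes the displayed score for result 1 from the freq, idf, queryNorm and fieldNorm values in the tree above; the helper function and variable names are illustrative, not part of any library.

```python
import math

# Values copied from the ClassicSimilarity explain tree for doc 2948 above.
QUERY_NORM = 0.04628742
FIELD_NORM = 0.046875

def term_score(freq: float, idf: float) -> float:
    """One term's contribution: queryWeight * fieldWeight (classic tf-idf)."""
    tf = math.sqrt(freq)                    # 1.4142135 for freq=2.0
    query_weight = idf * QUERY_NORM         # 0.19245663 for "semantic"
    field_weight = tf * idf * FIELD_NORM    # 0.2756298 for "semantic"
    return query_weight * field_weight

semantic = term_score(2.0, 4.1578603)       # 0.05304678
web = term_score(2.0, 3.2635105) * 0.5      # inner coord(1/2) -> 0.01634027
score = (semantic + web) * 2 / 5            # outer coord(2/5): 2 of 5 query clauses matched
print(score)                                # ~0.027754819
```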
  2. Escolano Rodríguez, E.: RDA e ISBD : history of a relationship (2016) 0.03
    0.027754819 = product of:
      0.06938705 = sum of:
        0.05304678 = weight(_text_:semantic in 2951) [ClassicSimilarity], result of:
          0.05304678 = score(doc=2951,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.2756298 = fieldWeight in 2951, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=2951)
        0.01634027 = product of:
          0.03268054 = sum of:
            0.03268054 = weight(_text_:web in 2951) [ClassicSimilarity], result of:
              0.03268054 = score(doc=2951,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.21634221 = fieldWeight in 2951, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2951)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     This article attempts to clarify the nature of the relationship between the RDA and ISBD standards, in order to understand their differences and connections and to remove some misinterpretations about this relationship. To this end, the article analyzes aspects that account for their differences, such as the type of standard, point of view, scope, origin, and the policies of the group or organization in charge of their creation and development, all of which logically justify these differences. These differences have presented no obstacle to a sound relationship between the standards with the help of Linked Data technology. The article also gives an account of the work done on mappings and alignments between the standards so that they contribute properly to the Semantic Web. This knowledge is fundamental for current catalogers to use the standards judiciously, knowledgeably and responsibly.
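Such mappings and alignments are typically published as RDF triples linking elements of one standard to elements of the other. Purely as an illustration (the namespaces and property names below are placeholders, not the registered RDA or ISBD element URIs), a minimal rdflib sketch of this kind of alignment might look like this:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, SKOS

# Placeholder namespaces standing in for the registered RDA and ISBD element sets.
RDAE = Namespace("http://example.org/rda/elements/")
ISBDE = Namespace("http://example.org/isbd/elements/")

g = Graph()
g.bind("rdae", RDAE)
g.bind("isbde", ISBDE)

# Assert that the (hypothetical) "title proper" elements of the two standards align.
g.add((RDAE.titleProper, SKOS.exactMatch, ISBDE.hasTitleProper))
g.add((RDAE.titleProper, OWL.equivalentProperty, ISBDE.hasTitleProper))

print(g.serialize(format="turtle"))
```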
  3. Edmunds, J.: Roadmap to nowhere : BIBFLOW, BIBFRAME, and linked data for libraries (2017) 0.03
    0.027754819 = product of:
      0.06938705 = sum of:
        0.05304678 = weight(_text_:semantic in 3523) [ClassicSimilarity], result of:
          0.05304678 = score(doc=3523,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.2756298 = fieldWeight in 3523, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=3523)
        0.01634027 = product of:
          0.03268054 = sum of:
            0.03268054 = weight(_text_:web in 3523) [ClassicSimilarity], result of:
              0.03268054 = score(doc=3523,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.21634221 = fieldWeight in 3523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3523)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     On December 12, 2016, Carl Stahmer and MacKenzie Smith presented at the CNI Members Fall Meeting about the BIBFLOW project, self-described on Twitter as "a two-year project of the UC Davis University Library and Zepheira investigating the future of library technical services." In her opening remarks, Ms. Smith, University Librarian at UC Davis, stated that one of the goals of the project was to devise a roadmap "to get from where we are today, which is kind of the 1970s with a little lipstick on it, to 2020, which is where we're going to be very soon." The notion that where libraries are today is somehow behind the times is one of the commonly heard rationales behind a move to linked data. Stated more precisely:
     - Libraries devote considerable time and resources to producing high-quality bibliographic metadata
     - This metadata is stored in unconnected silos
     - This metadata is in a format (MARC) that is incompatible with technologies of the emerging Semantic Web
     - The visibility of library metadata is diminished as a result of the two points above
     Are these assertions true? If yes, is linked data the solution?
  4. Harlow, C.: Data munging tools in Preparation for RDF : Catmandu and LODRefine (2015) 0.02
    0.023129018 = product of:
      0.057822544 = sum of:
        0.04420565 = weight(_text_:semantic in 2277) [ClassicSimilarity], result of:
          0.04420565 = score(doc=2277,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.22969149 = fieldWeight in 2277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2277)
        0.013616893 = product of:
          0.027233787 = sum of:
            0.027233787 = weight(_text_:web in 2277) [ClassicSimilarity], result of:
              0.027233787 = score(doc=2277,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.18028519 = fieldWeight in 2277, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2277)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Theme
    Semantic Web
  5. Lee, W.-C.: Conflicts of semantic warrants in cataloging practices (2017) 0.02
    0.015313287 = product of:
      0.076566435 = sum of:
        0.076566435 = weight(_text_:semantic in 3871) [ClassicSimilarity], result of:
          0.076566435 = score(doc=3871,freq=6.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.39783734 = fieldWeight in 3871, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3871)
      0.2 = coord(1/5)
    
    Abstract
     This study presents preliminary themes that surfaced from an ongoing ethnographic study. The research question is: how and where do cultures influence the cataloging practices of using U.S. standards to catalog Chinese materials? The author applies warrant as a lens for evaluating knowledge representation systems, and extends its application from classificatory decisions to cataloging decisions. Semantic warrant as a conceptual tool allows us to recognize and name the various rationales behind cataloging decisions, and grants us explanatory power and the language to "visualize" and reflect on the conflicting priorities in cataloging practices. Through participatory observation, the author recorded the cataloging practices of two Chinese catalogers working on the same cataloging project. One of the catalogers is U.S.-trained; the other is a professor of Library and Information Science from China who is also a subject expert and a cataloger of Chinese special collections. The study shows how the catalogers describe Chinese special collections using many U.S. cataloging and classification standards but with different approaches. The author presents particular cases derived from the fieldwork, with an emphasis on the many layers presented by cultures, principles, standards, and practices of different scope, each of which may represent conflicting warrants. From this it becomes clear that conflicts of warrants influence cataloging practice. We may view the conflicting warrants as an interpretation of the tension between different semantic warrants and the globalization and localization of cataloging standards.
  6. Gatenby, J.; Thornburg, G.; Weitz, J.: Collected work clustering in WorldCat : three techniques for maintaining records (2015) 0.01
    0.0056153345 = product of:
      0.028076671 = sum of:
        0.028076671 = weight(_text_:retrieval in 2276) [ClassicSimilarity], result of:
          0.028076671 = score(doc=2276,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.20052543 = fieldWeight in 2276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2276)
      0.2 = coord(1/5)
    
    Abstract
    WorldCat records are clustered into works, and within works, into content and manifestation clusters. A recent project revisited the clustering of collected works that had been previously sidelined because of the challenges posed by their complexity. Attention was given to both the identification of collected works and to the determination of the component works within them. By extensively analysing cast-list information, performance notes, contents notes, titles, uniform titles and added entries, the contents of collected works could be identified and differentiated so that correct clustering was achieved. Further work is envisaged in the form of refining the tests and weights and also in the creation and use of name/title authority records and other knowledge cards in clustering. There is a requirement to link collected works with their component works for use in search and retrieval.
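The "tests and weights" mentioned here suggest a weighted-evidence approach to deciding whether records belong in the same cluster. The sketch below is only a toy illustration of that general idea, not the WorldCat algorithm; the field names, weights and threshold are invented for the example.

```python
# Toy weighted-evidence check for whether two records describe the same collected work.
# NOT the WorldCat algorithm: fields, weights and threshold are illustrative only.
FIELD_WEIGHTS = {
    "uniform_title": 0.4,
    "contents_note": 0.3,
    "cast_list": 0.2,
    "added_entries": 0.1,
}

def same_cluster(rec_a: dict, rec_b: dict, threshold: float = 0.6) -> bool:
    """Sum the weights of fields on which both records agree; cluster if above threshold."""
    score = sum(
        weight
        for field, weight in FIELD_WEIGHTS.items()
        if rec_a.get(field) and rec_a.get(field) == rec_b.get(field)
    )
    return score >= threshold
```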
  7. Leresche, F.; Boulet, V.: RDA as a tool for the bibliographic transition : the French position (2016) 0.00
    0.004621727 = product of:
      0.023108633 = sum of:
        0.023108633 = product of:
          0.046217266 = sum of:
            0.046217266 = weight(_text_:web in 2953) [ClassicSimilarity], result of:
              0.046217266 = score(doc=2953,freq=4.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.3059541 = fieldWeight in 2953, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2953)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     This article presents the process adopted in France to bring library catalogs to the Web of data and the role of RDA in this general strategy. After analysing the limits and inconsistencies of RDA, inherited from the tradition of AACR and MARC21 catalogues, the authors present the French approach to RDA and its positioning in relation to international standards such as ISBD and FRBR. The method adopted in France for FRBRising the catalogues goes through the technical work of creating alignments between existing data, exploiting the technologies applied to the creation of data.bnf.fr, and through a revision of the French cataloguing rules allowing the creation of FRBRised metadata. This revision is based on RDA and is setting up a French RDA application profile, focusing the analysis on the major differences. RDA adoption is in fact not a crucial issue in France, nor an end in itself; it is simply a tool for the transition of bibliographic data towards the Web of data.
  8. Forero, D.; Peterson, N.; Hamilton, A.: Building an institutional author search tool (2019) 0.00
    0.0038127303 = product of:
      0.019063652 = sum of:
        0.019063652 = product of:
          0.038127303 = sum of:
            0.038127303 = weight(_text_:web in 5441) [ClassicSimilarity], result of:
              0.038127303 = score(doc=5441,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.25239927 = fieldWeight in 5441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5441)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     The ability to collect time-specific lists of faculty publications has become increasingly important for academic departments. At OHSU, publication lists had been retrieved manually by a librarian who conducted literature searches in bibliographic databases. These searches were complicated and time consuming, and the results were large and difficult to assess for accuracy. The OHSU library has built an open web page that allows novices to make very sophisticated institution-specific queries. The tool frees up library staff, provides users with an easy way of retrieving reliable local publication information from PubMed, and gives more sophisticated users an opportunity to modify the algorithm or dive into the data to better understand nuances from a strong jumping-off point.
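The abstract does not describe the query mechanics, but an institution-specific PubMed search of this kind can be sketched with NCBI's public E-utilities; the function below, its parameters and the example affiliation string are assumptions for illustration, not the OHSU tool's actual implementation.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def institutional_pmids(affiliation: str, start: str, end: str) -> list[str]:
    """Return PubMed IDs whose affiliation field matches, within a publication-date range."""
    params = {
        "db": "pubmed",
        "term": f"{affiliation}[Affiliation]",   # affiliation-restricted search
        "datetype": "pdat",                      # filter on publication date
        "mindate": start,                        # e.g. "2018/01/01"
        "maxdate": end,                          # e.g. "2018/12/31"
        "retmax": 500,
        "retmode": "json",
    }
    data = requests.get(ESEARCH, params=params, timeout=30).json()
    return data["esearchresult"]["idlist"]

# Example (hypothetical affiliation string):
# institutional_pmids("Oregon Health and Science University", "2018/01/01", "2018/12/31")
```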
  9. Delsey, T.: The Making of RDA (2016) 0.00
    0.003762784 = product of:
      0.01881392 = sum of:
        0.01881392 = product of:
          0.03762784 = sum of:
            0.03762784 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
              0.03762784 = score(doc=2946,freq=2.0), product of:
                0.16209066 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04628742 = queryNorm
                0.23214069 = fieldWeight in 2946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2946)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    17. 5.2016 19:22:40
  10. Morris, S.R.; Wiggins, B.: Implementing RDA at the Library of Congress (2016) 0.00
    0.0032680542 = product of:
      0.01634027 = sum of:
        0.01634027 = product of:
          0.03268054 = sum of:
            0.03268054 = weight(_text_:web in 2947) [ClassicSimilarity], result of:
              0.03268054 = score(doc=2947,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.21634221 = fieldWeight in 2947, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2947)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     The Toolkit designed by the RDA Steering Committee makes Resource Description and Access available on the web, together with other useful documents (workflows, mappings, etc.). The process of implementing RDA at the Library of Congress, the National Agricultural Library, and the National Library of Medicine is presented. Each phase of development, testing, decision, preparation for implementation, and RDA training is fully and accurately described and discussed. The benefits of implementing RDA at the Library of Congress are identified and highlighted: more flexibility in cataloguing decisions, easier international sharing of cataloguing data, clearer linking among related works, closer cooperation with other libraries in the North American community, and the production of an online learning platform to deliver RDA training on a large scale, in real time, to catalogers.
  11. Belpassi, E.: The application software RIMMF : RDA thinking in action (2016) 0.00
    0.0032680542 = product of:
      0.01634027 = sum of:
        0.01634027 = product of:
          0.03268054 = sum of:
            0.03268054 = weight(_text_:web in 2959) [ClassicSimilarity], result of:
              0.03268054 = score(doc=2959,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.21634221 = fieldWeight in 2959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2959)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     The RIMMF software grew out of the need to visualize and create records according to the RDA guidelines. The article describes the software's structure and features in the creation of an r-ball, that is, a small database populated by records for bibliographic and authority resources, enriched by relationships between and among the entities involved. It first introduces the need that led to RIMMF, then moves to a functional analysis of the software, with a description of the main steps in building an r-ball and an emphasis on the issues raised. The results highlight some critical aspects, but above all the wide scope of possible developments that open the horizon of cultural heritage institutions to the web perspective. The conclusions outline the RDF linked-data development planned for RIMMF in the near future.
  12. Galeffi, A.; Sardo, A.L.: Cataloguing, a necessary evil : critical aspects of RDA (2016) 0.00
    0.0027233788 = product of:
      0.013616893 = sum of:
        0.013616893 = product of:
          0.027233787 = sum of:
            0.027233787 = weight(_text_:web in 2952) [ClassicSimilarity], result of:
              0.027233787 = score(doc=2952,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.18028519 = fieldWeight in 2952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2952)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     The Toolkit designed by the RDA Steering Committee makes Resource Description and Access available on the web, together with other useful documents (workflows, mappings, etc.). Reading, learning and memorizing are interconnected, and a working tool should make these activities faster and easier to perform. Some issues arise, however, when verifying how easy the tool really is to use and to learn from. The practical and formal requirements for a cataloguing code include plain language, ease of memorisation, clarity of instructions, familiarity for users, predictability and reproducibility of solutions, and general usability. From a formal point of view, the RDA text does not appear to be conceived for uninterrupted reading, but only for consulting a few paragraphs to meet immediate cataloguing needs. From a content point of view, gaining a syndetic view of the description of a resource is rather difficult: cataloguing details are scattered and reorganizing them is not easy. The visualisation and logical organisation of the Toolkit could be improved: the table of contents occupies a sizable portion of the screen and resizing or hiding it is not easy; the indentation leaves little space for the text; inhomogeneous font styles (italic and bold) and poor contrast between background and text colours make reading difficult; simultaneous display of two or more parts of the text is not possible; and the Toolkit's icons are less intuitive than expected. In conclusion, some suggestions are provided on how to improve the Toolkit's appearance and usability.