Search (306 results, page 1 of 16)

  • × theme_ss:"Formalerschließung"
  • × year_i:[2010 TO 2020}
  1. Gartner, R.: Metadata : shaping knowledge from antiquity to the semantic web (2016) 0.04
    0.0404981 = product of:
      0.12149429 = sum of:
        0.059322387 = weight(_text_:applications in 731) [ClassicSimilarity], result of:
          0.059322387 = score(doc=731,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34394607 = fieldWeight in 731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
        0.0140020205 = weight(_text_:of in 731) [ClassicSimilarity], result of:
          0.0140020205 = score(doc=731,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.22855641 = fieldWeight in 731, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
        0.048169892 = weight(_text_:software in 731) [ClassicSimilarity], result of:
          0.048169892 = score(doc=731,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30993375 = fieldWeight in 731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
      0.33333334 = coord(3/9)
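The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown of the per-term score. As a sanity check, the term weight can be reproduced from the constants shown in the tree; a minimal sketch in Python (function names are ours, not Lucene's, and the constants are copied from the explain output above):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                       # tf = sqrt(termFreq)
    idf_val = idf(doc_freq, max_docs)
    query_weight = idf_val * query_norm        # queryWeight = idf * queryNorm
    field_weight = tf * idf_val * field_norm   # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# weight(_text_:applications in 731): constants from the tree above
s = term_score(freq=4.0, doc_freq=1471, max_docs=44218,
               query_norm=0.03917671, field_norm=0.0390625)
print(round(s, 7))  # matches 0.059322387 from the explain tree to 7 decimals
```

The final document score is the sum of these per-term weights multiplied by the coord factor (here 3 of 9 query terms matched, hence coord(3/9) = 0.33333334).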
    
    Abstract
    This book offers a comprehensive guide to the world of metadata, from its origins in the ancient cities of the Middle East to the Semantic Web of today. The author takes us on a journey through the centuries-old history of metadata up to the modern world of crowdsourcing and Google, showing how metadata works and what it is made of. He explores how metadata has been used ideologically and why it can never be objective, and argues that it is central to human cultures and the way they develop. Metadata: Shaping Knowledge from Antiquity to the Semantic Web is for all readers with an interest in how we humans organize our knowledge and why this is important. It is suitable for those new to the subject as well as those who already know its basics, and it makes an excellent introduction for students of information science and librarianship.
    LCSH
    Application software
    Computer applications in arts and humanities
  2. Bénaud, C.-L.; Bordeianu, S.: OCLC's WorldShare Management Services : a brave new world for catalogers (2015) 0.03
    
    Abstract
    Like other recent library management systems, OCLC's WorldShare Management Services (WMS) is cloud-based. But unlike the others, WMS opens WorldCat for applications beyond its traditional role as a source of bibliographic records. It enables catalogers to work directly from the Master Record, which no longer needs to be exported to a local system. This article describes the impact of WMS on the roles and functions of cataloging departments, and asks whether it is changing the meaning of cataloging. It concludes that while workflows have changed dramatically, the profession of cataloging remains relevant.
  3. Martin, K.E.; Mundle, K.: Positioning libraries for a new bibliographic universe (2014) 0.03
    
    Abstract
    This paper surveys the English-language literature on cataloging and classification published during 2011 and 2012, covering both theory and application. A major theme of the literature centered on Resource Description and Access (RDA), as the period covered in this review includes the conclusion of the RDA test, revisions to RDA, and the implementation decision. Explorations in the theory and practical applications of the Functional Requirements for Bibliographic Records (FRBR), upon which RDA is organized, are also heavily represented. Library involvement with linked data, through the creation of prototypes and vocabularies, is explored further during the period. Other areas covered in the review include: classification, controlled vocabularies and name authority, evaluation and history of cataloging, special formats cataloging, cataloging and discovery services, non-AACR2/RDA metadata, cataloging workflows, and the education and careers of catalogers.
    Date
    10. 9.2000 17:38:22
  4. Aalberg, T.; Zumer, M.: ¬The value of MARC data, or, challenges of frbrisation (2013) 0.03
    
    Abstract
    Purpose - Bibliographic records should now be used in innovative end-user applications that enable users to learn about, discover and exploit available content, and this information should be interpreted and reused beyond the library domain as well. New conceptual models such as FRBR offer the foundation for such developments. The main motivation for this research is to contribute to the adoption of the FRBR model in future bibliographic standards and systems, by analysing limitations in existing bibliographic information and looking for short- and long-term solutions that can improve the data quality in terms of expressing the FRBR model. Design/methodology/approach - MARC records in three collections (BIBSYS catalogue, Slovenian National Bibliography and BTJ catalogue) were first analysed by looking at statistics of field and subfield usage to determine common patterns that express FRBR. Based on this, different rules for interpreting the information were developed. Finally, typical problems/errors found in MARC records were analysed. Findings - Different types of FRBR entity-relationship structures that typically can be found in bibliographic records are identified, and problems related to interpreting these from bibliographic records are analysed. Frbrisation of consistent and complete MARC records is relatively successful, particularly if all entities are systematically described and relationships among them are clearly indicated. Research limitations/implications - Advanced matching was not used for clustering of identical entities. Practical implications - Cataloguing guidelines are proposed to enable better frbrisation of MARC records in the interim period, before new formats are developed and implemented. Originality/value - This is the first in-depth analysis of manifestations embodying several expressions and of works and agents as subjects.
    Source
    Journal of documentation. 69(2013) no.6, S.851-872
  5. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.03
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, and address the technical challenges, analysis, solutions, and decisions and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archive, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
  6. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.03
    
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Stylesheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning information technology can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper will discuss the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
    Date
    10. 9.2000 17:38:22
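The schema transformation described in the abstract above is XSLT-driven; the same crosswalk idea can be sketched in Python using only the standard library. The toy source schema and field mapping below are invented for illustration and do not come from the paper:

```python
import xml.etree.ElementTree as ET

# Hypothetical crosswalk table: source-schema tags -> target-schema tags.
# A production crosswalk (e.g. MARCXML -> Dublin Core) would typically be
# expressed as an XSLT stylesheet, as the paper discusses.
FIELD_MAP = {"main_title": "title", "creator_name": "creator", "pub_year": "date"}

def crosswalk(source_xml: str) -> str:
    """Map each known source field to its target element; drop the rest."""
    src = ET.fromstring(source_xml)
    out = ET.Element("record")
    for child in src:
        target = FIELD_MAP.get(child.tag)
        if target is not None:  # fields with no mapping are discarded
            ET.SubElement(out, target).text = child.text
    return ET.tostring(out, encoding="unicode")

record = "<book><main_title>Metadata</main_title><pub_year>2016</pub_year></book>"
print(crosswalk(record))
# → <record><title>Metadata</title><date>2016</date></record>
```

In real workflows the mapping would also handle repeatable fields, attributes, and namespaces, which is where a declarative language like XSLT earns its keep.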
  7. Harlow, C.: Data munging tools in Preparation for RDF : Catmandu and LODRefine (2015) 0.03
    
    Abstract
    Data munging, or the work of remediating, enhancing and transforming library datasets for new or improved uses, has become more important and staff-inclusive in many library technology discussions and projects. Many times we know how we want our data to look, as well as how we want our data to act in discovery interfaces or when exposed, but we are uncertain how to make the data we have into the data we want. This article introduces and compares two library data munging tools that can help: LODRefine (OpenRefine with the DERI RDF Extension) and Catmandu. The strengths and best practices of each tool are discussed in the context of metadata munging use cases for an institution's metadata migration workflow. There is a focus on Linked Open Data modeling and transformation applications of each tool, in particular how metadataists, catalogers, and programmers can create metadata quality reports, enhance existing data with LOD sets, and transform that data to an RDF model. Integration of these tools with other systems and projects, the use of domain-specific transformation languages, and the expansion of vocabulary reconciliation services are mentioned.
  8. Breeding, M.: Next-generation discovery : an overview of the European scene (2013) 0.02
    
    Abstract
    In this chapter we will provide a brief overview of the features and general characteristics of this new genre of library software, focusing on the products that have been deployed or developed in the United Kingdom and other parts of Europe. Some of these projects include adoption of commercial products from international vendors such as Serials Solutions, EBSCO, Ex Libris, or OCLC and others involve locally-developed software or implementation of open source products.
    Source
    Catalogue 2.0: the future of the library catalogue. Ed. by Sally Chambers
  9. Belpassi, E.: ¬The application software RIMMF : RDA thinking in action (2016) 0.02
    
    Abstract
    The RIMMF software grew out of the need to visualize and create records according to the RDA guidelines. The article describes the software's structure and its features in the creation of an r-ball, that is, a small database populated with records of bibliographic and authority resources, enriched by relationships between and among the entities involved. It first introduces the need that led to RIMMF, then proceeds to a functional analysis of the software, with a description of the main steps of building the r-ball and an emphasis on the issues raised. The results highlight some critical aspects, but above all the wide scope of possible developments that open the horizon of cultural heritage institutions to the web. The conclusions outline the RDF/linked-data development planned for RIMMF's future.
  10. Catalogue 2.0 : the future of the library catalogue (2013) 0.02
    
    Abstract
    Will there be a library catalogue in the future and, if so, what will it look like? In the last 25 years, the library catalogue has undergone an evolution, from card catalogues to OPACs, discovery systems and even linked data applications making library bibliographic data accessible on the web. At the same time, users' expectations of what catalogues will be able to offer in the way of discovery have never been higher. This groundbreaking edited collection brings together some of the foremost international cataloguing practitioners and thought leaders, including Lorcan Dempsey, Emmanuelle Bermès, Marshall Breeding and Karen Calhoun, to provide an overview of the current state of the art of the library catalogue and look ahead to see what the library catalogue might become. Practical projects and cutting-edge concepts are showcased in discussions of linked data and the Semantic Web, user expectations and needs, bibliographic control, the FRBRization of the catalogue, innovations in search and retrieval, next-generation discovery products and mobile catalogues.
    Content
    Foreword - Marshall Breeding Introduction - Sally Chambers 1. Next generation catalogues: what do users think? - Anne Christensen 2. Making search work for the library user - Till Kinstler 3. Next-generation discovery: an overview of the European Scene - Marshall Breeding 4. The mobile library catalogue - Lukas Koster and Driek Heesakkers 5. FRBRizing your catalogue - Rosemie Callewaert 6. Enabling your catalogue for the semantic web - Emmanuelle Bermès 7. Supporting digital scholarship: bibliographic control, library co-operatives and open access repositories - Karen Calhoun 8. Thirteen ways of looking at libraries, discovery and the catalogue: scale, workflow, attention - Lorcan Dempsey.
  11. Biswas, S.: Reflections of Ranganathan's normative principles of cataloging in RDA (2015) 0.02
    
    Abstract
    Unlike its predecessor, the Anglo-American Cataloguing Rules, Second Edition (AACR2), Resource Description and Access (RDA) incorporates principles and objectives at the beginning of the code. This article is a comparative study of the practical applications of the principles of RDA and the Normative Principles of cataloging of S. R. Ranganathan. It finds that the instructions of RDA comply far more closely with Ranganathan's scientific principles than with the RDA principles recorded at the beginning of the code. The outcome of the study is presented in two ways: a tabular presentation at the beginning, followed by analytical discussion.
  12. Guerrini, M.; Possemato, T.: From record management to data management : RDA and new application models BIBFRAME, RIMMF, and OliSuite/WeCat (2016) 0.02
    
    Abstract
    The reflection provoked by RDA has produced the awareness that the flat MARC 21 record format is inadequate for expressing the relationships between bibliographic entities that the FRBR model and the RDA standard consider fundamental. RIMMF and BIBFRAME show software developers a way of thinking that is consistent with RDA. In Italy, @Cult, a software house and bibliographic agency working for Casalini Libri, has taken on the task of supporting and facilitating the transition: OliSuite/WeCat provides an implementation of RDA that integrates vocabularies and ontologies already present on the Web by structuring information as linked open data.
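As an illustration of what "structuring information as linked open data" can look like, here is a minimal, hypothetical Turtle fragment expressing a Work/Instance pair with BIBFRAME 2.0 vocabulary. The URIs and literal values are invented for the sketch; this is not actual OliSuite/WeCat output.

```turtle
@prefix bf:   <http://id.loc.gov/ontologies/bibframe/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .   # hypothetical local namespace

ex:work1 a bf:Work ;
    rdfs:label "Example work" ;
    bf:hasInstance ex:instance1 .

ex:instance1 a bf:Instance ;
    bf:instanceOf ex:work1 ;
    bf:title [ a bf:Title ; bf:mainTitle "Example work" ] ;
    bf:provisionActivity [ a bf:Publication ; bf:date "2016" ] .
```

Where a flat MARC record would carry the work and its publication details as fields of one record, the linked-data form makes the Work-to-Instance relationship an explicit, independently addressable statement.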
  13. Gu, B.: ISBD in China : the road to internationalization (2014) 0.02
    Abstract
    The article discusses the historical background, present status, and future perspectives of International Standard Bibliographic Description (ISBD) translations, research, and applications in China. It also analyzes the relationship between the ISBD and the Chinese Library Cataloging Rules, and the internationalization of Chinese library cataloging practices.
  14. McGrath, K.; Kules, B.; Fitzpatrick, C.: FRBR and facets provide flexible, work-centric access to items in library collections (2011) 0.02
    Abstract
    This paper explores a technique to improve searcher access to library collections by providing a faceted search interface built on a data model based on the Functional Requirements for Bibliographic Records (FRBR). The prototype provides a work-centric view of a moving image collection, integrated with bibliographic and holdings data. Two sets of facets address important user needs: "what do you want?" and "how/where do you want it?", enabling patrons to narrow, broaden, and pivot across facet values instead of limiting them to the tree-structured hierarchy common in existing FRBR applications. The data model illustrates how FRBR is being adapted and applied beyond the traditional library catalog.
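The two facet sets described above can be sketched as a filter-and-count loop over work-centric records. This is a minimal illustration of faceted narrowing, not the prototype's actual data model; the records and facet names are invented.

```python
# Hedged sketch of faceted search over work-centric records:
# narrow by selected facet values, then recount the remaining
# values of another facet so the user can pivot. Toy data only.
from collections import Counter

RECORDS = [
    {"work": "Hamlet", "genre": "drama", "format": "DVD", "branch": "Main"},
    {"work": "Hamlet", "genre": "drama", "format": "Blu-ray", "branch": "East"},
    {"work": "Macbeth", "genre": "drama", "format": "DVD", "branch": "Main"},
]

def narrow(records, **selected):
    """Keep only records matching every selected facet value."""
    return [r for r in records
            if all(r[f] == v for f, v in selected.items())]

def facet_counts(records, facet):
    """Value counts for one facet over the current result set."""
    return Counter(r[facet] for r in records)

# "What do you want?" -> pick the work; "how/where?" -> see formats left.
hits = narrow(RECORDS, work="Hamlet")
print(facet_counts(hits, "format"))  # → Counter({'DVD': 1, 'Blu-ray': 1})
```

Because narrowing and counting are independent operations, the user can drop one facet value and pick another (pivot) without restarting from a fixed hierarchy.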
  15. Taniguchi, S.: Is BIBFRAME 2.0 a suitable schema for exchanging and sharing diverse descriptive metadata about bibliographic resources? (2018) 0.01
    Abstract
    Knowledge organization systems have been studied in several fields and from different, complementary perspectives. Among the aspects of common interest, this article highlights those related to the terminological and conceptual relationships among the components of any knowledge organization system. The research aims to contribute to the critical analysis of knowledge organization systems, especially ontologies, thesauri, and classification systems, through an understanding of their similarities and differences in how they deal with concepts, how concepts relate to one another, and the conceptual design they adopt.
  16. Putz, M.; Schaffner, V.; Seidler, W.: FRBR: The MAB2 Perspective (2012) 0.01
    Abstract
    FRBRizing legacy data has been a subject of research since the FRBR model was published in 1998. Studies have mainly been conducted for MARC21, but in Austria MAB2, a data format based on the rules for descriptive cataloguing used in academic libraries mainly in Germany and Austria, is still in use. The implementation of Primo, an Ex Libris software product, made research into FRBRizing MAB2 records necessary, as Primo offers the possibility of building FRBR groups by clustering different manifestations of a work. This paper highlights the first steps of FRBRizing bibliographic records in MAB2 at the Vienna University Library and the challenges encountered in this context.
    Content
    Contribution to a special issue "The FRBR family of conceptual models: toward a linked future"
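The clustering step the abstract describes can be sketched as grouping manifestation-level records under a normalized creator/title key. This is a toy approximation of the kind of FRBR grouping Primo performs, with invented field names and records; real systems use far richer matching rules.

```python
# Hedged sketch: cluster manifestation records into FRBR-style work
# groups by a crude normalized (creator, title) key. Toy data only.
from collections import defaultdict
import re

def work_key(record):
    """Lowercase creator and title, strip punctuation, to form a work key."""
    norm = lambda s: re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()
    return (norm(record["creator"]), norm(record["title"]))

def frbrize(records):
    """Group manifestation records sharing a work key."""
    groups = defaultdict(list)
    for rec in records:
        groups[work_key(rec)].append(rec)
    return groups

records = [
    {"creator": "Kafka, Franz", "title": "Der Prozess", "year": "1925"},
    {"creator": "Kafka, Franz", "title": "Der Prozess.", "year": "1990"},
    {"creator": "Kafka, Franz", "title": "Das Schloss", "year": "1926"},
]
clusters = frbrize(records)
print(len(clusters))  # → 2: the two "Der Prozess" manifestations merge
```

The hard part in practice, and the source of the challenges the paper mentions, is choosing a key robust to cataloguing variation without merging distinct works.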
  17. Morse, T.: Mapping relationships : examining bibliographic relationships in sheet maps from Tillett to RDA (2012) 0.01
    Abstract
    This study presents a qualitative examination of the applicability of several taxonomies of bibliographic relationships to sheet maps. Examples of relationships between sheet maps are identified and typed using the systems developed by Tillett and Smiraglia and the taxonomy of relationships described in the Functional Requirements for Bibliographic Records (FRBR) conceptual model and in Resource Description and Access (RDA). This process reveals that while many of the relationship categories in these systems apply well to sheet maps, some are not applicable at all, while others may apply with some redefinition.
  18. Juola, P.; Mikros, G.K.; Vinsick, S.: ¬A comparative assessment of the difficulty of authorship attribution in Greek and in English (2019) 0.01
    Abstract
    Authorship attribution is an important problem in text classification, with many applications and a substantial body of research activity. Among the research findings is that many different methods work, including a number that are superficially language-independent (such as an analysis of the most common "words" or "character n-grams" in a document). Since all languages have words (and all written languages have characters), such methods could in theory work on any language. However, it is not clear that the methods that work best on, for example, English would also work best on other languages. It is not even clear that the same level of performance is achievable in different languages, even under identical conditions. Unfortunately, it is very difficult to achieve "identical conditions" in practice. A new corpus, developed by George Mikros, provides very tight controls not only for author but also for topic, enabling a direct comparison of performance levels between the two languages Greek and English. We compare a number of different methods head-to-head on this corpus and show that, overall, performance on English is higher than performance on Greek, often highly significantly so.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.1, S.61-70
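The character n-gram approach the abstract mentions can be sketched as follows. This is a generic illustration of the technique, with toy texts and invented author labels; it is not one of the specific methods compared in the paper.

```python
# Hedged sketch: character n-gram authorship attribution. Build a
# relative-frequency profile of character trigrams per candidate
# author, then attribute by cosine similarity. Toy corpus only.
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Relative frequencies of character n-grams in `text`."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def attribute(unknown, known_by_author, n=3):
    """Return the candidate author whose n-gram profile is closest."""
    u = ngram_profile(unknown, n)
    return max(known_by_author,
               key=lambda a: cosine(u, ngram_profile(known_by_author[a], n)))

known = {  # invented texts standing in for known-authorship samples
    "A": "the cat sat on the mat and the cat ate the rat",
    "B": "quantum flux oscillates; plasma vortex spins wildly",
}
print(attribute("the rat sat on the cat and the mat", known))  # → A
```

Because the features are raw character sequences, the same code runs unchanged on Greek text, which is exactly why such methods are called superficially language-independent; whether they perform equally well is the paper's question.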
  19. Potha, N.; Stamatatos, E.: Improving author verification based on topic modeling (2019) 0.01
    Abstract
    Authorship analysis attempts to reveal information about authors of digital documents, enabling applications in digital humanities, text forensics, and cyber-security. Author verification is a fundamental task where, given a set of texts written by a certain author, we should decide whether another text is also by that author. In this article we systematically study the usefulness of topic modeling in author verification. We examine several author verification methods that cover the main paradigms, namely, intrinsic (attempt to solve a one-class classification task) and extrinsic (attempt to solve a binary classification task) methods as well as profile-based (all documents of known authorship are treated cumulatively) and instance-based (each document of known authorship is treated separately) approaches combined with well-known topic modeling methods such as Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA). We use benchmark data sets and demonstrate that LDA is better combined with extrinsic methods, while the most effective intrinsic method is based on LSI. Moreover, topic modeling seems to be particularly effective for profile-based approaches, and performance is enhanced when latent topics are extracted from an enriched set of documents. The comparison to state-of-the-art methods demonstrates the great potential of the approaches presented in this study. It also demonstrates that even when genre-agnostic external documents are used, the proposed extrinsic models are very competitive.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.10, S.1074-1088
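The profile-based paradigm in an LSI latent space can be sketched roughly as below, assuming NumPy is available. The corpus, dimensionality `k`, and acceptance threshold are invented for illustration and do not reflect the paper's actual experimental setup or results.

```python
# Hedged sketch: profile-based author verification in an LSI latent
# space. Documents are projected via truncated SVD; the questioned
# document is accepted if it lies close to the cumulative profile
# of the known-authorship documents. All parameters are toy values.
import numpy as np

def term_doc_matrix(docs, vocab):
    """Raw term counts: one row per vocabulary term, one column per doc."""
    return np.array([[doc.split().count(t) for doc in docs] for t in vocab],
                    dtype=float)

def lsi_project(X, k):
    """Project the documents (columns of X) into a k-dim latent space."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T  # one row of coordinates per doc

def verify(known_docs, questioned, vocab, k=2, threshold=0.5):
    """Profile-based check: cosine of questioned doc vs. mean profile."""
    X = term_doc_matrix(known_docs + [questioned], vocab)
    Z = lsi_project(X, k)
    profile, q = Z[:-1].mean(axis=0), Z[-1]
    cos = float(profile @ q /
                (np.linalg.norm(profile) * np.linalg.norm(q) + 1e-12))
    return cos >= threshold

vocab = ["cat", "dog", "law", "tax"]
known = ["cat cat dog", "dog cat cat cat"]
print(verify(known, "cat dog dog", vocab))      # → True (same topics)
print(verify(known, "law tax tax law", vocab))  # → False (disjoint topics)
```

The "cumulative" treatment shows in `Z[:-1].mean(axis=0)`: all known documents collapse into one profile vector, as opposed to an instance-based method that would score the questioned document against each known document separately.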
  20. Mitchell, A.M.; Thompson, J.M.; Wu, A.: Agile cataloging : staffing and skills for a bibliographic future (2010) 0.01
    Abstract
    One of the foremost challenges facing technical services in academic libraries is integrating digital resources and services with existing work without a concomitant expansion of personnel. The library's bibliographic data are manipulated and delivered through myriad systems and services, including proxy servers, electronic resource management systems, federated search and link resolver tools, integrated library systems, bibliographic utilities, and dozens of external data providers. In this increasingly complex environment, libraries require flexible data management and flexible staffing, which in turn rely on a reservoir of informed staff and managers who understand the many pieces of the technical services puzzle. This article discusses efforts at the University of Houston Libraries, a mid-size research library, to enhance organizational capacity for evolving cataloging roles and to foster organizational relationships that support progress in technical services functions.

Authors

Languages

  • e 286
  • d 11
  • i 5
  • f 1

Types

  • a 271
  • el 45
  • m 18
  • n 5
  • b 4
  • ag 2
  • r 2
  • s 2
  • x 1

Subjects