Search (371 results, page 2 of 19)

  • theme_ss:"Formalerschließung"
  • year_i:[2010 TO 2020}
  1. Lee, S.; Jacob, E.K.: ¬An integrated approach to metadata interoperability : construction of a conceptual structure between MARC and FRBR (2011) 0.02
    Abstract
    Machine-Readable Cataloging (MARC) is currently the most broadly used bibliographic standard for encoding and exchanging bibliographic data. However, MARC may not fully support representation of the dynamic nature and semantics of digital resources because of its rigid and single-layered linear structure. The Functional Requirements for Bibliographic Records (FRBR) model, which is designed to overcome the problems of MARC, does not provide sufficient data elements and adopts a predetermined hierarchy. A flexible structure for bibliographic data with detailed data elements is needed. Integrating MARC format with the hierarchical structure of FRBR is one approach to meet this need. The purpose of this research is to propose an approach that can facilitate interoperability between MARC and FRBR by providing a conceptual structure that can function as a mediator between MARC data elements and FRBR attributes.
    Date
    10. 9.2000 17:38:22
    Type
    a
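The mediating conceptual structure described in the abstract above maps flat MARC data elements onto the layered FRBR entities. A minimal sketch of that idea follows; the tag-to-attribute table is invented for illustration and is not the authors' actual mapping.

```python
# Hypothetical mediation table: MARC tag -> (FRBR entity, FRBR attribute).
# The tag choices are illustrative assumptions, not the article's mapping.
MARC_TO_FRBR = {
    "245": ("work", "title"),
    "250": ("manifestation", "edition_statement"),
    "300": ("manifestation", "extent"),
    "546": ("expression", "language"),
}

def mediate(marc_record):
    """Group flat MARC field values into a layered FRBR-style structure."""
    frbr = {"work": {}, "expression": {}, "manifestation": {}, "item": {}}
    for tag, value in marc_record.items():
        if tag in MARC_TO_FRBR:
            entity, attribute = MARC_TO_FRBR[tag]
            frbr[entity][attribute] = value
    return frbr

record = {"245": "Moby Dick", "300": "xii, 635 p.", "546": "English"}
print(mediate(record))
```

The point of the intermediate table is that neither side has to know the other's structure: changing the MARC encoding or the FRBR hierarchy only touches the crosswalk.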
  2. Knowlton, S.A.: Power and change in the US cataloging community (2014) 0.02
    Abstract
    The US cataloging community is an interorganizational network with the Library of Congress (LC) as the lead organization, which reserves to itself the power to shape cataloging rules. Peripheral members of the network who want to modify the rules or the network itself can use various strategies for organizational change that involve building ties to the decision makers at the hub of the network. The story of William E. Studwell's campaign for a subject heading code illustrates how some traditional scholarly methods of urging change, such as papers and presentations, are insufficient to achieve reform in an interorganizational network absent strategies to build alliances with the decision makers.
    Date
    10. 9.2000 17:38:22
    Type
    a
  3. Mugridge, R.L.; Edmunds, J.: Batchloading MARC bibliographic records (2012) 0.02
    Abstract
    Research libraries are using batchloading to provide access to many resources that they would otherwise be unable to catalog given the staff and other resources available. To explore how such libraries are managing their batchloading activities, the authors conducted a survey of the Association for Library Collections and Technical Services Directors of Large Research Libraries Interest Group member libraries. The survey addressed staffing, budgets, scope, workflow, management, quality standards, information technology support, collaborative efforts, and assessment of batchloading activities. The authors provide an analysis of the survey results along with suggestions for process improvements and future research.
    Date
    10. 9.2000 17:38:22
    Type
    a
  4. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.02
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning information technology can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper discusses the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
    Date
    10. 9.2000 17:38:22
    Type
    a
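The article above uses XSLT for schema-to-schema transformation; the same crosswalk idea can be sketched in standard-library Python, here turning a minimal MARCXML-like snippet into Dublin-Core-style elements. The field and element names are illustrative assumptions, not a complete crosswalk.

```python
# Stdlib sketch of a metadata transformation (the article itself uses XSLT).
import xml.etree.ElementTree as ET

SOURCE = """<record>
  <datafield tag="245"><subfield code="a">Metadata makeover</subfield></datafield>
  <datafield tag="100"><subfield code="a">Ilik, V.</subfield></datafield>
</record>"""

# Crosswalk: MARC tag -> Dublin Core element name (illustrative subset)
CROSSWALK = {"245": "title", "100": "creator"}

def transform(marc_xml):
    """Repurpose MARC-style datafields into Dublin-Core-style elements."""
    src = ET.fromstring(marc_xml)
    dc = ET.Element("dc")
    for field in src.findall("datafield"):
        element = CROSSWALK.get(field.get("tag"))
        if element is not None:
            ET.SubElement(dc, element).text = field.findtext("subfield")
    return ET.tostring(dc, encoding="unicode")

print(transform(SOURCE))
# -> <dc><title>Metadata makeover</title><creator>Ilik, V.</creator></dc>
```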
  5. D'Angelo, C.A.; Giuffrida, C.; Abramo, G.: ¬A heuristic approach to author name disambiguation in bibliometrics databases for large-scale research assessments (2011) 0.02
    Abstract
    National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because of almost overwhelming difficulties in identifying the true author of each publication. We will address this problem by presenting a heuristic approach to author name disambiguation in bibliometric datasets for large-scale research assessments. The application proposed concerns the Italian university system, comprising 80 universities and a research staff of over 60,000 scientists. The key advantage of the proposed approach is the ease of implementation. The algorithms are of practical application and have considerably better scalability and expandability properties than state-of-the-art unsupervised approaches. Moreover, the performance in terms of precision and recall, which can be further improved, seems thoroughly adequate for the typical needs of large-scale bibliometric research assessments.
    Date
    22. 1.2011 13:06:52
    Type
    a
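The rule-based flavor of the disambiguation approach described above can be sketched very simply: block publications on surname plus first initial, and attribute an author string to a research-staff entry only when the block is unambiguous. The data and rule below are invented for illustration; the authors' actual algorithm is considerably more elaborate.

```python
# Minimal sketch of blocking-based author name disambiguation.
# STAFF is an invented stand-in for a university research-staff register.
from collections import defaultdict

STAFF = [
    {"id": 1, "name": "Rossi, Mario", "university": "Bologna"},
    {"id": 2, "name": "Rossi, Marta", "university": "Torino"},
    {"id": 3, "name": "Bianchi, Anna", "university": "Bologna"},
]

def key(name):
    """Blocking key: lowercase surname + first initial ('Rossi, M.' -> 'rossi m')."""
    surname, _, rest = name.partition(",")
    return f"{surname.strip().lower()} {rest.strip()[:1].lower()}"

def disambiguate(author_string):
    """Return the staff id if the blocking key is unambiguous, else None."""
    block = defaultdict(list)
    for person in STAFF:
        block[key(person["name"])].append(person["id"])
    candidates = block.get(key(author_string), [])
    return candidates[0] if len(candidates) == 1 else None

print(disambiguate("Bianchi, A."))  # unambiguous -> 3
print(disambiguate("Rossi, M."))   # 'rossi m' matches two people -> None
```

Ambiguous blocks like "rossi m" are exactly where a real system brings in further evidence (affiliation, co-authors, subject field) before assigning the publication.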
  6. O'Neill, E.; Zumer, M.; Mixter, J.: FRBR aggregates : their types and frequency in library collections (2015) 0.02
    Abstract
    Aggregates have been a frequent topic of discussion among library science researchers. This study seeks to better understand aggregates through the analysis of a sample of bibliographic records and a review of the cataloging treatment of aggregates. The study focuses on determining how common aggregates are in library collections, what types of aggregates exist, how aggregates are described in bibliographic records, and the criteria for identifying aggregates from the information in bibliographic records. A sample of bibliographic records representing textual resources was taken from OCLC's WorldCat database. More than 20 percent of the sampled records represented aggregates, and more works were embodied in aggregates than in single-work manifestations. A variety of issues, including cataloging practices and the varying definitions of aggregates, made it difficult to accurately identify and quantify the presence of aggregates using only the information from bibliographic records.
    Date
    10. 9.2000 17:38:22
    Type
    a
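Estimating how common aggregates are from a random sample, as the study above does, amounts to a sample proportion with a margin of error. A small sketch of that arithmetic follows; the numbers are made up for illustration and are not the study's data.

```python
# Sample proportion with a normal-approximation 95% confidence interval.
import math

def proportion_ci(hits, n, z=1.96):
    """Return the sample proportion and its normal-approximation CI."""
    p = hits / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, (p - half_width, p + half_width)

# Invented numbers: 220 aggregates observed in a sample of 1,000 records
p, (low, high) = proportion_ci(hits=220, n=1000)
print(f"aggregates: {p:.1%} (95% CI {low:.1%} to {high:.1%})")
```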
  7. RDA Toolkit (1) (2017) 0.02
    Abstract
    On 14 February 2017 the new release of the RDA Toolkit<http://www.rdatoolkit.org/development/February2017release> was published. The release incorporates into the English text all changes from the fast-track process RSC/Sec/6<http://www.rda-rsc.org/sites/all/files/RSC-Sec-6.pdf>. Also included are updates to the LC-PCC PS (Library of Congress-Program for Cooperative Cataloging Policy Statements) and to the MLA Best Practices. Newly added are the policy statements of Library and Archives Canada, prepared in cooperation with the Bibliothèque et Archives nationales du Québec; both in the index and in the text they can be reached via a dark violet icon bearing the letters LAC/BAC-BAnQ. It is now also possible to display the policy statements in a bilingual view; for this, the Select Language and Dual View functions have been added to the toolbar under the "Resources" tab. In the German text, only changes to the application guidelines for the German-speaking countries (D-A-CH) have been incorporated; the corresponding overview (short and long versions) can be found in the RDA-Info-Wiki<https://wiki.dnb.de/x/1hLSBg>. The next release of the RDA Toolkit will appear in mid-April 2017 and will incorporate the proposals approved by the RDA Steering Committee (RSC) in November 2016. Incorporation of the changes from the fast-track documents RSC/Sec/4, RSC/Sec/5, and RSC/Sec/6 into the German translation is planned for August 2017.
    Content
    See also: http://www.aspb.de/de/rda/.
  8. Eversberg, B.: Zum Thema "Migration" - Beispiel USA (2018) 0.02
    Abstract
    For the systems KOHA and FOLIO, the following current demos are available and can be tried out with all functions. KOHA: complete demo installation from Bywater Solutions: https://bywatersolutions.com/koha-demo (user = bywater / password = bywater); recommended: Cataloguing, with the MARC forms and direct record retrieval via Z39. FOLIO (GBV: "The Next-Generation Library System"), demo: https://folio-demo.gbv.de/ (user = diku_admin / password = admin); recommended: "Inventory", then the "New" button to start cataloging, then "Title Data" for a new record; this part, however, still appears to be in a beta state. See also the FOLIO presentation from Göttingen, April 2018: https://www.zbw-mediatalk.eu/de/2018/05/folio-info-day-a-look-at-the-next-generation-library-system/
  9. Devaul, H.; Diekema, A.R.; Ostwald, J.: Computer-assisted assignment of educational standards using natural language processing (2011) 0.02
    Abstract
    Educational standards are a central focus of the current educational system in the United States, underpinning educational practice, curriculum design, teacher professional development, and high-stakes testing and assessment. Digital library users have requested that this information be accessible in association with digital learning resources to support teaching and learning as well as accountability requirements. Providing this information is complex because of the variability and number of standards documents in use at the national, state, and local level. This article describes a cataloging tool that aids catalogers in the assignment of standards metadata to digital library resources, using natural language processing techniques. The research explores whether the standards suggestor service would suggest the same standards as a human, whether relevant standards are ranked appropriately in the result set, and whether the relevance of the suggested assignments improve when, in addition to resource content, metadata is included in the query to the cataloging tool. The article also discusses how this service might streamline the cataloging workflow.
    Date
    22. 1.2011 14:25:32
    Type
    a
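The ranking step of a suggestion service like the one described above can be sketched with bag-of-words vectors and cosine similarity: the resource text is compared against each standards document and the closest standards are returned. The standards snippets below are invented, and a real system would use much richer NLP than whitespace tokenization.

```python
# Toy standards-suggestion ranking via cosine similarity over word counts.
from collections import Counter
import math

# Invented standards snippets for illustration
STANDARDS = {
    "NS.1": "students understand properties of matter and energy",
    "NS.2": "students collect and analyze data from experiments",
    "NS.3": "students describe the water cycle and weather patterns",
}

def vector(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest(resource_text, top=2):
    """Rank standards by cosine similarity to the resource description."""
    query = vector(resource_text)
    ranked = sorted(STANDARDS, key=lambda s: cosine(query, vector(STANDARDS[s])), reverse=True)
    return ranked[:top]

print(suggest("a lesson where students analyze data collected from lab experiments"))
```

Including the resource's metadata alongside its content, as the study tests, simply means concatenating more text into the query before vectorizing it.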
  10. DeZelar-Tiedman, C.: Exploring user-contributed metadata's potential to enhance access to literary works (2011) 0.02
    Abstract
    Academic libraries have moved toward providing social networking features, such as tagging, in their library catalogs. To explore whether user tags can enhance access to individual literary works, the author obtained a sample of individual works of English and American literature from the twentieth and twenty-first centuries from a large academic library catalog and searched them in LibraryThing. The author compared match rates, the availability of subject headings and tags across various literary forms, and the terminology used in tags versus controlled-vocabulary headings on a subset of records. In addition, she evaluated the usefulness of available LibraryThing tags for the library catalog records that lacked subject headings. Options for utilizing the subject terms available in sources outside the local catalog also are discussed.
    Date
    10. 9.2000 17:38:22
    Type
    a
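The terminology comparison in the study above (user tags versus controlled-vocabulary headings) boils down to measuring vocabulary overlap; a Jaccard coefficient over case-normalized terms is one simple way to sketch it. The sample terms are invented.

```python
# Jaccard overlap between a tag set and a subject-heading set.
def jaccard(tags, headings):
    a = {t.lower() for t in tags}
    b = {h.lower() for h in headings}
    return len(a & b) / len(a | b) if a | b else 0.0

tags = {"whaling", "classic", "Sea Stories"}
headings = {"Whaling", "Sea stories", "Adventure fiction"}
print(round(jaccard(tags, headings), 2))  # 0.5: 2 shared of 4 distinct terms
```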
  11. Sapon-White, R.: E-book cataloging workflows at Oregon State University (2014) 0.02
    Abstract
    Among the many issues associated with integrating e-books into library collections and services, the revision of existing workflows in cataloging units has received little attention. The experience of designing new workflows for e-books at Oregon State University Libraries since 2008 is described in detail from the perspective of three different sources of e-books. These descriptions highlight where the workflows applied to each vendor's stream differ. A workflow was developed for each vendor, based on the quality and source of available bibliographic records and the staff member performing the task. Involving cataloging staff as early as possible in the process of purchasing e-books from a new vendor ensures that a suitable workflow can be designed and implemented promptly. This keeps the representation of e-books in the library catalog from being delayed and increases the likelihood that users will readily find and use the resources the library has purchased.
    Date
    10. 9.2000 17:38:22
    Type
    a
  12. Chambers, S.; Myall, C.: Cataloging and classification : review of the literature 2007-8 (2010) 0.02
    Date
    10. 9.2000 17:38:22
    Type
    a
  13. Stalberg, E.; Cronin, C.: Assessing the cost and value of bibliographic control (2011) 0.02
    Date
    10. 9.2000 17:38:22
    Type
    a
  14. Martin, K.E.; Mundle, K.: Positioning libraries for a new bibliographic universe (2014) 0.02
    Abstract
    This paper surveys the English-language literature on cataloging and classification published during 2011 and 2012, covering both theory and application. A major theme of the literature centered on Resource Description and Access (RDA), as the period covered in this review includes the conclusion of the RDA test, revisions to RDA, and the implementation decision. Explorations in the theory and practical applications of the Functional Requirements for Bibliographic Records (FRBR), upon which RDA is organized, are also heavily represented. Library involvement with linked data through the creation of prototypes and vocabularies is explored further during the period. Other areas covered in the review include: classification, controlled vocabularies and name authority, evaluation and history of cataloging, special formats cataloging, cataloging and discovery services, non-AACR2/RDA metadata, cataloging workflows, and the education and careers of catalogers.
    Date
    10. 9.2000 17:38:22
    Type
    a
  15. Delsey, T.: ¬The Making of RDA (2016) 0.02
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40
    Type
    a
  16. Normore, L.F.: "Here be dragons" : a wayfinding approach to teaching cataloguing (2012) 0.02
    
    Abstract
    Teaching cataloguing requires the instructor to make strategic decisions about how to approach the variety and complexity of the field and to provide an adequate theoretical foundation while preparing students for their entry into the world of practice. Accompanying these challenges are the tactical demands of providing this instruction in a distance education environment. Rather than focusing on ways to support learners in catalogue record production, instructors may use a problem solving and decision making approach to instruction. In this paper, a way to conceptualize a decision making approach that builds on a foundation provided by theories of information navigation is described. This approach, which is called "wayfinding", teaches by having students learn to find their way in the sets of rules that are commonly used. The method focuses on instruction about the structural features of rule sets, providing basic definitions of what each of the "places" in the rule sets contain (e.g., "formatting personal names" in Chapter 22 of AACR2R) and about ways to navigate those structures, enabling students to learn not only about common rules but also about less well known cataloguing practices ("dragons"). It provides both pragmatic and pedagogical benefits and helps develop links between cataloguing practices and their theoretical foundations.
    Type
    a
  17. Coyle, K.: FRBR, before and after : a look at our bibliographic models (2016) 0.02
    
    Abstract
    This book looks at the ways that we define the things of the bibliographic world, and in particular how our bibliographic models reflect our technology and the assumed goals of libraries. There is, of course, a history behind this, as well as a present and a future. The first part of the book begins by looking at the concept of the 'work' in library cataloging theory, and how that concept has evolved since the mid-nineteenth century to date. Next it talks about models and technology, two areas that need to be understood before taking a long look at where we are today. It then examines the new bibliographic model called Functional Requirements for Bibliographic Records (FRBR) and the technical and social goals that the FRBR Study Group was tasked to address. The FRBR entities are analyzed in some detail. Finally, FRBR as an entity-relation model is compared to a small set of Semantic Web vocabularies that can be seen as variants of the multi-entity bibliographic model that FRBR introduced.
    Date
    12. 2.2016 16:22:58
  18. Savoy, J.: Estimating the probability of an authorship attribution (2016) 0.02
    
    Abstract
    In authorship attribution, various distance-based metrics have been proposed to determine the most probable author of a disputed text. In this paradigm, a distance is computed between each author profile and the query text. These values are then employed only to rank the possible authors. In this article, we analyze their distribution and show that we can model it as a mixture of 2 Beta distributions. Based on this finding, we demonstrate how we can derive a more accurate probability that the closest author is, in fact, the real author. To evaluate this approach, we have chosen 4 authorship attribution methods (Burrows' Delta, Kullback-Leibler divergence, Labbé's intertextual distance, and the naïve Bayes). As the first test collection, we have downloaded 224 State of the Union addresses (from 1790 to 2014) delivered by 41 U.S. presidents. The second test collection is formed by the Federalist Papers. The evaluations indicate that the accuracy rate of some authorship decisions can be improved. The suggested method can signal that the proposed assignment should be interpreted as possible, without strong certainty. Being able to quantify the certainty associated with an authorship decision can be a useful component when important decisions must be taken.
    Date
    7. 5.2016 21:22:27
    Type
    a
  19. Baga, J.; Hoover, L.; Wolverton, R.E.: Online, practical, and free cataloging resources (2013) 0.01
    
    Date
    10. 9.2000 17:38:22
    Type
    a
  20. RDA Toolkit (4) : Dezember 2017 (2017) 0.01
    
    Abstract
    On 12 December 2017 the new release of the RDA Toolkit was published. Owing to the 3R Project (RDA Toolkit Restructure and Redesign Project), there were no changes to the content of the RDA text itself. Only the Finnish and French translations, together with the associated policy statements, were updated. For the German-speaking countries, two relationship designators were changed in the translation: in Appendix I.2.2 the change from "Sponsor" to "Träger" was reversed, and in Appendix K.2.3 "Sponsor" was changed to "Person als Sponsor". In addition, the French translation of the application guidelines (D-A-CH AWR) was updated. This is the next-to-last release before the rollout of the new Toolkit. The final release, in January/February 2018, will contain the Norwegian translation. In June 2018 the RDA Toolkit will be relaunched with a new interface. The relaunch comprises a redesign of the Toolkit interface, the alignment of the RDA standard with the IFLA Library Reference Model (IFLA LRM), and a stronger future orientation toward current technical possibilities. The English original edition of RDA will be the first to appear in the new form, in June 2018. All translations will be adapted during a transition period, for which the old version of the RDA Toolkit will remain available for a further year. The December 2017 state of the German edition and of the D-A-CH application guidelines will remain frozen until the adaptation is complete. Further information on the rollout can be found at the following link <http://www.rdatoolkit.org/3Rproject/SR3>. [Inetbib, 13 December 2017]
    Content
    See also: http://www.aspb.de/de/rda/.

Languages

  • e 294
  • d 65
  • i 5
  • f 1

Types

  • a 331
  • el 67
  • m 20
  • n 8
  • b 4
  • ag 2
  • s 2
  • r 1

Subjects