Search (115 results, page 1 of 6)

  • × theme_ss:"Formalerschließung"
  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
  1. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.05
    0.053670634 = product of:
      0.10734127 = sum of:
        0.016039573 = weight(_text_:information in 2606) [ClassicSimilarity], result of:
          0.016039573 = score(doc=2606,freq=4.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.1920054 = fieldWeight in 2606, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2606)
        0.091301695 = sum of:
          0.046170477 = weight(_text_:technology in 2606) [ClassicSimilarity], result of:
            0.046170477 = score(doc=2606,freq=4.0), product of:
              0.1417311 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.047586527 = queryNorm
              0.32576108 = fieldWeight in 2606, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2606)
          0.04513122 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
            0.04513122 = score(doc=2606,freq=2.0), product of:
              0.16663991 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047586527 = queryNorm
              0.2708308 = fieldWeight in 2606, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2606)
      0.5 = coord(2/4)
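The breakdown above is Lucene's "explain" output for classic tf-idf scoring (ClassicSimilarity). A minimal sketch reproducing one term weight from the factors quoted above — the idf formula is Lucene's 1 + ln(maxDocs / (docFreq + 1)):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, idf_val: float, query_norm: float, field_norm: float) -> float:
    tf = math.sqrt(freq)                      # tf(freq) = sqrt(termFreq)
    query_weight = idf_val * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf_val * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight        # weight = queryWeight * fieldWeight

# Factors quoted for weight(_text_:information in 2606):
score = term_score(freq=4.0, idf_val=1.7554779,
                   query_norm=0.047586527, field_norm=0.0546875)
print(score)  # ~0.016039573, matching the explain tree
```

The final document score then sums the term weights and multiplies by coord(2/4), the fraction of query terms matched: 0.10734127 × 0.5 = 0.053670634.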
    
    Abstract
    Catalogers have become fluent in information technology such as web design skills, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning information technology can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper will discuss the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
    Date
    10. 9.2000 17:38:22
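The paper's actual transformations use XSLT; as an illustration of the same schema-to-schema crosswalk idea using only Python's standard library, here is a minimal sketch (the sample record, element names, and mapping table are hypothetical):

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
MODS = "http://www.loc.gov/mods/v3"

# Hypothetical source record: a simple Dublin Core description.
source = ET.fromstring(
    f'<record xmlns:dc="{DC}">'
    '<dc:title>Metadata makeover</dc:title>'
    '<dc:creator>Ilik, V.</dc:creator>'
    '</record>'
)

# Crosswalk table: map each Dublin Core element to its MODS counterpart.
crosswalk = {
    f"{{{DC}}}title":   ("titleInfo/title",),
    f"{{{DC}}}creator": ("name/namePart",),
}

mods = ET.Element(f"{{{MODS}}}mods")
for elem in source:
    for path in crosswalk.get(elem.tag, ()):
        parent = mods
        for step in path.split("/"):          # build the nested target path
            parent = ET.SubElement(parent, f"{{{MODS}}}{step}")
        parent.text = elem.text               # carry the value across

print(ET.tostring(mods, encoding="unicode"))
```

An XSLT stylesheet would express the same mapping declaratively as templates matching each source element; the dictionary above plays the role of those templates.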
  2. Mugridge, R.L.; Edmunds, J.: Batchloading MARC bibliographic records (2012) 0.04
    
    Abstract
    Research libraries are using batchloading to provide access to many resources that they would otherwise be unable to catalog given the staff and other resources available. To explore how such libraries are managing their batchloading activities, the authors conducted a survey of the Association for Library Collections and Technical Services Directors of Large Research Libraries Interest Group member libraries. The survey addressed staffing, budgets, scope, workflow, management, quality standards, information technology support, collaborative efforts, and assessment of batchloading activities. The authors provide an analysis of the survey results along with suggestions for process improvements and future research.
    Date
    10. 9.2000 17:38:22
  3. Devaul, H.; Diekema, A.R.; Ostwald, J.: Computer-assisted assignment of educational standards using natural language processing (2011) 0.04
    
    Abstract
    Educational standards are a central focus of the current educational system in the United States, underpinning educational practice, curriculum design, teacher professional development, and high-stakes testing and assessment. Digital library users have requested that this information be accessible in association with digital learning resources to support teaching and learning as well as accountability requirements. Providing this information is complex because of the variability and number of standards documents in use at the national, state, and local level. This article describes a cataloging tool that aids catalogers in the assignment of standards metadata to digital library resources, using natural language processing techniques. The research explores whether the standards suggestor service would suggest the same standards as a human, whether relevant standards are ranked appropriately in the result set, and whether the relevance of the suggested assignments improves when, in addition to resource content, metadata is included in the query to the cataloging tool. The article also discusses how this service might streamline the cataloging workflow.
    Date
    22. 1.2011 14:25:32
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.2, S.395-405
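The article does not spell out the suggestor's internals; as a sketch of the general technique it describes (ranking candidate standards by textual similarity to the resource's content plus metadata), here is a minimal bag-of-words cosine ranking — the standards catalog, identifiers, and sample texts are all hypothetical:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def suggest(resource_text: str, standards: dict[str, str], top_n: int = 2) -> list[str]:
    """Rank candidate standards by similarity to the resource's text + metadata."""
    query = Counter(resource_text.lower().split())
    scored = [(cosine(query, Counter(text.lower().split())), sid)
              for sid, text in standards.items()]
    return [sid for _, sid in sorted(scored, reverse=True)[:top_n]]

# Hypothetical standards catalog and resource description:
standards = {
    "SCI.5.1": "students model the water cycle evaporation condensation",
    "MATH.3.2": "students add and subtract fractions with like denominators",
}
print(suggest("lesson on the water cycle and evaporation", standards))
```

A production service would replace the whitespace tokenizer with proper NLP preprocessing (stemming, stopword removal, weighting), but the ranking step has this shape.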
  4. D'Angelo, C.A.; Giuffrida, C.; Abramo, G.: ¬A heuristic approach to author name disambiguation in bibliometrics databases for large-scale research assessments (2011) 0.04
    
    Date
    22. 1.2011 13:06:52
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.2, S.257-269
  5. Savoy, J.: Estimating the probability of an authorship attribution (2016) 0.03
    
    Date
    7. 5.2016 21:22:27
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.6, S.1462-1472
  6. Bloss, M.E.: Testing RDA at Dominican University's Graduate School of Library and Information Science : the students' perspectives (2011) 0.02
    
    Abstract
    Dominican University's Graduate School of Library and Information Science (GSLIS) was one of a funnel group of graduate schools of library and information science selected to test RDA. A seminar specifically for this purpose was conducted from August to December 2010. Fifteen students participated in the test, creating records in AACR2 and in RDA, encoding them in the MARC format, and responding to the required questionnaires. In addition to record creation, the students were also asked to submit a final paper in which they described their experiences and recommended whether or not to accept RDA as a replacement for AACR2.
    Date
    25. 5.2015 18:36:22
  7. Snow, K.; Hoffman, G.L.: What makes an effective cataloging course? : a study of the factors that promote learning (2015) 0.02
    
    Abstract
    This paper presents the results of a research study, a survey of library and information science master's degree holders who have taken a beginning cataloging course, to identify the elements of a beginning cataloging course that help students to learn cataloging concepts and skills. The results suggest that cataloging practice (the hands-on creation of bibliographic records or catalog cards), the effectiveness of the instructor, a balance of theory and practice, and placing cataloging in a real-world context contribute to effective learning. However, more research is needed to determine how, and to what extent, each element should be incorporated into beginning cataloging courses.
    Date
    10. 9.2000 17:38:22
  8. O'Neill, E.; Zumer, M.; Mixter, J.: FRBR aggregates : their types and frequency in library collections (2015) 0.02
    
    Abstract
    Aggregates have been a frequent topic of discussion among library science researchers. This study seeks to better understand aggregates through the analysis of a sample of bibliographic records and review of the cataloging treatment of aggregates. The study focuses on determining how common aggregates are in library collections, what types of aggregates exist, how aggregates are described in bibliographic records, and the criteria for identifying aggregates from the information in bibliographic records. A sample of bibliographic records representing textual resources was taken from OCLC's WorldCat database. More than 20 percent of the sampled records represented aggregates, and more works were embodied in aggregates than were embodied in single-work manifestations. A variety of issues, including cataloging practices and the varying definitions of aggregates, made it difficult to accurately identify and quantify the presence of aggregates using only the information from bibliographic records.
    Date
    10. 9.2000 17:38:22
  9. Guerrini, M.: Cataloguing based on bibliographic axiology (2010) 0.02
    
    Abstract
    The article presents the work of Elaine Svenonius, The Intellectual Foundation of Information Organization, translated into Italian and published by Le Lettere of Florence, within the series Pinakes, with the title Il fondamento intellettuale dell'organizzazione dell'informazione. The Intellectual Foundation of Information Organization defines the theoretical aspects of library science, its philosophical basis and principles, and the purposes that must be kept in mind, abstracting from the technology used in a library. The book first deals with information organization and the bibliographic universe, in particular using the bibliographic entities defined in FRBR; it then analyzes all the specific languages by which works and subjects are treated. This work, already acknowledged as a classic, organizes, synthesizes, and makes easily understood the whole complex of knowledge, practices, and procedures developed in the last 150 years.
  10. Mercun, T.; Zumer, M.; Aalberg, T.: Presenting bibliographic families using information visualization : evaluation of FRBR-based prototype and hierarchical visualizations (2017) 0.01
    
    Abstract
    Since their beginnings, bibliographic information systems have displayed results in the form of long, textual lists. With the development of new data models and computer technologies, the need for new approaches to present and interact with bibliographic data has slowly been maturing. To investigate how this could be accomplished, a prototype system, FrbrVis, was designed to present work families within a bibliographic information system using information visualization. This paper reports on two user studies, a controlled and an observational experiment, carried out to assess the Functional Requirements for Bibliographic Records (FRBR)-based prototype against an existing system as well as to test four different hierarchical visual layouts. The results clearly show that FrbrVis offers better performance and user experience compared to the baseline system. The differences between the four hierarchical visualizations (Indented tree, Radial tree, Circlepack, and Sunburst) were, on the other hand, not as pronounced, but the Indented tree and Sunburst designs proved the most successful, in both performance and user perception. The paper therefore not only evaluates the application of a visual presentation of bibliographic work families, but also provides valuable results regarding the performance and user acceptance of individual hierarchical visualization techniques.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.2, S.392-411
  11. Delsey, T.: ¬The Making of RDA (2016) 0.01
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40
  12. Tosaka, Y.; Park, J.-r.: RDA: Resource description & access : a survey of the current state of the art (2013) 0.01
    
    Abstract
    Resource Description & Access (RDA) is intended to provide a flexible and extensible framework that can accommodate all types of content and media within rapidly evolving digital environments while also maintaining compatibility with the Anglo-American Cataloguing Rules, 2nd edition (AACR2). The cataloging community is grappling with practical issues in navigating the transition from AACR2 to RDA; there is a definite need to evaluate major subject areas and broader themes in information organization under the new RDA paradigm. This article aims to accomplish this task through a thorough and critical review of the emerging RDA literature published from 2005 to 2011. The review mostly concerns key areas of difference between RDA and AACR2, the relationship of the new cataloging code to metadata standards, the impact on encoding standards such as Machine-Readable Cataloging (MARC), end user considerations, and practitioners' views on RDA implementation and training. Future research will require more in-depth studies of RDA's expected benefits and the manner in which the new cataloging code will improve resource retrieval and bibliographic control for users and catalogers alike over AACR2. The question as to how the cataloging community can best move forward to the post-AACR2/MARC environment must be addressed carefully so as to chart the future of bibliographic control in the evolving environment of information production, management, and use.
    Series
    Advances in information science
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.651-662
  13. Normore, L.F.: "Here be dragons" : a wayfinding approach to teaching cataloguing (2012) 0.01
    
    Abstract
    Teaching cataloguing requires the instructor to make strategic decisions about how to approach the variety and complexity of the field and to provide an adequate theoretical foundation while preparing students for their entry into the world of practice. Accompanying these challenges are the tactical demands of providing this instruction in a distance education environment. Rather than focusing on ways to support learners in catalogue record production, instructors may use a problem solving and decision making approach to instruction. In this paper, a way to conceptualize a decision making approach that builds on a foundation provided by theories of information navigation is described. This approach, which is called "wayfinding", teaches by having students learn to find their way in the sets of rules that are commonly used. The method focuses on instruction about the structural features of rule sets, providing basic definitions of what each of the "places" in the rule sets contain (e.g., "formatting personal names" in Chapter 22 of AACR2R) and about ways to navigate those structures, enabling students to learn not only about common rules but also about less well known cataloguing practices ("dragons"). It provides both pragmatic and pedagogical benefits and helps develop links between cataloguing practices and their theoretical foundations.
  14. Coyle, K.: Simplicity in data models (2015) 0.01
    
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.30-33
  15. Miller, E.; Ogbuji, U.: Linked data design for the visible library (2015) 0.01
    
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.23-29
  16. Kocher, M.; Savoy, J.: ¬A simple and efficient algorithm for authorship verification (2017) 0.01
    
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.1, S.259-269
  17. Santana, A.F.; Gonçalves, M.A.; Laender, A.H.F.; Ferreira, A.A.: Incremental author name disambiguation by exploiting domain-specific heuristics (2017) 0.01
    
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.931-945
  18. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.01
    
    Abstract
A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, and address the technical challenges, analysis, solutions, and decisions, and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archive, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
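    The migration the abstract describes hinges on programmatically walking nested EAD/XML finding aids. As a minimal illustration only (this is not the Harvard team's actual tooling; the element names follow EAD conventions, but the sample fragment, function names, and flattened row shape are invented for this sketch), the following Python snippet parses a tiny EAD-like fragment and flattens its nested components into rows, keeping the ancestor path so the hierarchy survives:

    ```python
    import xml.etree.ElementTree as ET

    # A tiny invented EAD-like fragment: nested <c> components with titles.
    EAD_SAMPLE = """
    <ead>
      <archdesc>
        <dsc>
          <c level="series">
            <did><unittitle>Correspondence</unittitle><unitid>S1</unitid></did>
            <c level="file">
              <did><unittitle>Letters, 1921-1930</unittitle><unitid>S1/F1</unitid></did>
            </c>
          </c>
          <c level="series">
            <did><unittitle>Photographs</unittitle><unitid>S2</unitid></did>
          </c>
        </dsc>
      </archdesc>
    </ead>
    """

    def flatten_components(elem, parent_path=""):
        """Recursively flatten nested <c> components into flat rows,
        carrying the ancestor title path so hierarchy is preserved."""
        rows = []
        for c in elem.findall("c"):
            did = c.find("did")
            title = did.findtext("unittitle", default="").strip()
            unitid = did.findtext("unitid", default="").strip()
            path = f"{parent_path}/{title}" if parent_path else title
            rows.append({"id": unitid, "level": c.get("level"), "path": path})
            rows.extend(flatten_components(c, path))  # descend into children
        return rows

    root = ET.fromstring(EAD_SAMPLE)
    rows = flatten_components(root.find("archdesc/dsc"))
    for r in rows:
        print(r["id"], r["level"], r["path"])
    ```

    Real finding aids are, as the paper stresses, far messier: idiosyncratic encoding, narrative portions, and records that predate the standard would all need per-case handling before a flattening pass like this could run cleanly.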
  19. Wakeling, S.; Clough, P.; Connaway, L.S.; Sen, B.; Tomás, D.: Users and uses of a global union catalog : a mixed-methods study of WorldCat.org (2017) 0.01
    
    Abstract
    This paper presents the first large-scale investigation of the users and uses of WorldCat.org, the world's largest bibliographic database and global union catalog. Using a mixed-methods approach involving focus group interviews with 120 participants, an online survey with 2,918 responses, and an analysis of transaction logs of approximately 15 million sessions from WorldCat.org, the study provides a new understanding of the context for global union catalog use. We find that WorldCat.org is accessed by a diverse population, with the three primary user groups being librarians, students, and academics. Use of the system is found to fall within three broad types of work-task (professional, academic, and leisure), and we also present an emergent taxonomy of search tasks that encompass known-item, unknown-item, and institutional information searches. Our results support the notion that union catalogs are primarily used for known-item searches, although the volume of traffic to WorldCat.org means that unknown-item searches nonetheless represent an estimated 250,000 sessions per month. Search engine referrals account for almost half of all traffic, but although WorldCat.org effectively connects users referred from institutional library catalogs to other libraries holding a sought item, users arriving from a search engine are less likely to connect to a library.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.9, S.2166-2181
  20. Potha, N.; Stamatatos, E.: Improving author verification based on topic modeling (2019) 0.01
    
    Abstract
Authorship analysis attempts to reveal information about authors of digital documents, enabling applications in digital humanities, text forensics, and cyber-security. Author verification is a fundamental task where, given a set of texts written by a certain author, we should decide whether another text is also by that author. In this article we systematically study the usefulness of topic modeling in author verification. We examine several author verification methods that cover the main paradigms, namely, intrinsic (attempt to solve a one-class classification task) and extrinsic (attempt to solve a binary classification task) methods as well as profile-based (all documents of known authorship are treated cumulatively) and instance-based (each document of known authorship is treated separately) approaches combined with well-known topic modeling methods such as Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA). We use benchmark data sets and demonstrate that LDA is better combined with extrinsic methods, while the most effective intrinsic method is based on LSI. Moreover, topic modeling seems to be particularly effective for profile-based approaches, and performance is enhanced when latent topics are extracted from an enriched set of documents. The comparison to state-of-the-art methods demonstrates the great potential of the approaches presented in this study. It also demonstrates that even when genre-agnostic external documents are used, the proposed extrinsic models are very competitive.
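    By way of illustration only, the profile-based intrinsic paradigm described in the abstract can be sketched with plain term-frequency cosine similarity standing in for the paper's LSI/LDA topic vectors (the toy texts, function names, and acceptance threshold here are invented; the actual models are far richer):

    ```python
    import math
    from collections import Counter

    def tf_vector(text):
        """Lowercased bag-of-words term-frequency vector."""
        return Counter(text.lower().split())

    def cosine(u, v):
        """Cosine similarity between two sparse Counter vectors."""
        dot = sum(u[t] * v[t] for t in u)
        nu = math.sqrt(sum(c * c for c in u.values()))
        nv = math.sqrt(sum(c * c for c in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def verify_author(known_texts, questioned_text, threshold=0.25):
        """Profile-based verification: pool all known texts into one
        cumulative author profile, then accept the questioned text if
        its similarity to the profile clears the threshold (one-class
        decision, i.e. the intrinsic paradigm)."""
        profile = tf_vector(" ".join(known_texts))
        score = cosine(profile, tf_vector(questioned_text))
        return score >= threshold, score

    known = ["the ship sailed at dawn toward the grey harbour",
             "at dawn the harbour was grey and the ship was slow"]
    same, score = verify_author(known, "the grey ship left the harbour at dawn")
    ```

    An instance-based variant would score the questioned text against each known document separately and aggregate the decisions, rather than pooling everything into a single profile.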
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.10, S.1074-1088

Languages

  • e 109
  • d 4
  • i 2

Types