Search (50 results, page 1 of 3)

  • theme_ss:"Formalerschließung"
  • year_i:[2020 TO 2030}
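The two active facet filters above map directly onto Solr filter queries (`fq`). A minimal sketch of how such a filtered request could be assembled, assuming a standard Solr select endpoint (the host, port, and core name here are hypothetical):

```python
from urllib.parse import urlencode

def build_solr_url(base, query, filters):
    """Build a Solr select URL with one fq parameter per active facet filter."""
    params = [("q", query)] + [("fq", f) for f in filters]
    return base + "?" + urlencode(params)

url = build_solr_url(
    "http://localhost:8983/solr/literature/select",  # hypothetical host and core
    "*:*",
    ['theme_ss:"Formalerschließung"',
     'year_i:[2020 TO 2030}'],
)
```

In Solr range syntax, `[` is an inclusive bound and `}` an exclusive one, so `year_i:[2020 TO 2030}` matches the years 2020 through 2029.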
  1. Hjoerland, B.: Bibliographical control (2023) 0.04
    0.035254303 = product of:
      0.08813576 = sum of:
        0.070290476 = weight(_text_:philosophy in 1131) [ClassicSimilarity], result of:
          0.070290476 = score(doc=1131,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.30488142 = fieldWeight in 1131, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1131)
        0.017845279 = weight(_text_:of in 1131) [ClassicSimilarity], result of:
          0.017845279 = score(doc=1131,freq=20.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.27317715 = fieldWeight in 1131, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1131)
      0.4 = coord(2/5)
    
    Abstract
    Section 1 of this article discusses the concept of bibliographical control and makes a distinction between this term, "bibliographical description," and related terms, which are often confused in the literature. It further discusses the function of bibliographical control and criticizes Patrick Wilson's distinction between "exploitative control" and "descriptive control." Section 2 presents projects for establishing bibliographic control from the Library of Alexandria to the Internet and Google, and it is found that these projects have often been dominated by a positivist dream to make all information in the world available to everybody. Section 3 discusses the theoretical problems of providing comprehensive coverage and retrieving documents represented in databases and argues that 100% coverage and retrievability is an unobtainable ideal. It is shown that bibliographical control has been taken very seriously in the field of medicine, where knowledge of the most important findings is of utmost importance. In principle, it is equally important in all other domains. The conclusion states that the alternative to a positivist dream of complete bibliographic control is a pragmatic philosophy aiming at optimizing bibliographic control supporting specific activities, perspectives, and interests.
    Series
    Reviews of concepts in knowledge organization
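The score breakdown displayed under result 1 is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch reproducing its figures for the term "philosophy" (the formulas are Lucene's ClassicSimilarity; the variable names are mine):

```python
import math

# Figures taken from the explain output above for term "philosophy" in doc 1131.
freq, doc_freq, max_docs = 2.0, 481, 44218
query_norm, field_norm = 0.04177434, 0.0390625

idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 5.5189433 = idf(docFreq=481, maxDocs=44218)
tf = math.sqrt(freq)                             # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm                  # 0.23055021
field_weight = tf * idf * field_norm             # 0.30488142
score = query_weight * field_weight              # 0.070290476 = weight(_text_:philosophy ...)

# The hit's total combines both term weights, scaled by coord(2/5)
# because two of the five query terms matched:
entry_score = (score + 0.017845279) * (2 / 5)    # 0.035254303, displayed as 0.04
```

The same structure (tf x idf x fieldNorm per term, summed and scaled by coord) underlies every score tree on this page.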
  2. Preminger, M.; Rype, I.; Ådland, M.K.; Massey, D.; Tallerås, K.: The public library metadata landscape : the case of Norway 2017-2018 (2020) 0.03
    
    Abstract
    The aim of this paper is to gauge cataloging practices within the public library sector as seen from the catalog, with Norway as a case, based on a sample of records from public libraries and cataloging agencies. Findings suggest that libraries make few changes to records they import from central agencies, and that larger libraries make more changes than smaller ones. Findings also suggest that libraries catalog and modify records with their patrons in mind; although the extent is not large, cataloging proficiency is still required in the public library domain, at least in larger libraries, to ensure correct and consistent metadata.
  3. Zhang, L.; Lu, W.; Yang, J.: LAGOS-AND : a large gold standard dataset for scholarly author name disambiguation (2023) 0.02
    
    Abstract
    In this article, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging two authoritative academic resources, ORCID and DOI. Using the method, we built LAGOS-AND, two large, gold-standard sub-datasets for author name disambiguation (AND), of which LAGOS-AND-BLOCK is created for clustering-based AND research and LAGOS-AND-PAIRWISE is created for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE). Both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the degree of last-name variation in three literature databases (PubMed, MAG, and Semantic Scholar) by comparing the author names they host with the authors' official last names shown on their ORCID pages. Furthermore, we evaluate several baseline disambiguation methods as well as the MAG's author ID system on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
    Date
    22. 1.2023 18:40:36
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.168-185
  4. Morris, V.: Automated language identification of bibliographic resources (2020) 0.02
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
    Date
    2. 3.2020 19:04:22
  5. Das, S.; Paik, J.H.: Gender tagging of named entities using retrieval-assisted multi-context aggregation : an unsupervised approach (2023) 0.01
    
    Abstract
    Inferring the gender of named entities present in a text has several practical applications in information sciences. Existing approaches toward name gender identification rely exclusively on using the gender distributions from labeled data. In the absence of such labeled data, these methods fail. In this article, we propose a two-stage model that is able to infer the gender of names present in text without requiring explicit name-gender labels. We use coreference resolution as the backbone for our proposed model. To aid coreference resolution where the existing contextual information does not suffice, we use a retrieval-assisted context aggregation framework. We demonstrate that state-of-the-art name gender inference is possible without supervision. Our proposed method matches or outperforms several supervised approaches and commercially used methods on five English language datasets from different domains.
    Date
    22. 3.2023 12:00:14
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.461-475
  6. Pooja, K.M.; Mondal, S.; Chandra, J.: A graph combination with edge pruning-based approach for author name disambiguation (2020) 0.01
    
    Abstract
    Author name disambiguation (AND) is a challenging problem due to several issues, such as missing key identifiers, the same name corresponding to multiple authors, and inconsistent representation. Several techniques have been proposed, but maintaining consistent accuracy levels across all data sets is still a major challenge. We identify two major issues associated with the AND problem. First, the namesake problem, in which two or more authors with the same name publish in a similar domain. Second, the diverse topic problem, in which one author publishes in diverse topical domains with different sets of coauthors. In this work, we initially propose a method named ATGEP for AND that addresses the namesake issue. We evaluate the performance of ATGEP using various ambiguous name references collected from the Arnetminer Citation (AC) and Web of Science (WoS) data sets. We empirically show that the two aforementioned issues are crucial to the AND problem and are difficult to handle using state-of-the-art techniques. To handle the diverse topic issue, we extend ATGEP to a new variant named ATGEP-web that considers external web information about the authors. Experiments show that, with enough information available from external web sources, ATGEP-web can significantly improve the results further compared with ATGEP.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.1, S.69-83
  7. Kim, J.(im); Kim, J.(enna): Effect of forename string on author name disambiguation (2020) 0.01
    
    Abstract
    In author name disambiguation, author forenames are used to decide which name instances are disambiguated together and how likely they are to refer to the same author. Despite this crucial role of forenames, their effect on the performance of heuristic (string matching) and algorithmic disambiguation is not well understood. This study assesses the contributions of forenames in author name disambiguation using multiple labeled data sets under varying ratios and lengths of full forenames, reflecting real-world scenarios in which an author is represented by forename variants (synonym) and some authors share the same forenames (homonym). The results show that increasing the ratio of full forenames substantially improves both heuristic and machine-learning-based disambiguation. Performance gains by algorithmic disambiguation are pronounced when many forenames are initialized or homonyms are prevalent. As the ratio of full forenames increases, however, its gains become marginal compared to those of string matching. Using only a small portion of each forename string does little to reduce the performance of either heuristic or algorithmic disambiguation compared to using full-length strings. These findings provide practical suggestions, such as restoring initialized forenames to a full-string format via record linkage for improved disambiguation performance.
    Date
    11. 7.2020 13:22:58
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.7, S.839-855
  8. Kim, J.; Kim, J.; Owen-Smith, J.: Ethnicity-based name partitioning for author name disambiguation using supervised machine learning (2021) 0.01
    
    Abstract
    In several author name disambiguation studies, some ethnic name groups such as East Asian names are reported to be more difficult to disambiguate than others. This implies that disambiguation approaches might be improved if ethnic name groups are distinguished before disambiguation. We explore the potential of ethnic name partitioning by comparing the performance of four machine learning algorithms trained and tested on the entire data or specifically on individual name groups. Results show that ethnicity-based name partitioning can substantially improve disambiguation performance because the individual models are better suited for their respective name group. The improvements occur across all ethnic name groups with different magnitudes. Performance gains in predicting matched name pairs outweigh losses in predicting nonmatched pairs. Feature (e.g., coauthor name) similarities of name pairs vary across ethnic name groups. Such differences may enable the development of ethnicity-specific feature weights to improve prediction for specific ethnic name categories. These findings are observed for three labeled data sets with a natural distribution of problem sizes as well as one in which all ethnic name groups are controlled for the same sizes of ambiguous names. This study is expected to motivate scholars to group author names based on ethnicity prior to disambiguation.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.8, S.979-994
  9. Dutkiewicz, S.M.: Application of faceted vocabularies to cataloging of textbooks (2023) 0.01
    
    Abstract
    This article discusses the practical application of faceted vocabularies to the cataloging of textbooks. Consistent application of faceted vocabularies, specifically Library of Congress Genre/Form Terms for Library and Archival Materials (LCGFT) and Library of Congress Demographic Group Terms (LCDGT), would enhance the discovery of these resources. Alternatives to special cases in Subject Heading Manual H 2187 are proposed. A case study demonstrating the application of LCDGT is provided. Figures illustrate the results of the proposed best practices. The article includes four tables that are designed to streamline term assignments. Consistent cataloging of genre and audience prepares legacy records for future automated enhancement.
    Footnote
    Beitrag in Themenheft: Implementation of Faceted Vocabularies.
  10. Dobreski, B.: Common usage as warrant in bibliographic description (2020) 0.01
    
    Abstract
    Purpose: Within standards for bibliographic description, common usage has served as a prominent design principle, guiding the choice and form of certain names and titles. In practice, however, the determination of common usage is difficult and lends itself to varying interpretations. The purpose of this paper is to explore the presence and role of common usage in bibliographic description through an examination of previously unexplored connections between common usage and the concept of warrant.
    Design/methodology/approach: A brief historical review of the concept of common usage was conducted, followed by a case study of the current bibliographic standard Resource Description and Access (RDA), employing qualitative content analysis to examine the appearances, delineations and functions of common usage. Findings were then compared to the existing literature on warrant in knowledge organization.
    Findings: Multiple interpretations of common usage coexist within RDA and its predecessors, and the current prioritization of these interpretations tends to render user perspectives secondary to those of creators, scholars and publishers. These varying common usages and their overall reliance on concrete sources of evidence reveal a mixture of underlying warrants, with literary warrant playing a more prominent role than the also-present scientific/philosophical, use and autonomous warrants.
    Originality/value: This paper offers a new understanding of the concept of common usage and adds to the body of work examining warrant in knowledge organization practices beyond classification. It sheds light on the design of the influential standard RDA while revealing the implications of naming and labeling in widely shared bibliographic data.
    Source
    Journal of documentation. 76(2020) no.1, S.49-66
  11. Handis, M.W.: Greek subject and name authorities, and the Library of Congress (2020) 0.00
    
    Abstract
    Some international libraries are still using the Anglo-American Cataloging Rules, 2nd edition revised, for cataloging even though the Library of Congress and other large libraries have retired it in favor of Resource Description and Access. One of these libraries is the National Library of Greece, which consults the Library of Congress database before establishing authorities. There are cultural differences in names and subjects between the Library of Congress and the National Library, but some National Library terms may be more appropriate for users than the Library of Congress-established forms.
  12. Holden, C.: The bibliographic work : history, theory, and practice (2021) 0.00
    
    Abstract
    The bibliographic work has assumed a great deal of importance in modern cataloging. But the concept of the work has existed for over a century, and even some of the earliest catalog codes differentiate between the intellectual work and its instances. This article will delve into the history and theory of the work, providing a basic overview of the concept as well as a summary of the myriad uses of the work throughout the history of cataloging. In addition to monographs, this paper will look at the work as applied to music, moving images, serials, and aggregates.
  13. Boruah, B.B.; Ravikumar, S.; Gayang, F.L.: Consistency, extent, and validation of the utilization of the MARC 21 bibliographic standard in the college libraries of Assam in India (2023) 0.00
    
    Abstract
    This paper sheds light on the existing practice of cataloging in the college libraries of Assam in terms of utilizing the MARC 21 standard and its structure, i.e., the tags, subfield codes, and indicators. Catalog records from six college libraries were collected, and a survey was conducted to understand local users' information requirements for the catalog. Areas where libraries have scope to improve, and which divisions of tags could be most helpful for information retrieval, are identified and suggested. This study fulfills the need for a local-level assessment of the catalogs.
  14. Fernanda de Jesus, A.; Ferreira de Castro, F.: Proposal for the publication of linked open bibliographic data (2024) 0.00
    
    Abstract
    Linked Open Data (LOD) is a set of principles for publishing structured, connected data available for reuse under an open license. The objective of this paper is to analyze the publishing of bibliographic data as LOD, producing theoretical-methodological recommendations for the publication of such data, in an approach based on the World Wide Web Consortium's ten best practices for publishing LOD. The starting point was a systematic literature review, in which initiatives to publish bibliographic data as LOD were identified. An empirical study of these institutions was also conducted. As a result, theoretical-methodological recommendations were obtained for the process of publishing bibliographic data as LOD.
  15. Samples, J.; Bigelow, I.: MARC to BIBFRAME : converting the PCC to Linked Data (2020) 0.00
    
    Abstract
    The Program for Cooperative Cataloging (PCC) has formal relationships with the Library of Congress (LC), Share-VDE, and Linked Data for Production Phase 2 (LD4P2) for work on Bibliographic Framework (BIBFRAME), and PCC institutions have been very active in the exploration of MARC to BIBFRAME conversion processes. This article will review the involvement of PCC in the development of BIBFRAME and examine the work of LC, Share-VDE, and LD4P2 on MARC to BIBFRAME conversion. It will conclude with a discussion of areas for further exploration by the PCC leading up to the creation of PCC conversion specifications and PCC BIBFRAME data.
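The field-level mapping at the heart of any MARC-to-BIBFRAME conversion process can be sketched roughly as follows. The two mappings shown are simplified toy illustrations, not the LC, Share-VDE, or LD4P2 conversion specifications; only the BIBFRAME namespace URI is real.

```python
# Toy sketch of MARC-to-BIBFRAME field mapping: MARC (tag, subfield)
# pairs are looked up in a mapping table and emitted as BIBFRAME
# property/value pairs. The table below is a simplified illustration.

BF = "http://id.loc.gov/ontologies/bibframe/"

FIELD_MAP = {
    ("245", "a"): BF + "mainTitle",  # title proper
    ("100", "a"): BF + "agent",      # main entry personal name
}

def convert(marc_fields):
    """marc_fields: list of (tag, subfield, value) tuples from a MARC
    record. Returns (predicate_uri, value) pairs; unmapped fields are
    skipped."""
    triples = []
    for tag, sub, value in marc_fields:
        pred = FIELD_MAP.get((tag, sub))
        if pred:
            triples.append((pred, value))
    return triples

sample = [("245", "a", "MARC to BIBFRAME"), ("260", "b", "Unmapped")]
print(convert(sample))
```

Real conversion specifications are far larger and context-sensitive (indicators, repeatability, linked 880 fields), which is why the article's comparison of the LC, Share-VDE, and LD4P2 approaches matters.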
  16. Martin, J.M.: Records, responsibility, and power : an overview of cataloging ethics (2021) 0.00
    Abstract
    Ethics are principles which provide a framework for making decisions that best reflect a set of values. Cataloging carries power, so ethical decision-making is crucial. Because cataloging requires decision-making in areas that differ from other library work, cataloging ethics are a distinct subset of library ethics. Cataloging ethics draw on the primary values of serving the needs of users and providing access to materials. Cataloging ethics are not new, but they have received increased attention since the 1970s. Major current issues in cataloging ethics include the creation of a code of ethics; ongoing debate on the appropriate role of neutrality in cataloging misleading materials and in subject heading lists and classification schemes; how and to what degree considerations of privacy and self-determination should shape authority work; and whether or not our current cataloging codes are sufficiently user-focused.
  17. Koster, L.: Persistent identifiers for heritage objects (2020) 0.00
    Abstract
    Persistent identifiers (PIDs) are essential for accessing and referring to library, archive and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects etc. can be used by heritage institutions, but are generally managed by other parties. The first part of this article is a general theoretical description of persistent identifiers. First of all, I discuss what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs are also to be used for linked data. The second part examines current infrastructural practices, and existing PID systems with their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems, a list of requirements for PID systems is presented and used to address a number of practical considerations. This section concludes with a number of recommendations.
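The mechanism shared by the common PID systems the abstract alludes to, a stable identifier decoupled from a changeable location via a resolver, can be sketched as a prefix lookup. The DOI, Handle, and ARK resolver base URLs are the public ones; the lookup table itself is a simplified illustration, not any institution's policy.

```python
# Sketch: resolving a persistent identifier of the form "scheme:value"
# to an actionable URL. The point of a PID system is that the
# identifier stays stable while the resolver redirects to the object's
# current location.

RESOLVERS = {
    "doi": "https://doi.org/",
    "hdl": "https://hdl.handle.net/",
    "ark": "https://n2t.net/ark:/",
}

def resolve(pid):
    """Turn 'scheme:value' into a resolver URL, or None if the scheme
    is unknown."""
    scheme, _, value = pid.partition(":")
    base = RESOLVERS.get(scheme.lower())
    return base + value if base else None

print(resolve("doi:10.5771/0943-7444-2020-6-486"))
```

The separation shown here is exactly what makes PIDs "persistent": institutions commit to maintaining the identifier-to-location mapping, not the location itself.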
  18. Soos, C.; Leazer, H.H.: Presentations of authorship in knowledge organization (2020) 0.00
    Abstract
    The "author" is a concept central to many publication and documentation practices, often carrying legal, professional, social, and personal importance. Typically viewed as the solitary owner of their creations, a person is held responsible for their work and positioned to receive the praise and criticism that may emerge in its wake. Although the role of the individual within creative production is undeniable, literary (Foucault 1977; Bloom 1997) and knowledge organization (Moulaison et al. 2014) theorists have challenged the view that the work of one person can, or should, be fully detached from their professional and personal networks. As these relationships often provide important context and reveal the role of community in the creation of new things, their absence from catalog records presents a falsely simplified view of the creative process. Here, we address the consequences of what we call the "author-as-owner" concept and suggest that an "author-as-node" approach, which situates an author within their networks of influence, may allow for more relational representation within knowledge organization systems, a framing that emphasizes rather than erases the messy complexities that affect the production of new objects and ideas.
    Content
    Part of a special issue: The politics of knowledge organization, Part 2; guest editors: Robert D. Montoya and Gregory H. Leazer. DOI:10.5771/0943-7444-2020-6-486.
  19. Zakaria, M.S.: Measuring typographical errors in online catalogs of academic libraries using Ballard's list : a case study from Egypt (2023) 0.00
    Abstract
    Typographical errors in the bibliographic records of online library catalogs are a common and troublesome phenomenon worldwide. They can affect the retrieval and identification of items in information retrieval systems and thus prevent users from finding the documents they need. The present study measures typographical errors in the online catalog of the Egyptian Universities Libraries Consortium (EULC), using Terry Ballard's list of typographical error terms. The EULC catalog was searched to identify matching erroneous records. The study found 1,686 erroneous records in total, with a mean error rate of 11.24 per record, which is very high. About 396 erroneous records (23.49%) were retrieved from Section C of Ballard's list (Moderate Probability). Typographical errors found within the abstracts of the sample records represented 35.82%. Omissions were the most common type of error at 54.51%, followed by transpositions at 17.08%. Regarding parts of speech, 63.46% of errors occurred in noun terms. The results indicate that typographical errors still pose a serious challenge for information retrieval systems, especially library systems in the Arab environment. The study proposes some solutions for Egyptian university libraries to avoid typographical mistakes in the future.
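The study's matching approach, scanning catalog text for known misspellings drawn from a published list, can be sketched as a simple set intersection. The three misspellings below are invented stand-ins, not entries from Ballard's actual list.

```python
# Sketch of Ballard-style typo detection: flag a record when its text
# contains a known misspelling. The word list here is a tiny invented
# stand-in for Ballard's published list of typographical error terms.

import re

TYPO_LIST = {"univeristy", "libary", "managment"}

def find_typos(record_text):
    """Return the known misspellings occurring in a record's text."""
    words = re.findall(r"[a-z]+", record_text.lower())
    return sorted(set(words) & TYPO_LIST)

rec = "Cairo Univeristy Libary annual report"
print(find_typos(rec))  # -> ['libary', 'univeristy']
```

List-based matching like this only finds errors already known to produce non-words, which is one reason such studies report error counts by list section rather than claiming exhaustive detection.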
  20. Thomas, S.E.: ¬The Program for Cooperative Cataloging : backstory and future potential (2020) 0.00
    Abstract
    In 1988 the Library of Congress and eight library participants undertook a two-year pilot known as the National Coordinated Cataloging Program (NCCP) to increase the number of quality bibliographic records. Subsequently the Bibliographic Services Study Committee reviewed the pilot. Discussions held at the Library of Congress (LC) and in other fora resulted in the creation of the Cooperative Cataloging Council, and, ultimately, the establishment of the Program for Cooperative Cataloging (PCC) in 1994. The conditions that contributed to a successful approach to shared cataloging are described. The article concludes with considerations for expanding the future effectiveness of the PCC.

Types

  • a 48
  • el 2
  • m 2

Classifications