Search (61 results, page 1 of 4)

  • theme_ss:"Formalerschließung"
  • year_i:[2020 TO 2030}
  1. Miksa, S.D.: Cataloging principles and objectives : history and development (2021) 0.06
    0.063573636 = product of:
      0.105956055 = sum of:
        0.005779455 = weight(_text_:a in 702) [ClassicSimilarity], result of:
          0.005779455 = score(doc=702,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 702, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=702)
        0.095440306 = weight(_text_:91 in 702) [ClassicSimilarity], result of:
          0.095440306 = score(doc=702,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.3693884 = fieldWeight in 702, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046875 = fieldNorm(doc=702)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 702) [ClassicSimilarity], result of:
              0.009472587 = score(doc=702,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 702, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=702)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Cataloging principles and objectives guide the formation of cataloging rules governing the organization of information within the library catalog, as well as the function of the catalog itself. Changes in technologies wrought by the internet and the web have been the driving forces behind shifting cataloging practice and reconfigurations of cataloging rules. Modern cataloging principles and objectives started in 1841 with the creation of Panizzi's 91 Rules for the British Museum and gained momentum with Charles Cutter's Rules for a Dictionary Catalog (1904). The first Statement of International Cataloguing Principles (ICP) was adopted in 1961, holding its place through such codifications as AACR and AACR2 in the 1970s and 1980s. Revisions accelerated starting in 2003 with the three original FR models, and the Library Reference Model (LRM) of 2017 acted as a catalyst for the further evolution of the principles and objectives underlying Resource Description and Access (RDA), created in 2013.
    Type
    a
  2. Fisher, M.; Rafferty, P.: Current issues with cataloging printed music : challenges facing staff and systems (2024) 0.05
    0.046445932 = product of:
      0.116114825 = sum of:
        0.004767807 = weight(_text_:a in 1151) [ClassicSimilarity], result of:
          0.004767807 = score(doc=1151,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.089176424 = fieldWeight in 1151, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1151)
        0.11134702 = weight(_text_:91 in 1151) [ClassicSimilarity], result of:
          0.11134702 = score(doc=1151,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.43095312 = fieldWeight in 1151, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1151)
      0.4 = coord(2/5)
    
    Source
    Cataloging and classification quarterly. 61(2023) no.1, p.91-117
    Type
    a
  3. Morris, V.: Automated language identification of bibliographic resources (2020) 0.03
    0.027334882 = product of:
      0.0683372 = sum of:
        0.005448922 = weight(_text_:a in 5749) [ClassicSimilarity], result of:
          0.005448922 = score(doc=5749,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 5749, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5749)
        0.06288828 = sum of:
          0.012630116 = weight(_text_:information in 5749) [ClassicSimilarity], result of:
            0.012630116 = score(doc=5749,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.1551638 = fieldWeight in 5749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0625 = fieldNorm(doc=5749)
          0.050258167 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
            0.050258167 = score(doc=5749,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.30952093 = fieldWeight in 5749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5749)
      0.4 = coord(2/5)
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
    Date
    2. 3.2020 19:04:22
    Type
    a
  4. Das, S.; Paik, J.H.: Gender tagging of named entities using retrieval-assisted multi-context aggregation : an unsupervised approach (2023) 0.02
    0.024909604 = product of:
      0.06227401 = sum of:
        0.008173384 = weight(_text_:a in 941) [ClassicSimilarity], result of:
          0.008173384 = score(doc=941,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 941, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=941)
        0.054100625 = sum of:
          0.016407004 = weight(_text_:information in 941) [ClassicSimilarity], result of:
            0.016407004 = score(doc=941,freq=6.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.20156369 = fieldWeight in 941, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=941)
          0.037693623 = weight(_text_:22 in 941) [ClassicSimilarity], result of:
            0.037693623 = score(doc=941,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 941, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=941)
      0.4 = coord(2/5)
    
    Abstract
    Inferring the gender of named entities present in a text has several practical applications in information sciences. Existing approaches toward name gender identification rely exclusively on using the gender distributions from labeled data. In the absence of such labeled data, these methods fail. In this article, we propose a two-stage model that is able to infer the gender of names present in text without requiring explicit name-gender labels. We use coreference resolution as the backbone for our proposed model. To aid coreference resolution where the existing contextual information does not suffice, we use a retrieval-assisted context aggregation framework. We demonstrate that state-of-the-art name gender inference is possible without supervision. Our proposed method matches or outperforms several supervised approaches and commercially used methods on five English language datasets from different domains.
    Date
    22. 3.2023 12:00:14
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.461-475
    Type
    a
  5. Kim, J.(im); Kim, J.(enna): Effect of forename string on author name disambiguation (2020) 0.02
    0.018446533 = product of:
      0.04611633 = sum of:
        0.0068111527 = weight(_text_:a in 5930) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=5930,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 5930, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5930)
        0.039305177 = sum of:
          0.007893822 = weight(_text_:information in 5930) [ClassicSimilarity], result of:
            0.007893822 = score(doc=5930,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.09697737 = fieldWeight in 5930, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5930)
          0.031411353 = weight(_text_:22 in 5930) [ClassicSimilarity], result of:
            0.031411353 = score(doc=5930,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.19345059 = fieldWeight in 5930, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5930)
      0.4 = coord(2/5)
    
    Abstract
    In author name disambiguation, author forenames are used to decide which name instances are disambiguated together and how likely they are to refer to the same author. Despite such a crucial role of forenames, their effect on the performance of heuristic (string matching) and algorithmic disambiguation is not well understood. This study assesses the contributions of forenames in author name disambiguation using multiple labeled data sets under varying ratios and lengths of full forenames, reflecting real-world scenarios in which an author is represented by forename variants (synonyms) and some authors share the same forenames (homonyms). The results show that increasing the ratio of full forenames substantially improves both heuristic and machine-learning-based disambiguation. Performance gains from algorithmic disambiguation are pronounced when many forenames are initialized or homonyms are prevalent. As the ratio of full forenames increases, however, these gains become marginal compared with those from string matching. Using only a small portion of forename strings does not greatly reduce the performance of either heuristic or algorithmic disambiguation compared with using full-length strings. These findings suggest practical measures, such as restoring initialized forenames to a full-string format via record linkage, for improved disambiguation performance.
    Date
    11. 7.2020 13:22:58
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.7, S.839-855
    Type
    a
  6. Zhang, L.; Lu, W.; Yang, J.: LAGOS-AND : a large gold standard dataset for scholarly author name disambiguation (2023) 0.02
    0.018081523 = product of:
      0.04520381 = sum of:
        0.005898632 = weight(_text_:a in 883) [ClassicSimilarity], result of:
          0.005898632 = score(doc=883,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11032722 = fieldWeight in 883, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=883)
        0.039305177 = sum of:
          0.007893822 = weight(_text_:information in 883) [ClassicSimilarity], result of:
            0.007893822 = score(doc=883,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.09697737 = fieldWeight in 883, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=883)
          0.031411353 = weight(_text_:22 in 883) [ClassicSimilarity], result of:
            0.031411353 = score(doc=883,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.19345059 = fieldWeight in 883, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=883)
      0.4 = coord(2/5)
    
    Abstract
    In this article, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging the authoritative academic resources ORCID and DOI. Using the method, we built LAGOS-AND, two large, gold-standard sub-datasets for author name disambiguation (AND), of which LAGOS-AND-BLOCK is created for clustering-based AND research and LAGOS-AND-PAIRWISE is created for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE). Both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the degree of variation in last names in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing the author names they host with the authors' official last names shown on their ORCID pages. Furthermore, we evaluate several baseline disambiguation methods as well as MAG's author ID system on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
    Date
    22. 1.2023 18:40:36
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.168-185
    Type
    a
  7. Pooja, K.M.; Mondal, S.; Chandra, J.: ¬A graph combination with edge pruning-based approach for author name disambiguation (2020) 0.01
    0.0065874713 = product of:
      0.016468678 = sum of:
        0.009632425 = weight(_text_:a in 59) [ClassicSimilarity], result of:
          0.009632425 = score(doc=59,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18016359 = fieldWeight in 59, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=59)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 59) [ClassicSimilarity], result of:
              0.013672504 = score(doc=59,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 59, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=59)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Author name disambiguation (AND) is a challenging problem due to several issues, such as missing key identifiers, the same name corresponding to multiple authors, and inconsistent representation. Several techniques have been proposed, but maintaining consistent accuracy levels across all data sets is still a major challenge. We identify two major issues associated with the AND problem. First, the namesake problem, in which two or more authors with the same name publish in a similar domain. Second, the diverse topic problem, in which one author publishes in diverse topical domains with different sets of coauthors. In this work, we initially propose a method named ATGEP for AND that addresses the namesake issue. We evaluate the performance of ATGEP using various ambiguous name references collected from the Arnetminer Citation (AC) and Web of Science (WoS) data sets. We empirically show that the two aforementioned problems are crucial to the AND problem and are difficult to handle using state-of-the-art techniques. To handle the diverse topic issue, we extend ATGEP to a new variant named ATGEP-web that considers external web information about the authors. Experiments show that, with enough information available from external web sources, ATGEP-web can further improve the results significantly compared with ATGEP.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.1, S.69-83
    Type
    a
  8. Díez Platas, M.L.; Muñoz, S.R.; González-Blanco, E.; Ruiz Fabo, P.; Álvarez Mellado, E.: Medieval Spanish (12th-15th centuries) named entity recognition and attribute annotation system based on contextual information (2021) 0.01
    0.0060712704 = product of:
      0.015178176 = sum of:
        0.008341924 = weight(_text_:a in 93) [ClassicSimilarity], result of:
          0.008341924 = score(doc=93,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15602624 = fieldWeight in 93, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=93)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 93) [ClassicSimilarity], result of:
              0.013672504 = score(doc=93,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 93, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=93)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The recognition of named entities in Spanish medieval texts presents great complexity, involving specific challenges: first, the complex morphosyntactic characteristics of proper-noun use in medieval texts; second, the lack of strict orthographic standards; and finally, diachronic and geographical variations in Spanish from the 12th to the 15th century. In this period, named entities usually appear as complex text structures. For example, it was frequent to add nicknames and information about the person's role in society and geographic origin. To tackle this complexity, a named entity recognition and classification system has been implemented. The system uses contextual cues based on semantics to detect entities and assign a type. Given the occurrence of entities with attached attributes, entity contexts are also parsed to determine entity-type-specific dependencies for these attributes. Moreover, it uses a variant generator to handle the diachronic evolution of Spanish medieval terms from a phonetic and morphosyntactic viewpoint. The tool iteratively enriches its own lexica, dictionaries, and gazetteers. The system was evaluated on a corpus of over 3,000 manually annotated entities of different types and periods, obtaining F1 scores between 0.74 and 0.87. Attribute annotation was evaluated for person and role name attributes, with an overall F1 of 0.75.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.2, S.224-238
    Type
    a
  9. Boruah, B.B.; Ravikumar, S.; Gayang, F.L.: Consistency, extent, and validation of the utilization of the MARC 21 bibliographic standard in the college libraries of Assam in India (2023) 0.01
    0.005822873 = product of:
      0.014557183 = sum of:
        0.0067426977 = weight(_text_:a in 1183) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=1183,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 1183, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1183)
        0.007814486 = product of:
          0.015628971 = sum of:
            0.015628971 = weight(_text_:information in 1183) [ClassicSimilarity], result of:
              0.015628971 = score(doc=1183,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1920054 = fieldWeight in 1183, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1183)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper sheds light on the existing practice of cataloging in the college libraries of Assam in terms of utilizing the MARC 21 standard and its structure, i.e., the tags, subfield codes, and indicators. Catalog records from six college libraries are collected, and a survey is conducted to understand local users' information requirements for the catalog. Areas where libraries have scope to improve, and which divisions of tags could be most helpful for them in information retrieval, are identified and suggested. This study fulfills the need for a local-level assessment of the catalogs.
    Type
    a
  10. Perera, T.: Description specialists and inclusive description work and/or initiatives : an exploratory study (2022) 0.01
    0.005513504 = product of:
      0.01378376 = sum of:
        0.008258085 = weight(_text_:a in 974) [ClassicSimilarity], result of:
          0.008258085 = score(doc=974,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1544581 = fieldWeight in 974, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=974)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 974) [ClassicSimilarity], result of:
              0.011051352 = score(doc=974,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=974)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents preliminary findings from an exploratory research study investigating the education, Library and Information Science (LIS) work experiences, and demographics of description specialists engaging in inclusive description work and/or initiatives. Survey results represent participants' education background, LIS work experiences, motivations behind projects and initiatives, areas of work and types of project priorities, preferred outcomes, and challenges encountered while engaging in inclusive description work and/or initiatives. Findings also point to gaps in understanding related to cultural concepts. A participant-created definition for inclusive description is a successful outcome of the study.
    Type
    a
  11. Kyprianos, K.; Lolou, E.; Efthymiou, F.: Cataloging quality and the views of catalogers in Hellenic academic libraries (2022) 0.01
    0.0055105956 = product of:
      0.013776489 = sum of:
        0.007078358 = weight(_text_:a in 1146) [ClassicSimilarity], result of:
          0.007078358 = score(doc=1146,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13239266 = fieldWeight in 1146, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1146)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 1146) [ClassicSimilarity], result of:
              0.013396261 = score(doc=1146,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 1146, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1146)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This study focuses on cataloging quality and how it is defined by information professionals, specifically university library catalogers. Although there is no single and objective definition of 'cataloging quality,' research aims to specify its core characteristics. The goal is to define the modern cataloging environment, as well as the tools and opportunities it provides, and to improve the success of academic library services for both professional catalogers and users, who are the final consumers of the information. Regarding methodology, a sample survey was chosen. The survey results revealed that the quality of cataloging is determined by several factors, including technical features of the data, adherence to standards, the cataloging process, user satisfaction, and the development of a general quality culture.
    Type
    a
  12. Corbara, S.; Moreo, A.; Sebastiani, F.: Syllabic quantity patterns as rhythmic features for Latin authorship attribution (2023) 0.01
    0.0051638708 = product of:
      0.012909677 = sum of:
        0.008173384 = weight(_text_:a in 846) [ClassicSimilarity], result of:
          0.008173384 = score(doc=846,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 846, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=846)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 846) [ClassicSimilarity], result of:
              0.009472587 = score(doc=846,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=846)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    It is well known that, within the Latin production of written text, peculiar metric schemes were followed not only in poetic compositions but also in many prose works. Such metric patterns were based on so-called syllabic quantity, that is, on the length of the syllables involved, and there is substantial evidence suggesting that certain authors had a preference for certain metric patterns over others. In this research we investigate the possibility of employing syllabic quantity as a basis for deriving rhythmic features for the task of computational authorship attribution of Latin prose texts. We test the impact of these features on the authorship attribution task when combined with other topic-agnostic features. Our experiments, carried out on three different datasets using support vector machines (SVMs), show that rhythmic features based on syllabic quantity are beneficial in discriminating among Latin prose authors.
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.1, S.128-141
    Type
    a
  13. Zakaria, M.S.: Measuring typographical errors in online catalogs of academic libraries using Ballard's list : a case study from Egypt (2023) 0.00
    0.0049571716 = product of:
      0.012392929 = sum of:
        0.0068111527 = weight(_text_:a in 1184) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=1184,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 1184, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 1184) [ClassicSimilarity], result of:
              0.011163551 = score(doc=1184,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 1184, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Typographical errors in the bibliographic records of online library catalogs are a common and troublesome phenomenon, spread all over the world. They can affect the retrieval and identification of items in information retrieval systems and thus prevent users from finding the documents they need. The present study was conducted to measure typographical errors in the online catalog of the Egyptian Universities Libraries Consortium (EULC). The investigation relied on Terry Ballard's list of typographical error terms. The EULC catalog was searched to identify matching erroneous records. The study found that the total number of erroneous records reached 1686, and the mean error rate per record was 11.24, which is very high. About 396 erroneous records (23.49%) were retrieved from Section C of Ballard's list (Moderate Probability). The typographical errors found within the abstracts of the study's sample records represented 35.82%. Omissions were the most common type of error at 54.51%, followed by transpositions at 17.08%. Regarding the analysis of parts of speech, the study found that 63.46% of errors occurred in noun terms. The results of the study indicate that typographical errors still pose a serious challenge for information retrieval systems, especially for library systems in the Arab environment. The study proposes some solutions for Egyptian university libraries in order to avoid typographic mistakes in the future.
    Type
    a
  14. Kyprianos, K.; Efthymiou, F.; Kouis, D.: Students' perceptions on cataloging course (2022) 0.00
    0.004915534 = product of:
      0.012288835 = sum of:
        0.008341924 = weight(_text_:a in 623) [ClassicSimilarity], result of:
          0.008341924 = score(doc=623,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15602624 = fieldWeight in 623, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=623)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 623) [ClassicSimilarity], result of:
              0.007893822 = score(doc=623,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=623)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Cataloging and metadata description is one of the major competencies that a trainee cataloger must master. According to recent research results, library and information studies students experience difficulties understanding the theory, the terminology, and the tools necessary for cataloging. The experimental application of teaching models derived from predominant learning theories, such as behaviorism, cognitivism, and constructivism, may help in detecting the difficulties of a cataloging course and in suggesting efficient solutions. This paper presents in detail three teaching models applied to a cataloging course and investigates their effectiveness, based on a survey of 126 first-year students. The survey employed the Kirkpatrick model, aiming to record undergraduate students' perceptions and feelings about cataloging. The results revealed that, although a positive change in students' behavior towards cataloging has been achieved, they still do not feel very confident about the skills they have acquired. Moreover, students felt that practicing cataloging more frequently would eliminate their difficulties. Finally, they emphasized the need for face-to-face courses, as the survey took place during the coronavirus pandemic, when courses were held via distance learning.
    Type
    a
  15. Hjoerland, B.: Bibliographical control (2023) 0.00
    0.004624805 = product of:
      0.011562012 = sum of:
        0.0076151006 = weight(_text_:a in 1131) [ClassicSimilarity], result of:
          0.0076151006 = score(doc=1131,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14243183 = fieldWeight in 1131, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1131)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 1131) [ClassicSimilarity], result of:
              0.007893822 = score(doc=1131,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 1131, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1131)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Section 1 of this article discusses the concept of bibliographical control and makes a distinction between this term, "bibliographical description," and related terms, which are often confused in the literature. It further discusses the function of bibliographical control and criticizes Patrick Wilson's distinction between "exploitative control" and "descriptive control." Section 2 presents projects for establishing bibliographic control from the Library of Alexandria to the Internet and Google, and it is found that these projects have often been dominated by a positivist dream to make all information in the world available to everybody. Section 3 discusses the theoretical problems of providing comprehensive coverage and retrieving documents represented in databases and argues that 100% coverage and retrievability is an unobtainable ideal. It is shown that bibliographical control has been taken very seriously in the field of medicine, where knowledge of the most important findings is of utmost importance. In principle, it is equally important in all other domains. The conclusion states that the alternative to a positivist dream of complete bibliographic control is a pragmatic philosophy aiming at optimizing bibliographic control supporting specific activities, perspectives, and interests.
    Type
    a
  16. ¬The library's guide to graphic novels (2020) 0.00
    0.0039658197 = product of:
      0.009914549 = sum of:
        0.007151711 = weight(_text_:a in 717) [ClassicSimilarity], result of:
          0.007151711 = score(doc=717,freq=18.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13376464 = fieldWeight in 717, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=717)
        0.002762838 = product of:
          0.005525676 = sum of:
            0.005525676 = weight(_text_:information in 717) [ClassicSimilarity], result of:
              0.005525676 = score(doc=717,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.06788416 = fieldWeight in 717, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=717)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The circ stats say it all: graphic novels' popularity among library users keeps growing, with more being published (and acquired by libraries) each year. The unique challenges of developing and managing a graphics novels collection have led the Association of Library Collections and Technical Services (ALCTS) to craft this guide, presented under the expert supervision of editor Ballestro, who has worked with comics for more than 35 years. Examining the ever-changing ways that graphic novels are created, packaged, marketed, and released, this resource gathers a range of voices from the field to explore such topics as: a cultural history of comics and graphic novels from their World War II origins to today, providing a solid grounding for newbies and fresh insights for all; catching up on the Big Two's reboots: Marvel's 10 and DC's 4; five questions to ask when evaluating nonfiction graphic novels and 30 picks for a core collection; key publishers and cartoonists to consider when adding international titles; developing a collection that supports curriculum and faculty outreach to ensure wide usage, with catalogers' tips for organizing your collection and improving discovery; real-world examples of how libraries treat graphic novels, such as an in-depth profile of the development of Penn Library's Manga collection; how to integrate the emerging field of graphic medicine into the collection; and specialized resources like The Cartoonists of Color and Queer Cartoonists databases, the open access scholarly journal Comic Grid, and the No Flying, No Tights website. Packed with expert guidance and useful information, this guide will assist technical services staff, catalogers, and acquisition and collection management librarians.
    Content
    Contents: Between the Panels: A Cultural History of Comic Books and Graphic Novels / by Joshua Everett -- Graphic Novel Companies, Reboots, and Numbering / by John Ballestro -- Creating and Developing a Graphic Literature Collection in an Academic Library / by Andrea Kingston -- Non-Fiction Graphic Novels / by Carli Spina -- Fiction Graphic Novels / by Kayla Kuni -- International Comics and Graphic Novels / by Emily Drew, Lucia Serantes, and Amie Wright -- Building a Japanese Manga Collection for Non-Traditional Patrons in an Academic Library / by Molly Desjardins and Michael P. Williams -- Graphic Medicine in Your Library: Ideas and Strategies for Collecting Comics about Healthcare / by Alice Jaggers, Matthew Noe, and Ariel Pomputius -- The Nuts and Bolts of Comics Cataloging / by Allison Bailund, Hallie Clawson, and Staci Crouch -- Teaching and Programming with Graphic Novels in Academic Libraries / by Jacob Gordon and Sarah Kern.
  17. Kim, J.; Kim, J.; Owen-Smith, J.: Ethnicity-based name partitioning for author name disambiguation using supervised machine learning (2021) 0.00
    0.0035052493 = product of:
      0.008763123 = sum of:
        0.0048162127 = weight(_text_:a in 311) [ClassicSimilarity], result of:
          0.0048162127 = score(doc=311,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.090081796 = fieldWeight in 311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=311)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 311) [ClassicSimilarity], result of:
              0.007893822 = score(doc=311,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 311, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=311)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In several author name disambiguation studies, some ethnic name groups, such as East Asian names, are reported to be more difficult to disambiguate than others. This implies that disambiguation approaches might be improved if ethnic name groups are distinguished before disambiguation. We explore the potential of ethnic name partitioning by comparing the performance of four machine learning algorithms trained and tested on the entire data or specifically on individual name groups. Results show that ethnicity-based name partitioning can substantially improve disambiguation performance because the individual models are better suited to their respective name groups. The improvements occur across all ethnic name groups, with different magnitudes. Performance gains in predicting matched name pairs outweigh losses in predicting nonmatched pairs. Feature (e.g., coauthor name) similarities of name pairs vary across ethnic name groups. Such differences may enable the development of ethnicity-specific feature weights to improve prediction for specific ethnic name categories. These findings are observed for three labeled data sets with a natural distribution of problem sizes as well as one in which all ethnic name groups are controlled for the same sizes of ambiguous names. This study is expected to motivate scholars to group author names by ethnicity prior to disambiguation.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.8, S.979-994
    Type
    a
  18. Oliver, C.: Introducing RDA : a guide to the basics after 3R (2021) 0.00
    0.002940995 = product of:
      0.007352487 = sum of:
        0.0034055763 = weight(_text_:a in 716) [ClassicSimilarity], result of:
          0.0034055763 = score(doc=716,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.06369744 = fieldWeight in 716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=716)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 716) [ClassicSimilarity], result of:
              0.007893822 = score(doc=716,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 716, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=716)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Since Oliver's guide was first published in 2010, thousands of LIS students, records managers, catalogers, and other library professionals have relied on its clear, plainspoken explanation of RDA: Resource Description and Access as their first step towards becoming acquainted with the cataloging standard. Now, reflecting the changes to RDA after the completion of the 3R Project, Oliver brings her Special Report up to date. This essential primer concisely explains what RDA is, its basic features, and the main factors in its development; describes RDA's relationship to the international standards and models that continue to influence its evolution; provides an overview of the latest developments, focusing on the impact of the 3R Project, the results of aligning RDA with IFLA's Library Reference Model (LRM), and the outcomes of internationalization; illustrates how information is organized in the post-3R Toolkit and explains how to navigate through this new structure; and discusses how RDA continues to enable improved resource discovery both in traditional and new applications, including the linked data environment.
  19. Haider, S.: Library cataloging, classification, and metadata research : a bibliography of doctoral dissertations - a supplement, 1982-2020 (2021) 0.00
    0.0028313433 = product of:
      0.014156716 = sum of:
        0.014156716 = weight(_text_:a in 674) [ClassicSimilarity], result of:
          0.014156716 = score(doc=674,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.26478532 = fieldWeight in 674, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=674)
      0.2 = coord(1/5)
    
    Type
    a
  20. Serra, L.G.; Schneider, J.A.; Santarém Segundo, J.E.: Person identifiers in MARC 21 records in a semantic environment (2020) 0.00
    0.0023357389 = product of:
      0.011678694 = sum of:
        0.011678694 = weight(_text_:a in 127) [ClassicSimilarity], result of:
          0.011678694 = score(doc=127,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.21843673 = fieldWeight in 127, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=127)
      0.2 = coord(1/5)
    
    Abstract
    This article discusses how libraries can include person identifiers in the MARC format. It suggests using URIs in fields and subfields to help transition the data to an RDF model and to help prepare the catalog for a Linked Data environment. It analyzes the selection of URIs and Real-World Objects, and the use of tag 024 to describe person identifiers in authority records. When a creator or collaborator is identified in a work, the identifiers are transferred from the authority record to the bibliographic record. The article concludes that URI-based descriptions can provide a better experience for users, offering other methods of discovery.
    Type
    a
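
A note on the relevance scores: each hit above is displayed with a Lucene-style "explain" breakdown in which every matching query term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm (tf being the square root of the term frequency), and the summed contributions are scaled by a coordination factor for the number of matching query clauses. The following minimal Python sketch is not code from the retrieval system itself; the helper name is illustrative and the constants are taken only from the breakdown shown for result 1 (doc 702). It simply reproduces that result's displayed score.

from math import sqrt, isclose

QUERY_NORM = 0.046368346  # queryNorm reported in the breakdown for doc 702

def contribution(freq: float, idf: float, field_norm: float) -> float:
    # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
    query_weight = idf * QUERY_NORM
    field_weight = sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# (termFreq, idf, fieldNorm, clause-level factor) for the three matching terms in doc 702
terms = [
    (4.0, 1.153047,  0.046875, 1.0),   # _text_:a
    (2.0, 5.5722036, 0.046875, 1.0),   # _text_:91
    (2.0, 1.7554779, 0.046875, 0.5),   # _text_:information, inner coord(1/2)
]

total = sum(factor * contribution(freq, idf, norm) for freq, idf, norm, factor in terms)
score = total * 3 / 5  # outer coord(3/5): three of the five query clauses matched

print(round(score, 6))                     # ~0.063574
assert isclose(score, 0.063573636, rel_tol=1e-5)

Running the sketch prints approximately 0.063574, matching the 0.063573636 shown for the first result; the breakdowns for the other results compose in the same way.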

Languages

  • e 53
  • d 8

Types

  • a 57
  • el 6
  • m 2

Classifications