Search (275 results, page 1 of 14)

  • Filter: year_i:[2020 TO 2030} (2020 inclusive, 2030 exclusive)
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.28
    
    Source
    https://arxiv.org/abs/2212.06721
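The relevance score next to each hit comes from Lucene's ClassicSimilarity (classic TF-IDF). A minimal sketch, assuming Lucene's documented classic formulas, reproduces the per-term figures the engine reports for the first result (freq=2.0, docFreq=24, maxDocs=44218, fieldNorm=0.046875, queryNorm=0.032090448):

```python
import math

# Figures reported by the engine for one query term in doc 862
freq, doc_freq, max_docs = 2.0, 24, 44218
field_norm, query_norm = 0.046875, 0.032090448

# ClassicSimilarity building blocks
tf = math.sqrt(freq)                           # 1.4142135 = tf(freq=2.0)
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 8.478011  = idf(docFreq=24, maxDocs=44218)
query_weight = idf * query_norm                # 0.27206317 = queryWeight
field_weight = tf * idf * field_norm           # 0.56201804 = fieldWeight
score = query_weight * field_weight            # 0.1529044  = per-term score

print(round(score, 7))
```

Multiplying these per-term scores out and applying the coord factor (here 6/16 = 0.375 of the query clauses matched) yields the 0.28 shown for the result.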
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.23
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  3. Diken, T.: Cataloging psychological tests in an academic library (2021) 0.12
    
    Abstract
    Often relegated to a side note in conversations about curriculum materials collections, psychological tests deserve their own consideration in library cataloging. Libraries that are dedicated to psychology (or psychology and a related field, such as education) lend psychological tests either for reference or for usage in clinical training programs. These libraries, largely academic, have a need for guidelines regarding the cataloging of psychological tests, as those developed under the Anglo-American Cataloguing Rules, second edition (AACR2) are no longer satisfactory for Resource Description and Access (RDA) cataloging. This paper provides an overview of AACR2 cataloging guidelines and proposes new RDA best practices when cataloging psychological assessments, including kits.
  4. Miksa, S.D.: Cataloging principles and objectives : history and development (2021) 0.10
    
    Abstract
    Cataloging principles and objectives guide the formation of cataloging rules governing the organization of information within the library catalog, as well as the function of the catalog itself. Changes in technologies wrought by the internet and the web have been the driving forces behind shifting cataloging practice and reconfigurations of cataloging rules. Modern cataloging principles and objectives started in 1841 with the creation of Panizzi's 91 Rules for the British Museum and gained momentum with Charles Cutter's Rules for Descriptive Cataloging (1904). The first Statement of International Cataloguing Principles (ICP) was adopted in 1961, holding their place through such codifications as AACR and AACR2 in the 1970s and 1980s. Revisions accelerated starting in 2003 with the three original FR models. The Library Reference Model (LRM) in 2017 acted as a catalyst for the evolution of principles and objectives culminating in the creation of Resource Description and Access (RDA) in 2013.
  5. Handis, M.W.: Greek subject and name authorities, and the Library of Congress (2020) 0.09
    
    Abstract
    Some international libraries are still using the Anglo-American Cataloging Rules, 2nd edition revised, for cataloging even though the Library of Congress and other large libraries have retired it in favor of Resource Description and Access. One of these libraries is the National Library of Greece, which consults the Library of Congress database before establishing authorities. There are cultural differences in names and subjects between the Library of Congress and the National Library, but some National Library terms may be more appropriate for users than the Library of Congress-established forms.
  6. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.04
    
    Abstract
    Logical rules are essential for uncovering the logical connections between relations, which could improve the reasoning performance and provide interpretable results on knowledge graphs (KGs). Although there have been many efforts to mine meaningful logical rules over KGs, existing methods suffer from the computationally intensive searches over the rule space and a lack of scalability for large-scale KGs. Besides, they often ignore the semantics of relations which is crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in the field of natural language processing and various applications, owing to their emergent ability and generalizability. In this paper, we propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs. Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs. Last, a rule validator harnesses the reasoning ability of LLMs to validate the logical correctness of ranked rules through chain-of-thought reasoning. ChatRule is evaluated on four large-scale KGs, w.r.t. different rule quality metrics and downstream tasks, showing the effectiveness and scalability of our method.
    Date
    23.11.2023 19:07:22
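The ChatRule pipeline in the abstract above ranks LLM-generated candidate rules by incorporating facts from the existing KG. A toy sketch of that ranking idea, with hand-written candidate rules standing in for the LLM output and a simple confidence count as the quality estimate (all names and triples are invented for illustration, not the paper's actual API):

```python
# Toy knowledge graph: relation -> set of (subject, object) facts
kg = {
    "grandparent_of": {("ann", "carl")},
    "parent_of": {("ann", "bob"), ("bob", "carl"), ("dana", "eve")},
}

def rule_quality(body1, body2, head):
    """Confidence of the rule  body1(X,Y) & body2(Y,Z) -> head(X,Z),
    estimated as (body instances whose head holds) / (body instances)."""
    attempts = matches = 0
    for (x, y) in kg.get(body1, ()):
        for (y2, z) in kg.get(body2, ()):
            if y == y2:
                attempts += 1
                matches += (x, z) in kg.get(head, set())
    return matches / attempts if attempts else 0.0

candidates = [("parent_of", "parent_of", "grandparent_of"),
              ("parent_of", "grandparent_of", "grandparent_of")]
ranked = sorted(candidates, key=lambda r: rule_quality(*r), reverse=True)
print(ranked[0])
```

In the paper this ranking step sits between the LLM rule generator and the chain-of-thought validator; the sketch only illustrates the fact-based scoring in the middle.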
  7. Yon, A.; Willey, E.: Using the Cataloguing Code of Ethics principles for a retrospective project analysis (2022) 0.03
    
    Abstract
    This study uses the recently released Cataloguing Code of Ethics to evaluate a project which explored how to ethically, efficiently, and accurately add demographic terms for African-American authors to catalog records. By reviewing the project through the lens of these principles the authors were able to examine how their practice was ethical in some ways but could have been improved in others. This helped them identify areas of potential improvement in their current and future research and practice and explore ethical difficulties in cataloging resources with records that are used globally, especially in a linked data environment.
  8. Oliver, C.: Introducing RDA : a guide to the basics after 3R (2021) 0.03
    
    Issue
    2nd ed.
    LCSH
    Descriptive cataloging / Standards
    Subject
    Descriptive cataloging / Standards
  9. Danskin, A.: The Anglo-American Authority File : a PCC story (2020) 0.02
    
    Abstract
    This article examines the motivations for the collaboration between the British Library and Library of Congress to develop a joint (Anglo-American) authority file. It describes the obstacles that had to be overcome for the British Library to become a Name Authority Cooperative (NACO) "copy holder", or node. It considers the contribution the British Library made to NACO, the benefits it has derived from participation in Program for Cooperative Cataloging (PCC), and concludes by looking ahead to the next 25 years.
  10. Gartner, R.: Metadata in the digital library : building an integrated strategy with XML (2021) 0.02
    
    Abstract
    The range of metadata needed to run a digital library and preserve its collections in the long term is much more extensive and complicated than anything in its traditional counterpart. It includes the same 'descriptive' information which guides users to the resources they require but must supplement this with comprehensive 'administrative' metadata: this encompasses technical details of the files that make up its collections, the documentation of complex intellectual property rights and the extensive set needed to support its preservation in the long term. To accommodate all of this requires the use of multiple metadata standards, all of which have to be brought together into a single integrated whole.
    Metadata in the Digital Library is a complete guide to building a digital library metadata strategy from scratch, using established metadata standards bound together by the markup language XML. The book introduces the reader to the theory of metadata and shows how it can be applied in practice. It lays out the basic principles that should underlie any metadata strategy, including its relation to such fundamentals as the digital curation lifecycle, and demonstrates how they should be put into effect. It introduces the XML language and the key standards for each type of metadata, including Dublin Core and MODS for descriptive metadata and PREMIS for its administrative and preservation counterpart. Finally, the book shows how these can all be integrated using the packaging standard METS. Two case studies from the Warburg Institute in London show how the strategy can be implemented in a working environment. The strategy laid out in this book will ensure that a digital library's metadata will support all of its operations, be fully interoperable with others and enable its long-term preservation. It assumes no prior knowledge of metadata, XML or any of the standards that it covers. It provides both an introduction to best practices in digital library metadata and a manual for their practical implementation.
    Content
    Inhalt: 1 Introduction, Aims and Definitions -- 1.1 Origins -- 1.2 From information science to libraries -- 1.3 The central place of metadata -- 1.4 The book in outline -- 2 Metadata Basics -- 2.1 Introduction -- 2.2 Three types of metadata -- 2.2.1 Descriptive metadata -- 2.2.2 Administrative metadata -- 2.2.3 Structural metadata -- 2.3 The core components of metadata -- 2.3.1 Syntax -- 2.3.2 Semantics -- 2.3.3 Content rules -- 2.4 Metadata standards -- 2.5 Conclusion -- 3 Planning a Metadata Strategy: Basic Principles -- 3.1 Introduction -- 3.2 Principle 1: Support all stages of the digital curation lifecycle -- 3.3 Principle 2: Support the long-term preservation of the digital object -- 3.4 Principle 3: Ensure interoperability -- 3.5 Principle 4: Control metadata content wherever possible -- 3.6 Principle 5: Ensure software independence -- 3.7 Principle 6: Impose a logical system of identifiers -- 3.8 Principle 7: Use standards whenever possible -- 3.9 Principle 8: Ensure the integrity of the metadata itself -- 3.10 Summary: the basic principles of a metadata strategy -- 4 Planning a Metadata Strategy: Applying the Basic Principles -- 4.1 Introduction -- 4.2 Initial steps: standards as a foundation -- 4.2.1 'Off-the shelf' standards -- 4.2.2 Mapping out an architecture and serialising it into a standard -- 4.2.3 Devising a local metadata scheme -- 4.2.4 How standards support the basic principles -- 4.3 Identifiers: everything in its place -- 5 XML: The Syntactical Foundation of Metadata -- 5.1 Introduction -- 5.2 What XML looks like -- 5.3 XML schemas -- 5.4 Namespaces -- 5.5 Creating and editing XML -- 5.6 Transforming XML -- 5.7 Why use XML? -- 6 METS: The Metadata Package -- 6.1 Introduction -- 6.2 Why use METS?.
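    The integration described above, descriptive metadata (e.g. Dublin Core) packaged inside a METS wrapper, can be illustrated with a minimal sketch. This is not from the book itself, only an illustrative fragment using the published METS and Dublin Core namespace URIs; the element values are invented placeholders.

    ```python
    # Minimal sketch: a Dublin Core descriptive record wrapped in a METS
    # dmdSec, built with Python's standard-library ElementTree.
    import xml.etree.ElementTree as ET

    METS_NS = "http://www.loc.gov/METS/"
    DC_NS = "http://purl.org/dc/elements/1.1/"
    ET.register_namespace("mets", METS_NS)
    ET.register_namespace("dc", DC_NS)

    mets = ET.Element(f"{{{METS_NS}}}mets")
    # A descriptive metadata section; MDTYPE="DC" declares the wrapped schema.
    dmd = ET.SubElement(mets, f"{{{METS_NS}}}dmdSec", ID="DMD1")
    wrap = ET.SubElement(dmd, f"{{{METS_NS}}}mdWrap", MDTYPE="DC")
    xml_data = ET.SubElement(wrap, f"{{{METS_NS}}}xmlData")
    title = ET.SubElement(xml_data, f"{{{DC_NS}}}title")
    title.text = "Example digital object"

    xml_string = ET.tostring(mets, encoding="unicode")
    print(xml_string)
    ```

    A full package would add further dmdSec elements (e.g. MODS), amdSec sections for PREMIS preservation metadata, and a structMap, as outlined in the chapter listing above.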
  11. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.02
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
  12. Cheti, A.; Viti, E.: Functionality and merits of a faceted thesaurus : the case of the Nuovo soggettario (2023) 0.02
    Abstract
    The Nuovo soggettario, the official Italian subject indexing system edited by the National Central Library of Florence, is made up of interactive components, the core of which is a general thesaurus and some rules of a conventional syntax for subject string construction. The Nuovo soggettario Thesaurus is in compliance with ISO 25964: 2011-2013, IFLA LRM, and the FAIR principles (findability, accessibility, interoperability, and reusability). Its open data are available in the Zthes, MARC21, and SKOS formats and allow for interoperability with library, archive, and museum databases. The Thesaurus's macrostructure is organized into four fundamental macro-categories, thirteen categories, and facets. The facets allow for the orderly development of hierarchies, thereby limiting polyhierarchies and promoting the grouping of homogenous concepts. This paper addresses the main features and peculiarities which have characterized the consistent development of this categorical structure and its effects on the syntactic sphere in a predominantly pre-coordinated usage context.
    Date
    26.11.2023 18:59:22
  13. Wei, W.; Liu, Y.-P.; Wei, L-R.: Feature-level sentiment analysis based on rules and fine-grained domain ontology (2020) 0.02
    Abstract
    Mining product reviews and sentiment analysis are of great significance, whether for academic research purposes or optimizing business strategies. We propose a feature-level sentiment analysis framework based on rules parsing and fine-grained domain ontology for Chinese reviews. Fine-grained ontology is used to describe synonymous expressions of product features, which are reflected in word changes in online reviews. First, a semiautomatic construction method is developed by using Word2Vec for fine-grained ontology. Then, feature-level sentiment analysis that combines rules parsing and the fine-grained domain ontology is conducted to extract explicit and implicit features from product reviews. Finally, the domain sentiment dictionary and context sentiment dictionary are established to identify sentiment polarities for the extracted feature-sentiment combinations. An experiment is conducted on the basis of product reviews crawled from Chinese e-commerce websites. The results demonstrate the effectiveness of our approach.
  14. Chan, M.; Daniels, J.; Furger, S.; Rasmussen, D.; Shoemaker, E.; Snow, K.: ¬The development and future of the cataloguing code of ethics (2022) 0.02
    Abstract
    The Cataloguing Code of Ethics, released in January 2021, was the product of a multi-national, multi-year endeavor by the Cataloging Ethics Steering Committee to create a useful framework for the discussion of cataloging ethics. The six Cataloging Ethics Steering Committee members, based in the United States, United Kingdom, and Canada, recount the efforts of the group and the cataloging community leading up to the release of the Code, as well as provide their thoughts on the challenges of creating the document, lessons learned, and the future of the Code.
  15. Pankowski, T.: Ontological databases with faceted queries (2022) 0.01
    Abstract
    The success of the use of ontology-based systems depends on efficient and user-friendly methods of formulating queries against the ontology. We propose a method to query a class of ontologies, called facet ontologies (fac-ontologies), using a faceted human-oriented approach. A fac-ontology has two important features: (a) a hierarchical view of it can be defined as a nested facet over this ontology and the view can be used as a faceted interface to create queries and to explore the ontology; (b) the ontology can be converted into an ontological database, the ABox of which is stored in a database, and the faceted queries are evaluated against this database. We show that the proposed faceted interface makes it possible to formulate queries that are semantically equivalent to $\mathcal{SROIQ}^{Fac}$, a limited version of the $\mathcal{SROIQ}$ description logic. The TBox of a fac-ontology is divided into a set of rules defining intensional predicates and a set of constraint rules to be satisfied by the database. We identify a class of so-called reflexive weak cycles in a set of constraint rules and propose a method to deal with them in the chase procedure. The considerations are illustrated with solutions implemented in the DAFO system (data access based on faceted queries over ontologies).
  16. Kim, J.(im); Kim, J.(enna): Effect of forename string on author name disambiguation (2020) 0.01
    Abstract
    In author name disambiguation, author forenames are used to decide which name instances are disambiguated together and how much they are likely to refer to the same author. Despite such a crucial role of forenames, their effect on the performance of heuristic (string matching) and algorithmic disambiguation is not well understood. This study assesses the contributions of forenames in author name disambiguation using multiple labeled data sets under varying ratios and lengths of full forenames, reflecting real-world scenarios in which an author is represented by forename variants (synonym) and some authors share the same forenames (homonym). The results show that increasing the ratios of full forenames substantially improves both heuristic and machine-learning-based disambiguation. Performance gains by algorithmic disambiguation are pronounced when many forenames are initialized or homonyms are prevalent. As the ratios of full forenames increase, however, they become marginal compared to those by string matching. Using a small portion of forename strings does not reduce much the performances of both heuristic and algorithmic disambiguation methods compared to using full-length strings. These findings provide practical suggestions, such as restoring initialized forenames into a full-string format via record linkage for improved disambiguation performances.
    Date
    11. 7.2020 13:22:58
  17. Hudon, M.: ¬The status of knowledge organization in library and information science master's programs (2021) 0.01
    Abstract
    The content of master's programs accredited by the American Library Association was examined to assess the status of knowledge organization (KO) as a subject in current training. Data collected show that KO remains very visible in a majority of programs, mainly in the form of required and electives courses focusing on descriptive cataloging, classification, and metadata. Observed tendencies include, however, the recent elimination of the required KO course in several programs, the reality that one third of KO electives listed in course catalogs have not been scheduled in the past three years, and the fact that two-thirds of those teaching KO specialize in other areas of information science.
  18. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.01
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  19. Zhang, L.; Lu, W.; Yang, J.: LAGOS-AND : a large gold standard dataset for scholarly author name disambiguation (2023) 0.01
    Abstract
    In this article, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging the authoritative academic resources, ORCID and DOI. Using the method, we built LAGOS-AND, two large, gold-standard sub-datasets for author name disambiguation (AND), of which LAGOS-AND-BLOCK is created for clustering-based AND research and LAGOS-AND-PAIRWISE is created for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE). And both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the variation degrees of last names in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing author names hosted to the authors' official last names shown on the ORCID pages. Furthermore, we evaluate several baseline disambiguation methods as well as the MAG's author IDs system on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
    Date
    22. 1.2023 18:40:36
  20. Lee, T.; Dupont, S.; Bullard, J.: Comparing the cataloguing of indigenous scholarships : first steps and finding (2021) 0.01
    Abstract
    This paper provides an analysis of data collected on the continued prevalence of outdated, marginalizing terms in contemporary cataloguing practices, stemming from the Library of Congress Subject Heading term "Indians" and all its related terms. Using Manitoba Archival Information Network's (MAIN) list of current LCSH and recommended alternatives as a foundation, we built a dataset from titles published in the last five years. We show a wide distribution of LCSH used to catalogue fiction and non-fiction, with outdated but recognized terms like "Indians of North America-History" appearing the most frequently and ambiguous and offensive terms like "Indian gays" appearing throughout the dataset. We discuss two primary problems with the continued use of current LCSH terms: their ambiguity limits the effectiveness of an institution's catalog, and they do not reflect the way Indigenous Peoples, Nations, and communities in North America prefer to represent themselves as individuals and collectives. These findings support those of parallel scholarship on knowledge organization practices for works on Indigenous topics and provide a foundation for further work.

Languages

  • e 199
  • d 73
  • pt 3

Types

  • a 256
  • el 49
  • m 8
  • p 5
  • x 1