Search (761 results, page 1 of 39)

  • language_ss:"e"
  • type_ss:"a"
  • year_i:[2020 TO 2030}
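The three facet restrictions above are Solr-style filter queries (the half-open range [2020 TO 2030} includes 2020 but excludes 2030), and the relevance figure shown to the right of each hit below is a Lucene ClassicSimilarity (TF-IDF) score as reported by the system. For reference, a sketch of that scoring function in the standard Lucene notation, with default boosts omitted:

```latex
% Sketch of Lucene ClassicSimilarity scoring (standard notation; boosts omitted).
\[
  \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d)\cdot \mathrm{queryNorm}(q)\cdot
  \sum_{t \in q} \mathrm{tf}(t,d)\cdot \mathrm{idf}(t)^{2}\cdot \mathrm{norm}(t,d)
\]
\[
  \mathrm{tf}(t,d)=\sqrt{\mathrm{freq}(t,d)},\qquad
  \mathrm{idf}(t)=1+\ln\frac{N_{\mathrm{docs}}}{\mathrm{docFreq}(t)+1}
\]
```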
  1. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.10
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
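The GPT-2 Output Detector used for scoring in the record above is OpenAI's 2019 RoBERTa-based classifier. The following is a minimal sketch of scoring a passage with the publicly released checkpoint; it assumes the Hugging Face transformers library and the openai-community/roberta-base-openai-detector model id, and the label names may differ depending on the model card version:

```python
# Minimal sketch: score a text with the 2019 GPT-2 Output Detector (RoBERTa-based).
# Assumes the transformers library and the "openai-community/roberta-base-openai-detector"
# checkpoint; the returned labels ("Real"/"Fake") depend on the model card and may differ.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

sample = "This research revisits the classic Turing test and compares recent large language models."
result = detector(sample, truncation=True)[0]
print(f"label={result['label']}  score={result['score']:.3f}")
```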
  2. Diedrichs, R.; Goebel, R.: K10plus - Technik und Entwicklung (2020) 0.01
    
    Abstract
    Taking up suggestions from the research community, K10plus was developed as a metadata platform that is more than just a union catalogue database. More than half of Germany's state universities use K10plus. The article examines the origins and the workings of K10plus from a technical perspective.
    Source
    ABI-Technik. 40(2020) H.2, S.148-157
  3. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.01
    
    Abstract
    Human brain size nearly quadrupled in the six million years since Homo last shared a common ancestor with chimpanzees, but human brains are thought to have decreased in volume since the end of the last Ice Age. The timing and reason for this decrease is enigmatic. Here we use change-point analysis to estimate the timing of changes in the rate of hominin brain evolution. We find that hominin brains experienced positive rate changes at 2.1 and 1.5 million years ago, coincident with the early evolution of Homo and technological innovations evident in the archeological record. But we also find that human brain size reduction was surprisingly recent, occurring in the last 3,000 years. Our dating does not support hypotheses concerning brain size reduction as a by-product of body size reduction, a result of a shift to an agricultural diet, or a consequence of self-domestication. We suggest our analysis supports the hypothesis that the recent decrease in brain size may instead result from the externalization of knowledge and advantages of group-level decision-making due in part to the advent of social systems of distributed cognition and the storage and sharing of information. Humans live in social groups in which multiple brains contribute to the emergence of collective intelligence. Although difficult to study in the deep history of Homo, the impacts of group size, social organization, collective intelligence and other potential selective forces on brain evolution can be elucidated using ants as models. The remarkable ecological diversity of ants and their species richness encompasses forms convergent in aspects of human sociality, including large group size, agrarian life histories, division of labor, and collective cognition. Ants provide a wide range of social systems to generate and test hypotheses concerning brain size enlargement or reduction and aid in interpreting patterns of brain evolution identified in humans. Although humans and ants represent very different routes in social and cognitive evolution, the insights ants offer can broadly inform us of the selective forces that influence brain size.
    Footnote
    See also: Rötzer, F.: Warum schrumpft das Gehirn des Menschen seit ein paar Tausend Jahren? At: https://krass-und-konkret.de/wissenschaft-technik/warum-schrumpft-das-gehirn-des-menschen-seit-ein-paar-tausend-jahren/. "... for a few thousand years now (some say for 10,000 years), that is, since the beginnings of agriculture, sedentism and the founding of cities, and since the invention of writing, the human brain has surprisingly been shrinking again. ... It is generally assumed that with the first tools, and above all with the invention of writing, cognitive functions, memory in particular, were externalized, albeit at the price of having to develop new capacities such as reading and writing. Memory comprises individual experience, but also collective knowledge to which all members of a community contribute and into which the knowledge and experience of their ancestors are inscribed. In the digital age the externalization and unburdening of brains goes much further still, because with AI not only knowledge content but also cognitive abilities such as searching, collecting, analysing and evaluating information for decision-making are externalized, while the externalized brains, such as the Internet, learn and expand collectively in real time. Via neural implants, humans could eventually be connected directly to the externalized brains, and could also extend their own cognitive capacities directly by incorporating prostheses, new sensors, or machines/robots, even remote ones, into the brain's augmented body."
    The researchers keep these developments in the background; instead they aim to explain, via a comparison with brain evolution in ants, why present-day humans have evolved smaller brains than their ancestors of 100,000 years ago. The decrease in brain size could, so the hypothesis goes, "result from the externalization of knowledge and the advantages of group-level decision-making, due in part to the advent of social systems of distributed cognition and the storage and sharing of information".
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
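The change-point analysis mentioned in the abstract above estimates when the rate of brain-size change shifted. The following is a generic illustration of the idea, not the authors' actual pipeline; it assumes the ruptures library, and the data and penalty value are invented:

```python
# Generic change-point detection sketch (not the paper's actual analysis).
# Assumes the `ruptures` and `numpy` packages; the series and penalty are made up.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
# Synthetic "trait over time" series with two shifts in its mean level
signal = np.concatenate([
    rng.normal(0.0, 1.0, 200),   # stable regime
    rng.normal(3.0, 1.0, 150),   # upward shift
    rng.normal(1.0, 1.0, 150),   # later decrease
])

# PELT searches for an unknown number of change points under a penalty term
algo = rpt.Pelt(model="rbf").fit(signal)
breakpoints = algo.predict(pen=10)   # indices where regimes end (last value = series length)
print("estimated change points:", breakpoints)
```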
  4. Scholz, M.: Wie können Daten im Web mit JSON nachgenutzt werden? (2023) 0.01
    
    Abstract
    Martin Scholz is a computer scientist at the Universitätsbibliothek Erlangen-Nürnberg. As head of its Digital Development and Data Management group, he works extensively with web technologies and data transformation. Here he takes on the current ABI-Technik question: How can data on the web be reused with JSON?
    Source
    ABI-Technik. 43(2023) H.3, S.224
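As a small illustration of the question posed in the record above, the sketch below fetches a JSON document over HTTP and picks out a few fields; the URL and field names are placeholders, not taken from the article:

```python
# Minimal sketch of reusing JSON data from the web (placeholder URL and field names,
# not from the article): fetch, parse, and extract values with the standard library.
import json
import urllib.request

URL = "https://example.org/api/records/123"   # placeholder endpoint

with urllib.request.urlopen(URL, timeout=10) as response:
    record = json.load(response)               # parse the JSON payload into Python objects

# Reuse the data: navigate the nested structure and pull out the fields of interest
title = record.get("title", "<no title>")
creators = [person.get("name") for person in record.get("creators", [])]
print(title, creators)
```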
  5. Dedrick, D.: Colour classification in natural languages (2021) 0.00
    
    Abstract
    Names for colours or colour-related properties are ubiquitous among natural languages, and this has made linguistic colour classification a topic of interest: are colour classifications in natural languages language-specific, or is there a more general set of principles by which such classificatory terms are organized? This article focuses on a debate between cultural-linguistic, relativistic approaches, and universalistic approaches in this domain of research. It characterizes the central contemporary debates about colour naming, and the main research strategies currently in use, as well as a novel, hybrid strategy.
    Date
    27. 5.2022 18:21:29
    Footnote
    Contribution to a special issue on 'Science and knowledge organization' with longer overviews of important concepts of knowledge organization.
    Series
    Reviews of concepts in knowledge organization
  6. Hertzum, M.: Information seeking by experimentation : trying something out to discover what happens (2023) 0.00
    
    Abstract
    Experimentation is the process of trying something out to discover what happens. It is a widespread information practice, yet often bypassed in information-behavior research. This article argues that experimentation complements prior knowledge, documents, and people as an important fourth class of information sources. Relative to the other classes, the distinguishing characteristics of experimentation are that it is a personal (as opposed to interpersonal) source and that it provides "backtalk." When the information seeker tries something out and then attends to the resulting situation, it is as though the materials of the situation talk back: They provide the information seeker with a situated and direct experience of the consequences of the tried-out options. In this way, experimentation involves obtaining information by creating it. It also involves turning material and behavioral processes into information interactions. Thereby, information seeking by experimentation is important to practical information literacy and extends information-behavior research with new insights on the interrelations between creating and seeking information.
    Date
    21. 3.2023 19:22:29
  7. Barité, M.; Parentelli, V.; Rodríguez Casaballe, N.; Suárez, M.V.: Interdisciplinarity and postgraduate teaching of knowledge organization (KO) : elements for a necessary dialogue (2023) 0.00
    
    Abstract
    Interdisciplinarity implies the previous existence of disciplinary fields and not their dissolution. As a general objective, we propose to establish an initial approach to the emphasis given to interdisciplinarity in the teaching of KO, through the teaching staff responsible for postgraduate courses focused on, or related to, KO in Ibero-American universities. For conducting the research, the framework and distribution of a survey addressed to teachers is proposed, based on four lines of action: 1. The way teachers manage the concept of interdisciplinarity. 2. The place that teachers give to interdisciplinarity in KO. 3. Assessment of interdisciplinary content that teachers incorporate into their postgraduate courses. 4. Set of teaching strategies and resources used by teachers to include interdisciplinarity in the teaching of KO. The study analyzed 22 responses. Preliminary results show that KO teachers recognize the influence of other disciplines in concepts, theories, methods, and applications, but no consensus has been reached regarding which disciplines and authors are the ones who build interdisciplinary bridges. Among other conclusions, the study strongly suggests that environmental and social tensions are reflected in subject representation, especially in the construction of friendly knowledge organization systems with interdisciplinary visions, and in the expressions through which information is sought.
    Date
    20.11.2023 17:29:13
  8. Gabler, S.: Thesauri - a Toolbox for Information Retrieval (2023) 0.00
    
    Abstract
    Thesauri are established instruments of subject indexing in libraries. Recent technological developments and the rise of artificial intelligence have made them more important, since they can deliver explainable results for computer-assisted indexing and concordance work with other datasets and models, as well as for data validation. Building on the author's own research for a master's thesis, the aspect of quality assurance in library catalogues is explored in depth using selected examples.
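To make the "toolbox" idea in the record above concrete, here is a small, hypothetical sketch (not from the article) of how a thesaurus with broader, narrower, and related terms can drive query expansion in retrieval; the mini thesaurus is invented:

```python
# Hypothetical sketch (not from the article): a tiny thesaurus used for query expansion.
# Term relations follow the usual BT/NT/RT structure of retrieval thesauri.
THESAURUS = {
    "thesaurus": {"BT": ["controlled vocabulary"], "NT": ["descriptor"], "RT": ["ontology"]},
    "controlled vocabulary": {"BT": [], "NT": ["thesaurus", "subject heading list"], "RT": []},
}

def expand(term: str) -> set[str]:
    """Return the search term plus its broader, narrower, and related terms."""
    entry = THESAURUS.get(term, {})
    related = entry.get("BT", []) + entry.get("NT", []) + entry.get("RT", [])
    return {term, *related}

print(expand("thesaurus"))
# A retrieval system would OR these terms together when building the expanded query.
```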
  9. Thelwall, M.; Thelwall, S.: ¬A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.00
    
    Abstract
    Purpose: Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19. Design/methodology/approach: A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020. Findings: The main themes identified for the 87 qualifying tweets accounting for 14 million retweets were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news. Research limitations/implications: Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed. Practical implications: Public health campaigns in future may consider encouraging grass roots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues. Originality/value: This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
  10. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondence analysis instead of latent semantic analysis (2023) 0.00
    
    Abstract
    The initial dimensions extracted by latent semantic analysis (LSA) of a document-term matrix have been shown to mainly display marginal effects, which are irrelevant for information retrieval. To improve the performance of LSA, usually the elements of the raw document-term matrix are weighted and the weighting exponent of singular values can be adjusted. An alternative information retrieval technique that ignores the marginal effects is correspondence analysis (CA). In this paper, the information retrieval performance of LSA and CA is empirically compared. Moreover, it is explored whether the two weightings also improve the performance of CA. The results for four empirical datasets show that CA always performs better than LSA. Weighting the elements of the raw data matrix can improve CA; however, it is data dependent and the improvement is small. Adjusting the singular value weighting exponent often improves the performance of CA; however, the extent of the improvement depends on the dataset and the number of dimensions.
    Date
    15. 9.2023 12:28:29
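The contrast described in the abstract above can be sketched compactly; the toy documents below are invented and this is not the authors' code. It assumes scikit-learn and NumPy, computes LSA coordinates via truncated SVD and CA coordinates via the SVD of the standardized residuals (which removes the marginal effects):

```python
# Minimal sketch (not the authors' code) contrasting LSA and correspondence
# analysis (CA) on a tiny document-term matrix; assumes scikit-learn and numpy.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "latent semantic analysis of documents",
    "correspondence analysis of document term matrices",
    "information retrieval with term weighting",
]
X = CountVectorizer().fit_transform(docs).toarray().astype(float)

# --- LSA: truncated SVD of the (optionally weighted) document-term matrix ---
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_lsa = lsa.fit_transform(X)           # document coordinates in LSA space

# --- CA: SVD of standardized residuals, which discards the marginal effects ---
P = X / X.sum()                           # correspondence matrix
r = P.sum(axis=1, keepdims=True)          # row (document) masses
c = P.sum(axis=0, keepdims=True)          # column (term) masses
S = (P - r @ c) / np.sqrt(r @ c)          # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
doc_ca = (U * sv) / np.sqrt(r)            # principal row (document) coordinates

print("LSA document coordinates:\n", np.round(doc_lsa, 3))
print("CA document coordinates:\n", np.round(doc_ca[:, :2], 3))
```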
  11. Provost, A. Le; Nicolas, .: IdRef, Paprika and Qualinka : a toolbox for authority data quality and interoperability (2020) 0.00
    
    Source
    ABI-Technik. 40(2020) H.2, S.158-168
  12. Petrovich, E.: Science mapping and science maps (2021) 0.00
    
    Abstract
    Science maps are visual representations of the structure and dynamics of scholarly knowledge. They aim to show how fields, disciplines, journals, scientists, publications, and scientific terms relate to each other. Science mapping is the body of methods and techniques that have been developed for generating science maps. This entry is an introduction to science maps and science mapping. It focuses on the conceptual, theoretical, and methodological issues of science mapping, rather than on the mathematical formulation of science mapping techniques. After a brief history of science mapping, we describe the general procedure for building a science map, presenting the data sources and the methods to select, clean, and pre-process the data. Next, we examine in detail how the most common types of science maps, namely the citation-based and the term-based, are generated. Both are based on networks: the former on the network of publications connected by citations, the latter on the network of terms co-occurring in publications. We review the rationale behind these mapping approaches, as well as the techniques and methods to build the maps (from the extraction of the network to the visualization and enrichment of the map). We also present less-common types of science maps, including co-authorship networks, interlocking editorship networks, maps based on patents' data, and geographic maps of science. Moreover, we consider how time can be represented in science maps to investigate the dynamics of science. We also discuss some epistemological and sociological topics that can help in the interpretation, contextualization, and assessment of science maps. Then, we present some possible applications of science maps in science policy. In the conclusion, we point out why science mapping may be interesting for all the branches of meta-science, from knowledge organization to epistemology.
    Date
    27. 5.2022 18:19:29
    Footnote
    Contribution to a special issue on 'Science and knowledge organization' with longer overviews of important concepts of knowledge organization.
    Series
    Reviews of concepts in knowledge organization
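As a toy illustration of the term-based maps described in the record above (not taken from the entry itself), the sketch below builds a term co-occurrence network from keyword sets of invented publications, assuming the networkx library:

```python
# Toy term co-occurrence network for a science map (invented data, not from the entry).
# Nodes are terms; edge weights count how many publications mention both terms.
from itertools import combinations
import networkx as nx

publications = [
    {"science mapping", "citation analysis", "visualization"},
    {"science mapping", "co-word analysis", "visualization"},
    {"citation analysis", "bibliometrics"},
]

G = nx.Graph()
for terms in publications:
    for a, b in combinations(sorted(terms), 2):
        # increment the edge weight for every joint occurrence of the two terms
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

for a, b, data in G.edges(data=True):
    print(f"{a} -- {b}: {data['weight']}")
```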
  13. Furner, J.: Classification of the sciences in Greco-Roman Antiquity (2021) 0.00
    
    Abstract
    A review is undertaken of the contributions of 38 classical authors, from Pythagoras in the 6th century BCE to Isidore in the 6th century CE, to the classification of the sciences. Such classifications include some that are more theoretical in function, some that are more practical (e.g., encyclopedic, bibliographic, or curricular). The emergence of the quadrivium and trivium is charted; the Greek concept of "enkýklios paideía" and the Latin term "artes liberales" are defined; and the ways in which the form, content, and function of science classifications change during this period are assessed.
    Footnote
    Contribution to a special issue on 'Science and knowledge organization' with longer overviews of important concepts of knowledge organization.
    Series
    Reviews of concepts in knowledge organization
    Theme
    Geschichte der Klassifikationssysteme
  14. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.00
    
    Abstract
    This is a bibliometric study of the development of open-access availability of scholarly journal articles in Germany that were published between 2010 and 2018 and are indexed in the Web of Science. Particular attention is paid to the question of whether, and to what extent, the open-access profiles of universities and non-university research institutions in Germany differ from one another.
    Content
    This study investigates the development of open access (OA) to journal articles from authors affiliated with German universities and non-university research institutions in the period 2010-2018. Beyond determining the overall share of openly available articles, a systematic classification of distinct categories of OA publishing allowed us to identify different patterns of adoption of OA. Taking into account the particularities of the German research landscape, variations in terms of productivity, OA uptake and approaches to OA are examined at the meso-level and possible explanations are discussed. The development of the OA uptake is analysed for the different research sectors in Germany (universities, non-university research institutes of the Helmholtz Association, Fraunhofer Society, Max Planck Society, Leibniz Association, and government research agencies). Combining several data sources (incl. Web of Science, Unpaywall, an authority file of standardised German affiliation information, the ISSN-Gold-OA 3.0 list, and OpenDOAR), the study confirms the growth of the OA share mirroring the international trend reported in related studies. We found that 45% of all considered articles during the observed period were openly available at the time of analysis. Our findings show that subject-specific repositories are the most prevalent type of OA. However, the percentages for publication in fully OA journals and OA via institutional repositories show similarly steep increases. Enabling data-driven decision-making regarding the implementation of OA in Germany at the institutional level, the results of this study furthermore can serve as a baseline to assess the impact recent transformative agreements with major publishers will likely have on scholarly communication.
    Footnote
    The article is accompanied by an interactive data supplement that allows the OA shares to be compared at the level of individual institutions: https://subugoe.github.io/oauni/articles/supplement.html. The work was carried out jointly by the BMBF projects OAUNI and OASE within the funding line "Quantitative Wissenschaftsforschung": https://www.wihoforschung.de/de/quantitative-wissenschaftsforschung-1573.php.
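One of the data sources named in the study above, Unpaywall, exposes OA status per DOI through a public REST API. A minimal sketch of classifying a few articles that way follows; the DOIs and contact e-mail are placeholders, and the oa_status categories follow Unpaywall's documentation:

```python
# Minimal sketch: look up OA status per DOI via the public Unpaywall REST API.
# The DOIs and contact e-mail are placeholders; Unpaywall requires an email parameter.
import json
import urllib.error
import urllib.request

EMAIL = "you@example.org"                                    # placeholder contact address
DOIS = ["10.1000/placeholder-1", "10.1000/placeholder-2"]    # placeholder DOIs

for doi in DOIS:
    url = f"https://api.unpaywall.org/v2/{doi}?email={EMAIL}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
    except urllib.error.HTTPError as err:    # placeholder DOIs will simply return 404
        print(doi, "lookup failed:", err.code)
        continue
    # oa_status is one of: gold, hybrid, green, bronze, closed
    print(doi, record.get("is_oa"), record.get("oa_status"))
```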
  15. Patriarca, S.: Information literacy gives us the tools to check sources and to verify factual statements : What does Popper's "Es gibt keine Autoritäten" mean? (2021) 0.00
    
    Abstract
    I wonder if you would consider an English perspective on the exchange between Bernd Jörs and Hermann Huemer. In my career in the independent education sector I can recall many discussions and Government reports about cross-curricular issues such as logical reasoning and critical thinking. In the IB system this led to the inclusion in the Diploma of "Theory of Knowledge." In the UK we had "key skills" and "critical thinking." One such key skill is what we now call "information literacy." "In his parody of information literacy, Dr Jörs seems to have mistaken a necessary condition for a sufficient condition. The fact that information competence may be necessary for serious academic study does not of course make it sufficient. When that is understood, the joke about the megalomaniac rather loses its force. (We had better pass over the rant which follows, the sneer at "earth sciences" and the German prejudice towards Austrians)."
    Content
    Reply to: Bernd Jörs, Zukunft der Informationswissenschaft und Kritischer Rationalismus - Gegen die Selbstüberschätzung der Vertreter der "Informationskompetenz" eine Rückkehr zu Karl R. Popper geboten, in: Open Password, 30 August; and Herbert Huemer, Informationskompetenz als Kompetenz für lebenslanges Lernen, in: Open Password, #965, 25 August 2021. Huemer was responding to Bernd Jörs's piece "Wie sich "Informationskompetenz" methodisch-operativ untersuchen lässt", published in Open Password on 20 August 2021.
  16. Shieh, J.: PCC's work on URIs in MARC (2020) 0.00
    
    Abstract
    In 2015, the PCC Task Group on URIs in MARC was tasked with identifying and addressing the deployment of linked-data identifiers in the current MARC format. By way of a pilot test, a survey, MARC discussion papers, proposals, etc., the Task Group initiated and introduced changes to MARC encoding. The Task Group succeeded in laying the groundwork for preparing the transition of library data from MARC to a linked-data, RDF environment.
    Date
    2. 2.2021 18:29:15
    Footnote
    Contribution to a theme issue: 'Program for Cooperative Cataloging (PCC): 25 Years Strong and Growing!'.
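To illustrate the kind of change described in the record above, the hypothetical sketch below renders a MARC heading field that carries linked-data URIs in the $0 (authority identifier) and $1 (real-world-object URI) subfields associated with the PCC URI work; the name and URIs are placeholders, not data from the article:

```python
# Hypothetical sketch (placeholder data): a MARC field carrying linked-data URIs
# in subfields $0 (authority identifier) and $1 (real-world-object URI).
def render_marc_field(tag: str, ind1: str, ind2: str, subfields: list[tuple[str, str]]) -> str:
    """Render a MARC field in the common '700 1# $a ...' display form."""
    body = " ".join(f"${code} {value}" for code, value in subfields)
    return f"{tag} {ind1 or '#'}{ind2 or '#'} {body}"

field = render_marc_field("700", "1", "", [
    ("a", "Example, Author"),                                # placeholder heading
    ("0", "https://id.example.org/authorities/0000000"),     # placeholder authority URI
    ("1", "https://id.example.org/rwo/0000000"),             # placeholder RWO URI
])
print(field)
```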
  17. Skare, R.: Paratext (2020) 0.00
    
    Abstract
    This article presents Gérard Genette's concept of the paratext by defining the term and by describing its characteristics. The use of the concept in disciplines other than literary studies and for media other than printed books is discussed. The last section shows the relevance of the concept for library and information science in general and for knowledge organization, in which paratext in particular is connected to the concept "metadata."
    Date
    31.10.2020 18:51:29
    Series
    Reviews of concepts in KO
  18. Hudon, M.: ¬The status of knowledge organization in library and information science master's programs (2021) 0.00
    
    Abstract
    The content of master's programs accredited by the American Library Association was examined to assess the status of knowledge organization (KO) as a subject in current training. Data collected show that KO remains very visible in a majority of programs, mainly in the form of required and electives courses focusing on descriptive cataloging, classification, and metadata. Observed tendencies include, however, the recent elimination of the required KO course in several programs, the reality that one third of KO electives listed in course catalogs have not been scheduled in the past three years, and the fact that two-thirds of those teaching KO specialize in other areas of information science.
    Date
    27. 9.2022 18:46:29
  19. Thomas, S.E.: ¬The Program for Cooperative Cataloging : backstory and future potential (2020) 0.00
    
    Abstract
    In 1988 the Library of Congress and eight library participants undertook a two-year pilot known as the National Coordinated Cataloging Program (NCCP) to increase the number of quality bibliographic records. Subsequently the Bibliographic Services Study Committee reviewed the pilot. Discussions held at the Library of Congress (LC) and in other fora resulted in the creation of the Cooperative Cataloging Council, and, ultimately, the establishment of the Program for Cooperative Cataloging (PCC) in 1994. The conditions that contributed to a successful approach to shared cataloging are described. The article concludes with considerations for expanding the future effectiveness of the PCC.
    Footnote
    Contribution to a theme issue: 'Program for Cooperative Cataloging (PCC): 25 Years Strong and Growing!'.
    Theme
    Geschichte der Kataloge
  20. Miksa, S.D.: Cataloging principles and objectives : history and development (2021) 0.00
    
    Abstract
    Cataloging principles and objectives guide the formation of cataloging rules governing the organization of information within the library catalog, as well as the function of the catalog itself. Changes in technologies wrought by the internet and the web have been the driving forces behind shifting cataloging practice and reconfigurations of cataloging rules. Modern cataloging principles and objectives started in 1841 with the creation of Panizzi's 91 Rules for the British Museum and gained momentum with Charles Cutter's Rules for Descriptive Cataloging (1904). The first Statement of International Cataloguing Principles (ICP) was adopted in 1961, holding their place through such codifications as AACR and AACR2 in the 1970s and 1980s. Revisions accelerated starting in 2003 with the three original FR models. The Library Reference Model (LRM) in 2017 acted as a catalyst for the evolution of principles and objectives culminating in the creation of Resource Description and Access (RDA) in 2013.
    Theme
    Geschichte der Kataloge

Types

  • el 42
  • p 2