Search (276 results, page 1 of 14)

  • Filter: year_i:[2020 TO 2030}
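In Lucene range syntax a square bracket is an inclusive bound and a curly brace an exclusive one, so the filter above matches publication years 2020 through 2029. A minimal sketch of the equivalent check (the function name is illustrative):

```python
def matches_year_filter(year: int) -> bool:
    # year_i:[2020 TO 2030} -- lower bound inclusive, upper bound exclusive
    return 2020 <= year < 2030

print([matches_year_filter(y) for y in (2019, 2020, 2029, 2030)])  # [False, True, True, False]
```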
  1. Simoes, G.; Machado, L.; Gnoli, C.; Souza, R.: Can an ontologically-oriented KO do without concepts? (2020) 0.10
    Score: 0.09637429 = coord(3/4) × (weight(c) 0.03442384 + weight(et) 0.06369243 + weight(al) 0.03038278)
    
    Source
    Knowledge Organization at the Interface. Proceedings of the Sixteenth International ISKO Conference, 2020 Aalborg, Denmark. Eds.: M. Lykke et al.
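The per-term scores shown for each result follow Lucene's ClassicSimilarity (TF-IDF). A sketch reproducing the first term weight above from its reported inputs, assuming Lucene's documented formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)):

```python
import math

def classic_term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    # Lucene ClassicSimilarity: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm                      # queryWeight = idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm    # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# weight(_text_:c) in the first result (doc 4964)
w = classic_term_weight(freq=2.0, doc_freq=3817, max_docs=44218,
                        query_norm=0.043643, field_norm=0.046875)
print(f"{w:.8f}")  # close to the reported 0.03442384
```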
  2. Beck, C.: Die Qualität der Fremddatenanreicherung FRED (2021) 0.10
    Score: 0.09637429 = coord(3/4) × (weight(c) 0.03442384 + weight(et) 0.06369243 + weight(al) 0.03038278)
    
    Abstract
    The Fremddatenanreicherung (FRED) project of the Zentralbibliothek Zürich and the university libraries of Basel and Bern has already been presented on various occasions and discussed in the literature (Bucher et al. 2018), although only the project itself, statistical figures on the quantitative data enrichment, and the cooperation within the project, i.e. in the implementation of FRED, were covered. The present contribution goes further and attempts to examine the quality of this external-data enrichment by means of a subjective description and assessment. In closing, it also raises a few questions about the further use of FRED in the completely changed Swiss library landscape with the Swiss Library Service Platform (SLSP) from 2021 onward. The examination uses a sample of printed books from two social-science disciplines, but constitutes only a kind of observation whose results are not representative of the data enrichment performed by FRED. Not covered in what follows is the enrichment of e-book data carried out for a time in Zürich, Basel, and Bern. Nor is the quality of the intellectual subject indexing performed in the library networks from which FRED draws at issue here. The subject is solely, but still, the results achieved with FRED in the intellectual subject-indexing environment of spring 2020.
  3. Vakkari, P.; Järvelin, K.; Chang, Y.-W.: The association of disciplinary background with the evolution of topics and methods in Library and Information Science research 1995-2015 (2023) 0.09
    Score: 0.08812014 = coord(2/4) × (weight(et) 0.07506225 + weight(al) 0.0716129 + weight(22) 0.029565124)
    
    Abstract
    The paper reports a longitudinal analysis of the topical and methodological development of Library and Information Science (LIS). Its focus is on the effects of researchers' disciplines on these developments. The study extends an earlier cross-sectional study (Vakkari et al., Journal of the Association for Information Science and Technology, 2022a, 73, 1706-1722) with a coordinated dataset representing a content analysis of articles published in 31 scholarly LIS journals in 1995, 2005, and 2015. It is novel in covering authors' disciplines and topical and methodological aspects in a coordinated dataset spanning two decades, thus allowing trend analysis. The findings include a shrinking trend in the share of LIS from 67% to 36%, while Computer Science and Business and Economics increase their shares from 9% and 6% to 21% and 16%, respectively. The earlier cross-sectional study for the year 2015 identified three topical clusters of LIS research, focusing on topical subfields, methodologies, and contributing disciplines. Correspondence analysis confirms their existence already in 1995 and traces their development through the decades. The contributing disciplines infuse their concepts, research questions, and approaches into LIS and may also subsume vital parts of LIS into their own structures of knowledge production.
    Date
    22. 6.2023 18:15:06
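Each document's total combines its matching term weights with a coordination factor coord(m/n), the fraction of query clauses matched. A sketch reproducing this entry's total from the weights reported above (the et weight and the al + 22 sub-sum):

```python
def composite_score(term_weights, matched_clauses, total_clauses):
    # Lucene BooleanQuery: score = coord * sum of matching term weights
    coord = matched_clauses / total_clauses
    return coord * sum(term_weights)

score = composite_score([0.07506225, 0.10117803], matched_clauses=2, total_clauses=4)
print(f"{score:.8f}")  # 0.08812014
```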
  4. Soos, C.; Leazer, H.H.: Presentations of authorship in knowledge organization (2020) 0.08
    Score: 0.08031191 = coord(3/4) × (weight(c) 0.02868653 + weight(et) 0.053077027 + weight(al) 0.025318984)
    
    Abstract
    The "author" is a concept central to many publication and documentation practices, often carrying legal, professional, social, and personal importance. Typically viewed as the solitary owner of their creations, a person is held responsible for their work and positioned to receive the praise and criticism that may emerge in its wake. Although the role of the individual within creative production is undeniable, literary (Foucault 1977; Bloom 1997) and knowledge organization (Moulaison et. al. 2014) theorists have challenged the view that the work of one person can-or should-be fully detached from their professional and personal networks. As these relationships often provide important context and reveal the role of community in the creation of new things, their absence from catalog records presents a falsely simplified view of the creative process. Here, we address the consequences of what we call the "author-asowner" concept and suggest that an "author-as-node" approach, which situates an author within their networks of influence, may allow for more relational representation within knowledge organization systems, a framing that emphasizes rather than erases the messy complexities that affect the production of new objects and ideas.
  5. Neudecker, C.; Zaczynska, K.; Baierer, K.; Rehm, G.; Gerber, M.; Moreno Schneider, J.: Methoden und Metriken zur Messung von OCR-Qualität für die Kuratierung von Daten und Metadaten (2021) 0.08
    Score: 0.08031191 = coord(3/4) × (weight(c) 0.02868653 + weight(et) 0.053077027 + weight(al) 0.025318984)
    
    Abstract
    Through the systematic digitization of the holdings of libraries and archives, the availability of digital images of historical documents has increased rapidly. This initially has preservation-related reasons: digitized documents can be reproduced and backed up practically at will in high quality. Moreover, a digitized collection can achieve a far greater reach than would ever be possible with the physical holdings alone. With the growing availability of digital library and archive holdings, however, the expectations placed on their presentation and reusability also rise. Besides searching on the basis of bibliographic metadata, users also expect to be able to search the contents of documents. In the scholarly sphere, great expectations for new research opportunities are attached to machine-driven, quantitative analyses of textual material. In addition to image digitization, full-text capture is therefore demanded more and more frequently. This can be done either manually by transcription or automatically with methods of Optical Character Recognition (OCR) (Engl et al. 2020). Manual capture is generally credited with higher character accuracy. In mass digitization, however, the choice usually falls on automatic OCR methods for cost reasons.
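A standard metric for OCR quality of the kind this article examines is the character error rate (CER): the edit distance between the recognized text and a ground-truth transcription, normalized by the transcription's length. A minimal illustrative sketch, not the article's own metric suite:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance (insertions, deletions, substitutions).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def char_error_rate(recognized: str, ground_truth: str) -> float:
    return levenshtein(recognized, ground_truth) / len(ground_truth)

print(char_error_rate("Bibllothek", "Bibliothek"))  # 0.1 -- one substitution in ten characters
```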
  6. Barité, M.; Rauch, M.: Classification System for Knowledge Organization Literature (CSKOL) : its update, a pending task? (2020) 0.08
    Score: 0.07839601 = coord(2/4) × (weight(et) 0.106154054 + weight(al) 0.050637968)
    
    Source
    Knowledge Organization at the Interface. Proceedings of the Sixteenth International ISKO Conference, 2020 Aalborg, Denmark. Eds.: M. Lykke et al.
  7. Kleineberg, M.: Classifying perspectives : expressing levels of knowing in the Integrative Levels Classification (2020) 0.08
    Score: 0.07839601 = coord(2/4) × (weight(et) 0.106154054 + weight(al) 0.050637968)
    
    Source
    Knowledge Organization at the Interface. Proceedings of the Sixteenth International ISKO Conference, 2020 Aalborg, Denmark. Eds.: M. Lykke et al.
  8. Balakrishnan, U.; Soergel, D.; Helfer, O.: Representing concepts through description logic expressions for knowledge organization system (KOS) mapping (2020) 0.08
    Score: 0.07839601 = coord(2/4) × (weight(et) 0.106154054 + weight(al) 0.050637968)
    
    Source
    Knowledge Organization at the Interface. Proceedings of the Sixteenth International ISKO Conference, 2020 Aalborg, Denmark. Eds.: M. Lykke et al.
  9. Blume, M.; Stalinski, S.: Sitzt Gott im Gehirn? : Neue Erkenntnisse aus der Hirnforschung (2021) 0.06
    Score: 0.06271681 = coord(2/4) × (weight(et) 0.084923245 + weight(al) 0.040510375)
    
    Content
    Cf. the substantive connection to the book by Newberg et al.
  10. Zeng, M.L.; Sula, C.A.; Gracy, K.F.; Hyvönen, E.; Alves Lima, V.M.: JASIST special issue on digital humanities (DH) : guest editorial (2022) 0.05
    Score: 0.054314356 = coord(2/4) × (weight(et) 0.073545694 + weight(al) 0.03508302)
    
    Abstract
    More than 15 years ago, A Companion to Digital Humanities marked out the area of digital humanities (DH) "as a discipline in its own right" (Schreibman et al., 2004, p. xxiii). In the years that followed, ample evidence accumulated that the DH domain, formed by the intersection of humanities disciplines and digital information technology, has undergone remarkable expansion. This growth is reflected in A New Companion to Digital Humanities (Schreibman et al., 2016). The extensively revised contents of the second edition were contributed by a global team of authors who are pioneers of innovative research in the field. Over this formative period, DH has become a widely recognized, impactful mode of scholarship and an institutional unit for collaborative, transdisciplinary, and computationally engaged research, teaching, and publication (Burdick et al., 2012; Svensson, 2010; Van Ruyskensvelde, 2014). The field of DH has advanced tremendously over the last decade and continues to expand. Meanwhile, competing definitions and approaches of DH scholars continue to spark debate. "Complexity" was a theme of the DH2019 international conference, as it demonstrates the multifaceted connections within DH scholarship today (Alliance of Digital Humanities Organizations, 2019). Yet, while it is often assumed that DH is in flux and not particularly fixed as an institutional or intellectual construct, there are also obvious touchstones within the DH field, most visibly in the relationship between traditional humanities disciplines and technological infrastructures. Thus, it is still meaningful to "bring together the humanistic and the digital through embracing a non-territorial and liminal zone" (Svensson, 2016, p. 477). This is the focus of this JASIST special issue, which mirrors the increasing attention on DH worldwide.
  11. Libraries, archives and museums as democratic spaces in a digital age (2020) 0.05
    Score: 0.047037605 = coord(2/4) × (weight(et) 0.06369243 + weight(al) 0.03038278)
    
    Editor
    Audunson, R. et al.
  12. Koch, C.: Was ist Bewusstsein? (2020) 0.04
    Score: 0.043469094 = coord(2/4) × (weight(c) 0.05737306 + weight(22) 0.029565124)
    
    Date
    17. 1.2020 22:15:11
  13. Steeg, F.; Pohl, A.: ¬Ein Protokoll für den Datenabgleich im Web am Beispiel von OpenRefine und der Gemeinsamen Normdatei (GND) (2021) 0.04
    Score: 0.039198004 = coord(2/4) × (weight(et) 0.053077027 + weight(al) 0.025318984)
    
    Abstract
    Authority data play an important role, especially with regard to the quality of the subject indexing of bibliographic and archival resources. One concrete goal of subject indexing is, for example, that all works about Hermann Hesse can be found consistently. Authority data offer a solution here, e.g. by uniformly using the GND number 11855042X for Hermann Hesse during indexing. The result is a higher quality of subject indexing, above all in the sense of uniformity and unambiguity, and, as a consequence, better findability. When such entities are linked with one another, e.g. Hermann Hesse with one of his works, a knowledge graph emerges, like the one Google uses for indexing the content of the web (Singhal 2012). The development of the Google Knowledge Graph and the protocol presented here are historically connected: OpenRefine was originally developed as Google Refine, and the functionality for matching against external data sources (reconciliation) was originally built for the integration of Freebase, one of the data sources of the Google Knowledge Graph. Freebase was later integrated into Wikidata. Google Refine was already being used for matching against authority data, for instance the Library of Congress Subject Headings (Hooland et al. 2013).
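The reconciliation protocol described in this entry, as used by OpenRefine, exchanges JSON query batches keyed q0, q1, ..., sent to the service as the form parameter "queries". A minimal sketch of building such a batch for a GND lookup; the names and limit are illustrative, and no request is actually sent:

```python
import json

def build_reconciliation_batch(names, limit=3):
    # One keyed query per entity, as in the OpenRefine reconciliation protocol:
    # {"q0": {"query": "...", "limit": 3}, "q1": {...}, ...}
    return {f"q{i}": {"query": name, "limit": limit} for i, name in enumerate(names)}

batch = build_reconciliation_batch(["Hermann Hesse", "Der Steppenwolf"])
queries_param = json.dumps(batch)  # value of the "queries" form field
print(batch["q0"]["query"])  # Hermann Hesse
```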
  14. Schulz, T.: Konzeption und prototypische Entwicklung eines Thesaurus für IT-Konzepte an Hochschulen (2021) 0.04
    Score: 0.039198004 = coord(2/4) × (weight(et) 0.053077027 + weight(al) 0.025318984)
    
    Abstract
    Universities currently have a strong interest in using digitization effectively and efficiently for processes in governance areas. IT governance is at the center of these policy considerations and comprises "the internal steering and coordination of decision-making processes with respect to IT management and digitization measures" (Christmann-Budian et al. 2018). Strategically, bundling competencies across the German higher-education landscape can help meet the rising demands on IT governance. In line with this approach, universities joined together in the ZDT are currently carrying out the project "IT-Konzepte - Portfolio gemeinsamer Vorlagen und Muster". The project addresses this problem by bundling competencies and enabling all universities to draft and adopt IT concept papers. To this end, a portfolio of templates and samples is being compiled and referenced as a resource pool in order to ensure the traceability of the wide variety of issuing institutions (Meister 2020). To search this resource pool, which constitutes a body of knowledge (BoK), efficiently, a sensible structure is indispensable. The goal of the bachelor's thesis is therefore the analysis of internal university documents with the aid of Natural Language Processing (NLP) and, building on that, the development of a thesaurus prototype for IT concepts. This prototype is then to be serialized and automated in order to keep it continuously up to date. The thesis addresses the question of how a thesaurus can be created in a technologically sustainable, systematic, and consistent way, so that the process can later serve as a basis for further subject areas.
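A thesaurus prototype of the kind described, concept entries with synonym and broader/narrower relations that can be serialized for automated re-runs, can be sketched minimally as follows (the terms and structure are illustrative, not taken from the thesis):

```python
import json

# Minimal thesaurus entries: synonyms plus hierarchical relations per concept.
thesaurus = {
    "IT-Governance": {"synonyms": ["IT-Steuerung"], "broader": [], "narrower": ["IT-Konzept"]},
    "IT-Konzept":    {"synonyms": [], "broader": ["IT-Governance"], "narrower": []},
}

def serialize(t: dict) -> str:
    # Stable serialization so automated updates produce comparable output.
    return json.dumps(t, ensure_ascii=False, sort_keys=True, indent=2)

roundtrip = json.loads(serialize(thesaurus))
print(roundtrip["IT-Konzept"]["broader"])  # ['IT-Governance']
```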
  15. Luhmann, J.; Burghardt, M.: Digital humanities - A discipline in its own right? : an analysis of the role and position of digital humanities in the academic landscape (2022) 0.04
    Abstract
    Although digital humanities (DH) has received a lot of attention in recent years, its status as "a discipline in its own right" (Schreibman et al., A companion to digital humanities (pp. xxiii-xxvii). Blackwell; 2004) and its position in the overall academic landscape are still being negotiated. While there are countless essays and opinion pieces that debate the status of DH, little research has been dedicated to exploring the field in a systematic and empirical way (Poole, Journal of Documentation; 2017:73). This study aims to address this research gap by comparing articles published over the past three decades in three established English-language DH journals (Computers and the Humanities, Literary and Linguistic Computing, Digital Humanities Quarterly) with research articles from journals in 15 other academic disciplines (corpus size: 34,041 articles; 299 million tokens). As a method of analysis, we use latent Dirichlet allocation topic modeling, combined with recent approaches that aggregate topic models by means of hierarchical agglomerative clustering. Our findings indicate that DH is simultaneously a discipline in its own right and a highly interdisciplinary field, with many connecting factors to neighboring disciplines, first and foremost computational linguistics and information science. Detailed descriptive analyses shed some light on the diachronic development of DH and also highlight topics that are characteristic of DH.
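    The method named in the abstract above, LDA topic modeling whose topics are then aggregated by hierarchical agglomerative clustering, can be sketched with scikit-learn. The toy corpus, the number of topics, the cluster count, and the linkage choice below are illustrative assumptions, not the study's actual settings:

    ```python
    # Sketch (not the authors' pipeline): fit LDA, then merge similar topics
    # by clustering their topic-word distributions hierarchically.
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    corpus = [
        "digital humanities text encoding corpus annotation",
        "information retrieval ranking search evaluation",
        "computational linguistics parsing corpus annotation",
        "library cataloging metadata subject indexing",
        "search engines query logs retrieval evaluation",
        "text mining topic modeling corpus humanities",
    ]

    # Step 1: bag-of-words representation of the documents.
    X = CountVectorizer().fit_transform(corpus)

    # Step 2: fit LDA; each row of components_ is an (unnormalized) topic-word distribution.
    lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)
    topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

    # Step 3: aggregate topics by clustering the normalized distributions.
    labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(topics)
    print(labels)  # one cluster id per topic
    ```

    In the study's setup the same idea is applied across topic models of many journals, so the clustering serves to align related topics between disciplines rather than merely to deduplicate them within one model.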
  16. Järvelin, K.; Vakkari, P.: LIS research across 50 years: content analysis of journal articles : offering an information-centric conception of memes (2022) 0.04
    Abstract
    Purpose: This paper analyses the research in Library and Information Science (LIS) and reports on (1) the status of LIS research in 2015 and (2) the evolution of LIS research longitudinally from 1965 to 2015.
    Design/methodology/approach: The study employs a quantitative intellectual content analysis of articles published in 30+ scholarly LIS journals, following the design by Tuomaala et al. (2014). In the content analysis, we classify articles along eight dimensions covering topical content and methodology.
    Findings: The topical findings indicate that the earlier strong LIS emphasis on L&I services has declined notably, while scientific and professional communication has become the most popular topic. Information storage and retrieval gave up its earlier strong position towards the end of the period analyzed. Individuals are increasingly the units of observation. End-users' and developers' viewpoints have strengthened at the cost of the intermediaries' viewpoint. LIS research is methodologically increasingly scattered, since surveys, scientometric methods, experiments, case studies and qualitative studies have all gained in popularity. Consequently, LIS may have become more versatile in the analysis of its research objects over the period analyzed.
    Originality/value: Among quantitative intellectual content analyses of LIS research, the study is unique in its scope: length of analysis period (50 years), width (eight dimensions covering topical content and methodology) and depth (the annual batch of 30+ scholarly journals).
  17. Wiesenmüller, H.: Verbale Erschließung in Katalogen und Discovery-Systemen : Überlegungen zur Qualität (2021) 0.03
    Abstract
    When dealing with subject indexing, two dimensions must first be distinguished: the knowledge organization systems themselves (e.g., authority files, thesauri, subject heading languages, classifications, and ontologies) and the metadata for documents that have been indexed with these knowledge organization systems. The two interact: the knowledge organization systems are the tools of indexing work and form the basis for creating concrete indexing metadata, while the practical application of the knowledge organization systems in indexing is, in turn, the basis for their maintenance and further development. At the same time, knowledge organization systems also have a value of their own, independent of the indexing metadata for individual documents, in that they model certain areas of world or domain knowledge. If one wants to make statements about the quality of subject indexing, it is not enough to consider the input, that is, the knowledge organization systems and the metadata generated with them. One must also consider the output, that is, what the search tools make of this input and hence what actually reaches the users. This article offers reflections on the quality of search tools in this area, as a continuation and deepening of the remarks given in the position paper of the Expertenteam RDA-Anwendungsprofil für die verbale Inhaltserschließung (ET RAVI). The focus is on verbal indexing according to the Regeln für die Schlagwortkatalogisierung (RSWK) as it manifests itself in library catalogs, regardless of whether these are traditional catalogs or resource discovery systems (RDS).
    Date
    24. 9.2021 12:22:02
  18. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.03
    Source
    Data visualization and knowledge engineering. Eds. J. Hemanth, et al.
  19. Ma, J.; Lund, B.: ¬The evolution and shift of research topics and methods in library and information science (2021) 0.03
    Abstract
    Employing approaches adopted from studies of library and information science (LIS) research trends performed by Järvelin et al., this content analysis systematically examines the evolution and distribution of LIS research topics and data collection methods at 6-year increments from 2006 to 2018. Bibliographic data were collected for 3,422 articles published in LIS journals in the years 2006, 2012, and 2018. While the classification schemes provided in the Järvelin studies do not indicate much change, an analysis of subtopics, data sources, and keywords indicates a substantial impact of social media and data science on the discipline, which emerged at some point between the years of 2012 and 2018. These findings suggest a type of shift in the focus of LIS research, with social media and data science topics playing a role in well over one-third of articles published in 2018, compared with approximately 5% in 2012 and virtually none in 2006. The shift in LIS research foci based on these two technologies/approaches appears similar in extent to those produced by the introduction of information systems in library science in the 1960s, or the Internet in the 1990s, suggesting that these recent advancements are fundamental to the identity of LIS as a discipline.
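    The diachronic shift reported above can be quantified as the share of a year's articles that touch a given set of keywords, which is essentially how the "one-third in 2018 vs. 5% in 2012" comparison reads. A minimal sketch; the article records and the keyword set below are invented for illustration and do not reproduce the study's data:

    ```python
    # Sketch: per-year share of articles carrying at least one "emerging" keyword.
    articles = [
        (2006, ["information retrieval"]),
        (2012, ["social media", "bibliometrics"]),
        (2012, ["information behavior"]),
        (2018, ["social media", "data science"]),
        (2018, ["data science"]),
        (2018, ["information literacy"]),
    ]

    def topic_share(articles, topics, year):
        """Fraction of the given year's articles tagged with any of the given keywords."""
        in_year = [kws for y, kws in articles if y == year]
        hits = sum(1 for kws in in_year if any(t in kws for t in topics))
        return hits / len(in_year)

    emerging = {"social media", "data science"}
    print({y: round(topic_share(articles, emerging, y), 2) for y in (2006, 2012, 2018)})
    ```

    With real bibliographic data the keyword lists would come from author-supplied or indexer-supplied terms, and the share per sampling year (2006, 2012, 2018) would trace the kind of curve the abstract describes.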
  20. Expertenteams RDA-Anwendungsprofil für die verbale Inhaltserschließung (ET RAVI): Qualitätskriterien und Qualitätssicherung in der inhaltlichen Erschließung : Thesenpapier (2021) 0.03
    Abstract
    Following a work assignment from the Standardization Committee (Standardisierungsausschuss), the Expertenteam RDA-Anwendungsprofil für die verbale Inhaltserschließung (ET RAVI) was set up in autumn 2017. It is assigned to the Fachgruppe Erschließung and has since been working on the further development of verbal subject indexing.

Languages

  • e 200
  • d 76
  • m 1
  • pt 1

Types

  • a 255
  • el 45
  • m 12
  • s 4
  • p 3
  • x 2