Search (58 results, page 1 of 3)

  • type_ss:"el"
  • year_i:[2020 TO 2030}
  1. Lynch, J.D.; Gibson, J.; Han, M.-J.: Analyzing and normalizing type metadata for a large aggregated digital library (2020) 0.14
    
    Abstract
    The Illinois Digital Heritage Hub (IDHH) gathers and enhances metadata from contributing institutions around the state of Illinois and provides this metadata to the Digital Public Library of America (DPLA) for greater access. The IDHH helps contributors shape their metadata to the standards recommended and required by the DPLA in part by analyzing and enhancing aggregated metadata. In late 2018, the IDHH undertook a project to address a particularly problematic field, Type metadata. This paper walks through the project, detailing the process of gathering and analyzing metadata using the DPLA API and OpenRefine, data remediation through XSL transformations in conjunction with local improvements by contributing institutions, and the DPLA ingestion system's quality controls.
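    The gathering step described in this abstract relies on the public DPLA API. A minimal sketch of how such a harvesting request might be assembled (the provider name, key, and paging values below are placeholders of this sketch, not details taken from the paper):

```python
from urllib.parse import urlencode

# Public DPLA items endpoint (v2 of the API).
DPLA_ITEMS_ENDPOINT = "https://api.dp.la/v2/items"

def build_dpla_query(provider: str, api_key: str,
                     page_size: int = 500, page: int = 1) -> str:
    """Assemble a DPLA items request URL that pages through one
    provider's records; the caller fetches it with any HTTP client."""
    params = {
        "provider.name": provider,
        "page_size": page_size,
        "page": page,
        "api_key": api_key,
    }
    return f"{DPLA_ITEMS_ENDPOINT}?{urlencode(params)}"

# Example: first page of records for a hypothetical provider name.
url = build_dpla_query("Illinois Digital Heritage Hub", "MY_KEY")
```

The returned JSON could then be loaded into OpenRefine for the analysis phase the abstract describes.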
  2. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.06
    
    Abstract
    The "Digital Identity Alliance", financed among others by Bill Gates, Microsoft, Accenture and the Rockefeller Foundation, wants to link digital vaccination certificates to a global biometric digital identity that lasts a lifetime.
    Date
    4. 5.2020 17:22:40
  3. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.06
    
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions. This task ran for one year with participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges that were faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.
  4. Gomez, J.; Allen, K.; Matney, M.; Awopetu, T.; Shafer, S.: Experimenting with a machine generated annotations pipeline (2020) 0.05
    
    Abstract
    The UCLA Library reorganized its software developers into focused subteams with one, the Labs Team, dedicated to conducting experiments. In this article we describe our first attempt at conducting a software development experiment, in which we attempted to improve our digital library's search results with metadata from cloud-based image tagging services. We explore the findings and discuss the lessons learned from our first attempt at running an experiment.
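    The enrichment idea in this abstract can be sketched generically: tags returned by an image-tagging service are filtered by confidence and merged into a record's subject metadata before re-indexing. The record shape, tag format, and threshold below are assumptions of this sketch and do not reflect UCLA's actual pipeline:

```python
def merge_tags(record: dict, tags: list[dict], min_confidence: float = 0.8) -> dict:
    """Return a copy of the record with high-confidence machine tags
    appended to its subject field, skipping duplicates case-insensitively."""
    existing = {s.lower() for s in record.get("subject", [])}
    enriched = dict(record)
    enriched["subject"] = list(record.get("subject", []))
    for tag in tags:
        label = tag["label"].strip()
        if tag["confidence"] >= min_confidence and label.lower() not in existing:
            enriched["subject"].append(label)
            existing.add(label.lower())
    return enriched

# Stubbed tagging-service output (hypothetical response format).
tags = [{"label": "Bridge", "confidence": 0.95},
        {"label": "bridge", "confidence": 0.91},
        {"label": "Dog", "confidence": 0.40}]
record = {"title": "View of a river", "subject": ["Rivers"]}
print(merge_tags(record, tags)["subject"])  # → ['Rivers', 'Bridge']
```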
  5. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.05
    
    Abstract
    In the "Knowledge Representation" session at ISI 2021, moderated by Jürgen Reischer (University of Regensburg), three projects were presented in which knowledge representation is implemented with RDF. The domains are pleasingly diverse; the common thread, however, is the intention to improve access to research data: - Japanese Visual Media Graph - Taxonomy of Digital Research Activities in the Humanities - research data in the conceptual model of FRBR
    Date
    22. 5.2021 12:43:05
  6. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the wayback machine as object and instrument of digital research (2023) 0.04
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
  7. Shiri, A.; Kelly, E.J.; Kenfield, A.; Woolcott, L.; Masood, K.; Muglia, C.; Thompson, S.: A faceted conceptualization of digital object reuse in digital repositories (2020) 0.04
    
    Abstract
    In this paper, we provide an introduction to the concept of digital object reuse and its various connotations in the context of current digital libraries, archives, and repositories. We will then propose a faceted categorization of the various types, contexts, and cases for digital object reuse in order to facilitate understanding and communication and to provide a conceptual framework for the assessment of digital object reuse by various cultural heritage and cultural memory organizations.
  8. Koster, L.: Persistent identifiers for heritage objects (2020) 0.03
    
    Abstract
    Persistent identifiers (PIDs) are essential for accessing and referring to library, archive and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects etc. can be used by heritage institutions, but are generally managed by other parties. The first part of this article consists of a general theoretical description of persistent identifiers. First of all, I discuss the questions of what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs are also to be used for linked data. The second part examines current infrastructural practices, and existing PID systems and their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems, a list of requirements for PID systems is presented, which is then used to address a number of practical considerations. This section concludes with a number of recommendations.
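    As a small illustration of the "unambiguous reference" requirement discussed in this abstract, the sketch below normalises common spellings of one widely used PID type, the DOI, to the canonical https://doi.org/ resolver form. The helper name and the set of accepted variants are assumptions of this sketch, not prescriptions from the article:

```python
def normalize_doi(value: str) -> str:
    """Reduce common ways of writing a DOI to one canonical resolver URL,
    so that two references to the same object compare equal."""
    v = value.strip()
    for prefix in ("https://doi.org/", "http://doi.org/",
                   "https://dx.doi.org/", "http://dx.doi.org/", "doi:"):
        if v.lower().startswith(prefix):
            v = v[len(prefix):]
            break
    # Every DOI begins with the "10." directory indicator.
    if not v.startswith("10."):
        raise ValueError(f"not a DOI: {value!r}")
    return "https://doi.org/" + v

print(normalize_doi("doi:10.1000/182"))  # → https://doi.org/10.1000/182
```

The same normalise-before-compare pattern applies to other PID systems (Handles, ARKs, URNs), each with its own canonical form.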
  9. Babcock, K.; Lee, S.; Rajakumar, J.; Wagner, A.: Providing access to digital collections (2020) 0.03
    
    Abstract
    The University of Toronto Libraries is currently reviewing technology to support its Collections U of T service. Collections U of T provides search and browse access to 375 digital collections (and over 203,000 digital objects) at the University of Toronto Libraries. Digital objects typically include special collections material from the university as well as faculty digital collections, all with unique metadata requirements. The service is currently supported by IIIF-enabled Islandora, with one Fedora back end and multiple Drupal sites per parent collection (see attached image). Like many institutions making use of Islandora, UTL is now confronted with Drupal 7 end of life and has begun to investigate a migration path forward. This article will summarise the Collections U of T functional requirements and lessons learned from our current technology stack. It will go on to outline our research to date for alternate solutions. The article will review both emerging micro-service solutions, as well as out-of-the-box platforms, to provide an overview of the digital collection technology landscape in 2019. Note that our research is focused on reviewing technology solutions for providing access to digital collections, as preservation services are offered through other services at the University of Toronto Libraries.
  10. Broughton, V.: Faceted classification in support of diversity : the role of concepts and terms in representing religion (2020) 0.03
    
    Abstract
    The paper examines the development of facet analysis as a methodology and the role it plays in building classifications and other knowledge-organization tools. The use of categorical analysis in areas other than library and information science is also considered. The suitability of the faceted approach for humanities documentation is explored through a critical description of the FATKS (Facet Analytical Theory in Managing Knowledge Structure for Humanities) project carried out at University College London. This research focused on building a conceptual model for the subject of religion together with a relational database and search-and-browse interfaces that would support some degree of automatic classification. The paper concludes with a discussion of the differences between the conceptual model and the vocabulary used to populate it, and how, in the case of religion, the choice of terminology can create an apparent bias in the system.
  11. Trkulja, V.: Klassifikation für interdisziplinäre Forschungsfelder veröffentlicht : You, We & Digital (2021) 0.02
    
    Object
    You, We & Digital
  12. Franke, T.; Zoubir, M.: Technology for the people? : humanity as a compass for the digital transformation (2020) 0.02
    
    Abstract
    How do we define what technology is for humans? One perspective suggests that it is a tool enabling the use of valuable resources such as time, food, health and mobility. One could say that in its cultural history, humanity has developed a wide range of artefacts which enable the effective utilisation of these resources for the fulfilment of physiological, but also psychological, needs. This paper explores how this perspective may be used as an orientation for future technological innovation. Hence, the goal is to provide an accessible discussion of such a psychological perspective on technology development that could pave the way towards a truly human-centred digital transformation.
    Content
    Vgl.: https://www.wirtschaftsdienst.eu/inhalt/jahr/2020/heft/13/beitrag/technology-for-the-people-humanity-as-a-compass-for-the-digital-transformation.html. DOI: 10.1007/s10273-020-2609-3.
  13. Digital-Index 2019 / 2020 : 86 % der Bürger sind online, die Mehrheit der Über-50-Jährigen ist es auch - Digitale Vorreiter erlangen in Deutschland relative Mehrheit - Gering Gebildeten droht der Ausschluss von gesellschaftlicher Teilhabe (2020) 0.02
    
    Abstract
    "Deutschland hat Lust auf Digitalisierung". Die Initiative D 21 stellte gestern im Bundesministerium für Wirtschaft und Energie ihre Studie zum Digital-Index 2019/2020 vor. Zentrale Ergebnisse lauten: 86 Prozent der deutschen Bevölkerung sind online, mobile Endgeräte tragen entscheidend zum Anstieg bei. Der Digitalisierungsgrad steigt auf 58 von 100 Punkten: Digitale Vorreiter stellen erstmals die größte Gruppe, niedrig Gebildete sind in vielen Kompetenzbereichen abgehängt. 36 Prozent finden, dass Schulen ausreichende Digitalisierungsfähigkeiten vermitteln. Die Mehrheit der deutschen Bürger steht Veränderungen durch Digitalisierung positiv gegenüber.
  14. Schwarz, S.: Kompetenzvermittlung digital : how to ... RDA? : Konzeption eines digitalen Lernangebots an der Universitäts- und Stadtbibliothek Köln (2021) 0.02
    
    Abstract
    In the course of the corona pandemic and the digitalisation processes it accelerated, the University and City Library of Cologne is converting its in-person, university-internal training programme on the RDA cataloguing rules into a digital course offering. Contents, structure and sequence of the previous RDA trainings were adapted to demand, updated, reorganised and prepared digitally according to media-didactic standards. The resulting e-learning course "How to ... RDA?" offers a purely digital RDA learning format focused on flexibility, practical relevance and diverse learning needs.
  15. Lieb, W.: Willkommen im Überwachungskapitalismus (2021) 0.02
    
    Abstract
    Digital - medial - (a)social: how Facebook, Twitter, YouTube & Co are changing our democratic culture (part 2). The media intermediaries have become important distribution platforms for virtually all other media and information providers, and thus powerful opinion multipliers. Like the classical media, they have become virtual editorial offices and thus "gatekeepers" of published opinion. Their selection algorithms play no small part in deciding which media content actually reaches how many users, and which ones.
  16. Lieb, W.: Vorsicht vor den asozialen Medien! (2021) 0.02
    
    Abstract
    Digital - medial - (a)social: how Facebook, Twitter, YouTube & Co are changing our democratic culture (part 3). At the beginning of the second week of June this year, the Copyright Service Provider Act (UrhDaG) came into force in Germany, moving in a similar direction to the Australian legislation mentioned in the previous part. In future it is no longer the users who are liable for copyright infringements, but the platforms. Rights holders, i.e. media, artists and film producers, are to receive a fair share of the platforms' profits.
  17. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.02
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  18. Böker, E.; Brettschneider, P.; Axtmann, A.; Mohammadianbisheh, N.: Kooperation im Forschungsdatenmanagement : Dimensionen der Vernetzung im Forschungsdatenmanagement am Beispiel der baden-württembergischen Landesinitiative bw2FDM (2020) 0.02
    0.015078641 = product of:
      0.060314562 = sum of:
        0.060314562 = weight(_text_:digital in 202) [ClassicSimilarity], result of:
          0.060314562 = score(doc=202,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=202)
      0.25 = coord(1/4)
    
    Abstract
    Research data management (RDM) is indispensable for a science that works digitally. Research and infrastructure institutions have responded, supported by funding programmes at the state, federal and EU level, by building new services and structures. A central challenge is to coordinate the various structures, initiatives and projects in such a way that duplicated work is avoided and synergies are created. Using the Baden-Württemberg state RDM initiative bw2FDM as an example, the article shows how networking among the various actors at the local, state and supra-regional level can succeed.
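The TF-IDF explain trees accompanying each hit above can be reproduced arithmetically; a minimal sketch, assuming classic-Lucene scoring (score = coord × queryWeight × fieldWeight), using the constants printed for entry 18 (doc=202):

```python
import math

# Constants read directly from the explain tree for entry 18 (doc=202).
idf = 3.944552          # idf(docFreq=2326, maxDocs=44218) ~ 1 + ln(maxDocs/(docFreq+1))
query_norm = 0.050121464
field_norm = 0.0546875
freq = 2.0              # termFreq of "digital" in the field
coord = 0.25            # 1 of 4 query clauses matched

tf = math.sqrt(freq)                  # 1.4142135
query_weight = idf * query_norm       # 0.19770671
field_weight = tf * idf * field_norm  # 0.30507088
score = query_weight * field_weight * coord  # 0.015078641
```

Multiplying the leaf values back together recovers the top-level score for the entry, which is how such explain output can be sanity-checked.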
  19. Lieb, W.: Krise der Medien, Krise der Demokratie? (2021) 0.02
    0.015078641 = product of:
      0.060314562 = sum of:
        0.060314562 = weight(_text_:digital in 325) [ClassicSimilarity], result of:
          0.060314562 = score(doc=325,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 325, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=325)
      0.25 = coord(1/4)
    
    Abstract
    Digital - medial - (a)social: how Facebook, Twitter, Youtube & Co are changing our democratic culture (part 1). Why is the topic of media actually so important? Quite simply: because media substantially shape our knowledge of and our opinions about the world, and because the broadest possible exchange of information and viewpoints in the media is a precondition for an open and democratic process of opinion formation. The free exchange of society's diverse opinions is in turn a prerequisite for democratic political will-formation, and it gives political decisions their democratic legitimacy.
  20. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.01
    0.014923984 = product of:
      0.059695937 = sum of:
        0.059695937 = weight(_text_:digital in 851) [ClassicSimilarity], result of:
          0.059695937 = score(doc=851,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30194187 = fieldWeight in 851, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=851)
      0.25 = coord(1/4)
    
    Abstract
    Literature review articles are essential to summarize the related work in the selected field. However, covering all related studies takes too much time and effort. This study questions how Artificial Intelligence can be used in this process. We used ChatGPT to create a literature review article to show the current state of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from Google Scholar search results for the keyword "Digital twin in healthcare" and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is the first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT. 1. Introduction OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. 
OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. OpenAI ChatGPT is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts. It can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent, natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT alone may not change academic writing practices; however, it can provide information and guidance on ways to improve academic writing skills.
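The paraphrasing workflow the abstract describes (sending each retrieved abstract to ChatGPT) could be sketched against the OpenAI chat completions API roughly as follows; the model name, prompt wording, and helper names are illustrative assumptions, not taken from the paper, which worked through the ChatGPT interface:

```python
def build_paraphrase_prompt(abstract: str) -> str:
    """Wrap one source abstract in a paraphrasing instruction (wording is hypothetical)."""
    return ("Paraphrase the following abstract for a literature review, "
            "preserving all technical claims:\n\n" + abstract)

def paraphrase(abstract: str, client) -> str:
    """Send one abstract to the chat completions endpoint.

    `client` is an openai.OpenAI instance; calling this requires an API key
    and incurs usage costs.
    """
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative choice of model
        messages=[{"role": "user", "content": build_paraphrase_prompt(abstract)}],
    )
    return response.choices[0].message.content
```

As the study notes, output produced this way should still be run through a plagiarism checker such as iThenticate, since the paraphrases can closely match their sources.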

Languages

  • d 34
  • e 24

Types

  • a 48
  • p 3