Search (125 results, page 1 of 7)

  • year_i:[2020 TO 2030}
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.14
    0.13764066 = sum of:
      0.077960454 = product of:
        0.23388135 = sum of:
          0.23388135 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
            0.23388135 = score(doc=1000,freq=2.0), product of:
              0.49937475 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.058902346 = queryNorm
              0.46834838 = fieldWeight in 1000, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1000)
        0.33333334 = coord(1/3)
      0.059680205 = product of:
        0.11936041 = sum of:
          0.11936041 = weight(_text_:dokumente in 1000) [ClassicSimilarity], result of:
            0.11936041 = score(doc=1000,freq=4.0), product of:
              0.2999863 = queryWeight, product of:
                5.092943 = idf(docFreq=737, maxDocs=44218)
                0.058902346 = queryNorm
              0.3978862 = fieldWeight in 1000, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.092943 = idf(docFreq=737, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1000)
        0.5 = coord(1/2)
    
    Abstract
    The thesis presents the construction of a thematically ordered thesaurus based on the subject headings of the Gemeinsame Normdatei (GND), making use of the DDC notations they contain. The DDC subject groups of the Deutsche Nationalbibliothek form the top level of order of this thesaurus. The thesaurus is constructed in a rule-based way, applying Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scholarly publications by means of a computational-linguistic extractor that processes digital full texts. The extractor determines matching keywords by comparing character strings against the terms in the thesaurus, ranks the hits by their relevance in the text, and returns the assigned subject groups in ranked order. The underlying assumption is that the subject group being sought is returned among the top ranks. The performance of the approach is validated in a three-stage procedure. First, a gold standard is compiled from documents retrievable in the DNB online catalogue, based on metadata and on the findings of a brief inspection of the documents. The documents are spread over 14 of the subject groups, with a batch size of 50 documents each. All documents are indexed with the extractor and the categorization results are documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
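    Note on the relevance scores: the indented breakdown shown with each hit is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch in plain Python (not part of the bibliographic record; the constants are simply copied from the "_text_:3a" clause of hit 1 above), the listed figures can be reproduced from the factors shown:

      import math

      # Constants copied from the explain output of hit 1 (term "3a" in field "_text_", doc 1000).
      freq = 2.0                 # termFreq = 2.0
      idf = 8.478011             # idf(docFreq=24, maxDocs=44218), approx. 1 + ln(maxDocs / (docFreq + 1))
      query_norm = 0.058902346   # queryNorm
      field_norm = 0.0390625     # fieldNorm(doc=1000)
      coord = 1.0 / 3.0          # coord(1/3): one of three query clauses matched

      tf = math.sqrt(freq)                        # 1.4142135 = tf(freq=2.0)
      query_weight = idf * query_norm             # 0.49937475 = queryWeight
      field_weight = tf * idf * field_norm        # 0.46834838 = fieldWeight in 1000
      clause_score = query_weight * field_weight  # 0.23388135 = weight(_text_:3a in 1000)
      print(clause_score * coord)                 # 0.077960454 = contribution after coord(1/3)

    The same arithmetic, with the respective tf, idf and fieldNorm factors, accounts for the "dokumente", "themes" and "22" clauses in the other hits.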
  2. Wiesenmüller, H.: Verbale Erschließung in Katalogen und Discovery-Systemen : Überlegungen zur Qualität (2021) 0.08
    0.07963134 = product of:
      0.15926269 = sum of:
        0.15926269 = sum of:
          0.11936041 = weight(_text_:dokumente in 374) [ClassicSimilarity], result of:
            0.11936041 = score(doc=374,freq=4.0), product of:
              0.2999863 = queryWeight, product of:
                5.092943 = idf(docFreq=737, maxDocs=44218)
                0.058902346 = queryNorm
              0.3978862 = fieldWeight in 374, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.092943 = idf(docFreq=737, maxDocs=44218)
                0.0390625 = fieldNorm(doc=374)
          0.039902277 = weight(_text_:22 in 374) [ClassicSimilarity], result of:
            0.039902277 = score(doc=374,freq=2.0), product of:
              0.20626599 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058902346 = queryNorm
              0.19345059 = fieldWeight in 374, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=374)
      0.5 = coord(1/2)
    
    Abstract
    When dealing with subject indexing, two dimensions must first be distinguished: the knowledge organization systems themselves (e.g. authority files, thesauri, subject heading languages, classifications and ontologies) and the metadata for documents that have been indexed with these knowledge organization systems. The two interact: the knowledge organization systems are the tools of indexing work and form the basis for creating concrete indexing metadata, while the practical application of the knowledge organization systems in indexing is in turn the basis for their maintenance and further development. At the same time, knowledge organization systems also have a value of their own, independent of the indexing metadata for individual documents, in that they model particular areas of world or domain knowledge. If one wants to make statements about the quality of subject indexing, it is not enough to look at the input, i.e. the knowledge organization systems and the metadata generated with them. One must also look at the output, i.e. what the search tools make of this input and what therefore actually reaches the users. This article offers reflections on the quality of search tools in this area, in a sense continuing and deepening the remarks made on this topic in the position paper of the expert team RDA-Anwendungsprofil für die verbale Inhaltserschließung (ET RAVI). The focus is on verbal indexing according to the Regeln für die Schlagwortkatalogisierung (RSWK) as it manifests itself in library catalogues, regardless of whether these are traditional catalogues or resource discovery systems (RDS).
    Date
    24. 9.2021 12:22:02
  3. Thelwall, M.; Thelwall, S.: ¬A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.06
    0.06478201 = sum of:
      0.044830866 = product of:
        0.13449259 = sum of:
          0.13449259 = weight(_text_:themes in 178) [ClassicSimilarity], result of:
            0.13449259 = score(doc=178,freq=2.0), product of:
              0.37868488 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.058902346 = queryNorm
              0.35515702 = fieldWeight in 178, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0390625 = fieldNorm(doc=178)
        0.33333334 = coord(1/3)
      0.019951139 = product of:
        0.039902277 = sum of:
          0.039902277 = weight(_text_:22 in 178) [ClassicSimilarity], result of:
            0.039902277 = score(doc=178,freq=2.0), product of:
              0.20626599 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058902346 = queryNorm
              0.19345059 = fieldWeight in 178, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=178)
        0.5 = coord(1/2)
    
    Abstract
    Purpose Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19. Design/methodology/approach A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020. Findings The main themes identified for the 87 qualifying tweets accounting for 14 million retweets were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news. Research limitations/implications Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed. Practical implications Public health campaigns in future may consider encouraging grass roots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues. Originality/value This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
  4. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.06
    0.06478201 = sum of:
      0.044830866 = product of:
        0.13449259 = sum of:
          0.13449259 = weight(_text_:themes in 950) [ClassicSimilarity], result of:
            0.13449259 = score(doc=950,freq=2.0), product of:
              0.37868488 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.058902346 = queryNorm
              0.35515702 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
        0.33333334 = coord(1/3)
      0.019951139 = product of:
        0.039902277 = sum of:
          0.039902277 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
            0.039902277 = score(doc=950,freq=2.0), product of:
              0.20626599 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058902346 = queryNorm
              0.19345059 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
        0.5 = coord(1/2)
    
    Abstract
    Purpose With the shift to an information-based society and to the de-centralisation of information, information overload has attracted a growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while there have been many proposed definitions, there is no consensus. The goal of this work was to define the concept of "information overload". In order to do so, a concept analysis using Rodgers' approach was performed. Design/methodology/approach A concept analysis using Rodgers' approach based on a corpus of documents published between 2010 and September 2020 was conducted. One surrogate for "information overload", which is "cognitive overload" was identified. The corpus of documents consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science, and were retrieved from three databases: Association for Computing Machinery (ACM) Digital Library, SCOPUS and Library and Information Science Abstracts (LISA). Findings The themes identified from the authors' concept analysis allowed us to extract the triggers, manifestations and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals and the information environment. In terms of manifestations, they found that information overload manifests itself both emotionally and cognitively. The consequences of information overload were both internal and external. These findings allowed them to provide a definition of information overload. Originality/value Through the authors' concept analysis, they were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
  5. Graf, K.: DNB, die "schlechteste Nationalbibliothek der Galaxis" (Graf), laesst einmal mehr URN-Links ins Leere laufen (2023) 0.05
    0.050640333 = product of:
      0.10128067 = sum of:
        0.10128067 = product of:
          0.20256133 = sum of:
            0.20256133 = weight(_text_:dokumente in 978) [ClassicSimilarity], result of:
              0.20256133 = score(doc=978,freq=2.0), product of:
                0.2999863 = queryWeight, product of:
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.058902346 = queryNorm
                0.6752353 = fieldWeight in 978, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.09375 = fieldNorm(doc=978)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Elektronische Dokumente
  6. Giesselbach, S.; Estler-Ziegler, T.: Dokumente schneller analysieren mit Künstlicher Intelligenz (2021) 0.05
    0.047181346 = product of:
      0.09436269 = sum of:
        0.09436269 = product of:
          0.18872538 = sum of:
            0.18872538 = weight(_text_:dokumente in 128) [ClassicSimilarity], result of:
              0.18872538 = score(doc=128,freq=10.0), product of:
                0.2999863 = queryWeight, product of:
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.058902346 = queryNorm
                0.6291133 = fieldWeight in 128, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=128)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Artificial intelligence (AI) and natural language understanding (NLU) are changing many aspects of our everyday lives and the way we work. NLU gained particular prominence through voice assistants such as Siri, Alexa and Google Now. NLU offers companies and institutions the potential to make processes more efficient and to derive added value from textual content. NLU solutions are able to index complex, unstructured documents by content. For semantic text analysis, the NLU team at IAIS has developed language models that are trained with deep learning methods. The NLU suite analyses documents, extracts key data and, if required, even produces a structured summary. With these results, but also via the content of the documents themselves, documents can be compared or texts with similar information can be found. AI-based language models are clearly superior to classical keyword indexing, because they do not only find texts containing predefined keywords but intelligently search for terms that appear in a similar context or are used as synonyms. The talk provides a classification of the terms "Künstliche Intelligenz" and "Natural Language Understanding" and outlines possibilities, limits, current research directions and methods. Using practical examples, it then demonstrates how NLU can be used for automated receipt processing, for cataloguing large data sets such as news and patents, and for the automated thematic grouping of social media posts and publications.
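    The abstract above contrasts semantic similarity search with exact keyword matching. A minimal illustrative sketch of that idea in Python, using the open sentence-transformers library and a public multilingual model as stand-ins (the IAIS models described in the talk are not specified in this record), could rank documents by embedding similarity to a query:

      # Illustrative only: rank candidate documents by embedding similarity to a query,
      # so that related wording counts even when no predefined keyword matches exactly.
      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed stand-in model
      docs = [
          "Der Vertrag wurde fristgerecht gekündigt.",
          "Die Rechnung ist noch offen.",
          "Das Abonnement wurde zum Monatsende beendet.",
      ]
      query = "Kündigung eines Abos"

      doc_embeddings = model.encode(docs, convert_to_tensor=True)
      query_embedding = model.encode(query, convert_to_tensor=True)
      scores = util.cos_sim(query_embedding, doc_embeddings)[0]
      for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
          print(f"{score:.3f}  {doc}")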
  7. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.05
    0.046776272 = product of:
      0.093552545 = sum of:
        0.093552545 = product of:
          0.28065762 = sum of:
            0.28065762 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.28065762 = score(doc=862,freq=2.0), product of:
                0.49937475 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.058902346 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  8. Sack, H.: Hybride Künstliche Intelligenz in der automatisierten Inhaltserschließung (2021) 0.04
    0.043855816 = product of:
      0.08771163 = sum of:
        0.08771163 = product of:
          0.17542326 = sum of:
            0.17542326 = weight(_text_:dokumente in 372) [ClassicSimilarity], result of:
              0.17542326 = score(doc=372,freq=6.0), product of:
                0.2999863 = queryWeight, product of:
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.058902346 = queryNorm
                0.5847709 = fieldWeight in 372, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.046875 = fieldNorm(doc=372)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Efficient (online) access to library and archive materials requires subject indexing of these documents that is of sufficient quality. Accurate keyword assignment and categorization of these unstructured documents enable structured access in both the analogue and the digital world. In addition, a complete transcription of the documents extends access via full-text search. Given the spectacular successes recently achieved by artificial intelligence, it seems natural to conclude that the problem of automated subject indexing for libraries and archives can also be regarded as more or less solved. However, the successes, often achieved only in thematically narrow subareas, cannot always be generalized or transferred to a new context without difficulty. The aim of this contribution is to discuss the current state of the art in automated subject indexing on the basis of selected examples, as well as possible advances and prognoses based on current developments in machine learning and artificial intelligence, including their critique.
  9. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.04
    0.038980227 = product of:
      0.077960454 = sum of:
        0.077960454 = product of:
          0.23388135 = sum of:
            0.23388135 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.23388135 = score(doc=5669,freq=2.0), product of:
                0.49937475 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.058902346 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  10. Koya, K.; Chowdhury, G.: Cultural heritage information practices and iSchools education for achieving sustainable development (2020) 0.04
    0.03882467 = product of:
      0.07764934 = sum of:
        0.07764934 = product of:
          0.23294802 = sum of:
            0.23294802 = weight(_text_:themes in 5877) [ClassicSimilarity], result of:
              0.23294802 = score(doc=5877,freq=6.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.61515003 = fieldWeight in 5877, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5877)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Since 2015, the United Nations Educational, Scientific and Cultural Organization (UNESCO) began the process of inculcating culture as part of the United Nations' (UN) post-2015 Sustainable (former Millennium) Development Goals, which member countries agreed to achieve by 2030. By conducting a thematic analysis of the 25 UN commissioned reports and policy documents, this research identifies 14 broad cultural heritage information themes that need to be practiced in order to achieve cultural sustainability, of which information platforms, information sharing, information broadcast, information quality, information usage training, information access, information collection, and contribution appear to be the significant themes. An investigation of education on cultural heritage informatics and digital humanities at iSchools (www.ischools.org) using a gap analysis framework demonstrates the core information science skills required for cultural heritage education. The research demonstrates that: (i) a thematic analysis of cultural heritage policy documents can be used to explore the key themes for cultural informatics education and research that can lead to sustainable development; and (ii) cultural heritage information education should cover a series of skills that can be categorized in five key areas, viz., information, technology, leadership, application, and people and user skills.
  11. Almeida, P. de; Gnoli, C.: Fiction in a phenomenon-based classification (2021) 0.04
    0.03804025 = product of:
      0.0760805 = sum of:
        0.0760805 = product of:
          0.2282415 = sum of:
            0.2282415 = weight(_text_:themes in 712) [ClassicSimilarity], result of:
              0.2282415 = score(doc=712,freq=4.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.60272145 = fieldWeight in 712, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.046875 = fieldNorm(doc=712)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    In traditional classification, fictional works are indexed only by their form, genre, and language, while their subject content is believed to be irrelevant. However, recent research suggests that this may not be the best approach. We tested indexing of a small sample of selected fictional works by Integrative Levels Classification (ILC2), a freely faceted system based on phenomena instead of disciplines and considered the structure of the resulting classmarks. Issues in the process of subject analysis, such as selection of relevant vs. non-relevant themes and citation order of relevant ones, are identified and discussed. Some phenomena that are covered in scholarly literature can also be identified as relevant themes in fictional literature and expressed in classmarks. This can allow for hybrid search and retrieval systems covering both fiction and nonfiction, which will result in better leveraging of the knowledge contained in fictional works.
  12. Oliver, C.: Leveraging KOS to extend our reach with automated processes (2021) 0.04
    0.035864696 = product of:
      0.07172939 = sum of:
        0.07172939 = product of:
          0.21518816 = sum of:
            0.21518816 = weight(_text_:themes in 722) [ClassicSimilarity], result of:
              0.21518816 = score(doc=722,freq=2.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.56825125 = fieldWeight in 722, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.0625 = fieldNorm(doc=722)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    This article provides a conclusion to the special issue on Artificial Intelligence (AI) and Automated Processes for Subject Access. The authors who contributed to this special issue have provoked interesting questions as well as bringing attention to important issues. This concluding article looks at common themes and highlights some of the questions raised.
  13. Mering, M.: Implementation of faceted vocabularies : an introduction (2023) 0.04
    0.035864696 = product of:
      0.07172939 = sum of:
        0.07172939 = product of:
          0.21518816 = sum of:
            0.21518816 = weight(_text_:themes in 1162) [ClassicSimilarity], result of:
              0.21518816 = score(doc=1162,freq=2.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.56825125 = fieldWeight in 1162, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1162)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    This special issue on the "Implementation of Faceted Vocabularies: An Introduction" focuses on strategies and methods for implementing faceted vocabularies in MARC and non-MARC environments in library related settings. The following introduction provides a brief description of each article in the issue. The articles are grouped around three themes: Introduction to Faceted Vocabularies, Faceted Application of Subject Terminology (FAST), and Genre Terms and Other Faceted Vocabularies.
  14. Neudecker, C.: Zur Kuratierung digitalisierter Dokumente mit Künstlicher Intelligenz : das Qurator-Projekt (2020) 0.04
    0.035808124 = product of:
      0.07161625 = sum of:
        0.07161625 = product of:
          0.1432325 = sum of:
            0.1432325 = weight(_text_:dokumente in 47) [ClassicSimilarity], result of:
              0.1432325 = score(doc=47,freq=4.0), product of:
                0.2999863 = queryWeight, product of:
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.058902346 = queryNorm
                0.47746342 = fieldWeight in 47, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.046875 = fieldNorm(doc=47)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Elektronische Dokumente
  15. Haggar, E.: Fighting fake news : exploring George Orwell's relationship to information literacy (2020) 0.03
    0.03170021 = product of:
      0.06340042 = sum of:
        0.06340042 = product of:
          0.19020125 = sum of:
            0.19020125 = weight(_text_:themes in 5978) [ClassicSimilarity], result of:
              0.19020125 = score(doc=5978,freq=4.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.5022679 = fieldWeight in 5978, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5978)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The purpose of this paper is to analyse George Orwell's diaries through an information literacy lens. Orwell is well known for his dedication to freedom of speech and objective truth, and his novel Nineteen Eighty-Four is often used as a lens through which to view the fake news phenomenon. This paper will examine Orwell's diaries in relation to UNESCO's Five Laws of Media and Information Literacy to examine how information literacy concepts can be traced in historical documents. Design/methodology/approach This paper will use a content analysis method to explore Orwell's relationship to information literacy. Two of Orwell's political diaries from the period 1940-42 were coded for key themes related to the ways in which Orwell discusses and evaluates information and news. These themes were then compared to UNESCO Five Laws of Media and Information Literacy. Textual analysis software NVivo 12 was used to perform keyword searches and word frequency queries in the digitised diaries. Findings The findings show that while Orwell's diaries and the Five Laws did not share terminology, they did share ideas on bias and access to information. They also extend the history of information literacy research and practice by illustrating how concerns about the need to evaluate information sources are represented within historical literature. Originality/value This paper combines historical research with textual analysis to bring a unique historical perspective to information literacy, demonstrating that "fake news" is not a recent phenomenon, and that the tools to fight it may also lie in historical research.
  16. Paris, B.; Reynolds, R.; McGowan, C.: Sins of omission : critical informatics perspectives on privacy in e-learning systems in higher education (2022) 0.03
    0.031381607 = product of:
      0.062763214 = sum of:
        0.062763214 = product of:
          0.18828964 = sum of:
            0.18828964 = weight(_text_:themes in 548) [ClassicSimilarity], result of:
              0.18828964 = score(doc=548,freq=2.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.49721986 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=548)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The COVID-19 pandemic emptied classrooms across the globe and pushed administrators, students, educators, and parents into an uneasy alliance with online learning systems already committing serious privacy and intellectual property violations, and actively promoted the precarity of educational labor. In this article, we use methods and theories derived from critical informatics to examine Rutgers University's deployment of seven online learning platforms commonly used in higher education to uncover five themes that result from the deployment of corporate learning platforms. We conclude by suggesting ways ahead to meaningfully address the structural power and vulnerabilities extended by higher education's use of these platforms.
  17. Neudecker, C.; Zaczynska, K.; Baierer, K.; Rehm, G.; Gerber, M.; Moreno Schneider, J.: Methoden und Metriken zur Messung von OCR-Qualität für die Kuratierung von Daten und Metadaten (2021) 0.03
    0.029840102 = product of:
      0.059680205 = sum of:
        0.059680205 = product of:
          0.11936041 = sum of:
            0.11936041 = weight(_text_:dokumente in 369) [ClassicSimilarity], result of:
              0.11936041 = score(doc=369,freq=4.0), product of:
                0.2999863 = queryWeight, product of:
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.058902346 = queryNorm
                0.3978862 = fieldWeight in 369, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=369)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The systematic digitization of holdings in libraries and archives has led to a rapid increase in the availability of digital images of historical documents. This is first of all for preservation reasons: digitized documents can be reproduced and preserved in high quality practically at will. In addition, a digitized collection can achieve a far greater reach than would ever be possible with the physical holdings alone. With the growing availability of digital library and archive holdings, however, the demands on their presentation and reusability also rise. Besides searching on the basis of bibliographic metadata, users also expect to be able to search the contents of documents. In scholarship, great expectations of new possibilities for research are attached to machine-based, quantitative analyses of textual material. In addition to image digitization, capture of the full text is therefore increasingly demanded as well. This can be done either manually by transcription or automatically with methods of optical character recognition (OCR) (Engl et al. 2020). Manual capture is generally credited with a higher quality of character accuracy. In mass digitization, however, automatic OCR methods are usually chosen for cost reasons.
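    The article above concerns methods and metrics for measuring OCR quality; its specific metrics are not reproduced in this record. As a generic illustration of the kind of measure involved, a character error rate (CER), i.e. the edit distance between OCR output and a ground-truth transcription divided by the reference length, can be sketched in a few lines of Python (example strings invented):

      def levenshtein(a: str, b: str) -> int:
          # Classic dynamic-programming edit distance (insertions, deletions, substitutions).
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, start=1):
              curr = [i]
              for j, cb in enumerate(b, start=1):
                  curr.append(min(prev[j] + 1,                # deletion
                                  curr[j - 1] + 1,            # insertion
                                  prev[j - 1] + (ca != cb)))  # substitution
              prev = curr
          return prev[-1]

      def character_error_rate(reference: str, ocr_output: str) -> float:
          # CER: edits needed to turn the OCR output into the reference, per reference character.
          return levenshtein(reference, ocr_output) / max(len(reference), 1)

      # Invented example: "m" misread as "rn" costs two edits over nine reference characters.
      print(character_error_rate("Dokumente", "Dokurnente"))  # approx. 0.222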
  18. Lepsky, K.: Automatisches Indexieren (2023) 0.03
    0.029540192 = product of:
      0.059080385 = sum of:
        0.059080385 = product of:
          0.11816077 = sum of:
            0.11816077 = weight(_text_:dokumente in 781) [ClassicSimilarity], result of:
              0.11816077 = score(doc=781,freq=2.0), product of:
                0.2999863 = queryWeight, product of:
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.058902346 = queryNorm
                0.39388722 = fieldWeight in 781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=781)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Indexing is the assignment of content-describing expressions (index terms, indexates, indexing attributes) to documents. The assigned index terms are intended to enable targeted retrieval of the documents. Index terms can be content-describing features such as notations, descriptors, controlled or free keywords; they can also be plain keywords taken from the text of the document itself. Indexing can be done intellectually, computer-assisted or automatically. Computer-assisted indexing methods combine intellectual indexing with automatic preparatory work. In automatic indexing, the index terms are determined automatically from the document text and assigned to the document. For processing the character strings in the document, automatic indexing uses linguistic and statistical methods.
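    The abstract above outlines the statistical side of automatic indexing. A deliberately simplified Python sketch (toy corpus, stopword list and tokenizer invented for illustration; the linguistic processing mentioned in the abstract, such as lemmatization or decompounding, is omitted) could weight candidate index terms by term frequency and inverse document frequency:

      import math
      import re
      from collections import Counter

      STOPWORDS = {"und", "der", "die", "das", "dem", "den", "aus", "ist", "sind", "über", "mit", "zu"}

      def tokenize(text):
          # Crude character-string processing; real systems add linguistic steps on top of this.
          return [w for w in re.findall(r"[a-zäöüß]+", text.lower()) if w not in STOPWORDS]

      def index_terms(doc, corpus, k=5):
          # Statistical weighting: term frequency in the document times a smoothed
          # inverse document frequency over the corpus.
          tf = Counter(tokenize(doc))
          df = Counter(term for other in corpus for term in set(tokenize(other)))
          n = len(corpus)
          weights = {t: f * math.log((n + 1) / (df[t] + 1)) for t, f in tf.items()}
          return [t for t, _ in sorted(weights.items(), key=lambda x: -x[1])[:k]]

      corpus = [
          "Automatische Indexierung ermittelt Indexterme aus dem Dokumenttext und ordnet die Indexterme dem Dokument zu.",
          "Thesauri und Klassifikationen sind Werkzeuge der Inhaltserschließung.",
          "Volltextsuche ergänzt die Suche über bibliothekarische Metadaten.",
      ]
      print(index_terms(corpus[0], corpus))  # e.g. ['indexterme', 'automatische', ...]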
  19. ¬Der Student aus dem Computer (2023) 0.03
    0.027931591 = product of:
      0.055863183 = sum of:
        0.055863183 = product of:
          0.111726366 = sum of:
            0.111726366 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.111726366 = score(doc=1079,freq=2.0), product of:
                0.20626599 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.058902346 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  20. Thelwall, M.; Foster, D.: Male or female gender-polarized YouTube videos are less viewed (2021) 0.03
    0.026898522 = product of:
      0.053797044 = sum of:
        0.053797044 = product of:
          0.16139112 = sum of:
            0.16139112 = weight(_text_:themes in 414) [ClassicSimilarity], result of:
              0.16139112 = score(doc=414,freq=2.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.42618844 = fieldWeight in 414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.046875 = fieldNorm(doc=414)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    As one of the world's most visited websites, YouTube is potentially influential for learning gendered attitudes. Nevertheless, despite evidence of gender influences within the site for some topics, the extent to which YouTube reflects or promotes male/female or other gender divides is unknown. This article analyses 10,211 YouTube videos published in 12 months from 2014 to 2015 using commenter-portrayed genders (inferred from usernames) and view counts from the end of 2019. Nonbinary genders are omitted for methodological reasons. Although there were highly male and female topics or themes (e.g., vehicles or beauty) and male or female gendering is the norm, videos with topics attracting both males and females tended to have more viewers (after approximately 5 years) than videos in male or female gendered topics. Similarly, within each topic, videos with gender balanced sets of commenters tend to attract more viewers. Thus, YouTube does not seem to be driving male-female gender differences.

Languages

  • e 85
  • d 40

Types

  • a 117
  • el 25
  • m 3
  • p 2
  • x 1