Search (41 results, page 1 of 3)

  • type_ss:"el"
  • year_i:[2020 TO 2030}
  1. Hausser, R.: Language and nonlanguage cognition (2021) 0.01
    0.008975883 = product of:
      0.071807064 = sum of:
        0.071807064 = weight(_text_:case in 255) [ClassicSimilarity], result of:
          0.071807064 = score(doc=255,freq=4.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.41216385 = fieldWeight in 255, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=255)
      0.125 = coord(1/8)
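The explain tree above is standard Lucene ClassicSimilarity arithmetic (tf-idf plus a coordination factor), and the same pattern recurs in every score tree in this list. As a cross-check, here is a minimal Python sketch that reproduces the numbers in this tree; the tf and idf formulas are Lucene's ClassicSimilarity, and every input constant is copied from the tree itself:

```python
import math

# Inputs copied from the explain tree for weight(_text_:case in 255).
freq, doc_freq, max_docs = 4.0, 1480, 44218
query_norm, field_norm = 0.03962768, 0.046875
coord = 1 / 8                                  # 1 of 8 query clauses matched

tf = math.sqrt(freq)                           # 2.0
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 4.3964143
query_weight = idf * query_norm                # 0.1742197
field_weight = tf * idf * field_norm           # 0.41216385
score = query_weight * field_weight            # 0.071807064
print(score * coord)                           # ~0.008975883, the ranking score
```

Only the matched term (hence docFreq), its frequency, and the fieldNorm vary from record to record below.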
    
    Abstract
    A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language-data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage-data as input. In either case, the output is a content which is stored in the agent's onboard short term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
  2. Aleksander, K.: Antrag zur Aufnahme des Sachbegriffs "Gender" in die Gemeinsame Normdatei (GND) der Deutschen Nationalbibliothek (DNB) : Inhaltserschließung in Bibliotheken und alternative Zukünfte (2022) 0.01
    0.0073941024 = product of:
      0.05915282 = sum of:
        0.05915282 = weight(_text_:studies in 676) [ClassicSimilarity], result of:
          0.05915282 = score(doc=676,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.37408823 = fieldWeight in 676, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=676)
      0.125 = coord(1/8)
    
    Abstract
    The quality requirements for authority files as knowledge organization systems include, besides the unambiguous definition of the conceptual scope of individual concepts, consistent terminological control, and enrichment with synonyms, also the currency of the terminology and its closeness to current scholarship. The concept "Geschlecht" (sex/gender) already played a central role in the women's studies that developed in West Germany from the 1970s onward, and it gave the Geschlechterforschung (gender research) that emerged there in the 1990s and 2000s its name. Since then the term "Gender", adopted from Anglo-American research, has likewise become a central category of German Geschlechterforschung/gender studies. In the GND, however, it does not yet exist as a subject heading in its own right. Since (not only) Geschlechterforschung/gender studies needs the subject heading "Gender" to index its literature, we propose its addition to the GND.
  3. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.01
    0.0065561733 = product of:
      0.052449387 = sum of:
        0.052449387 = product of:
          0.15734816 = sum of:
            0.15734816 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.15734816 = score(doc=5669,freq=2.0), product of:
                0.3359639 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03962768 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  4. Broughton, V.: Faceted classification in support of diversity : the role of concepts and terms in representing religion (2020) 0.01
    0.0063469075 = product of:
      0.05077526 = sum of:
        0.05077526 = weight(_text_:case in 5992) [ClassicSimilarity], result of:
          0.05077526 = score(doc=5992,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 5992, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=5992)
      0.125 = coord(1/8)
    
    Abstract
    The paper examines the development of facet analysis as a methodology and the role it plays in building classifications and other knowledge-organization tools. The use of categorical analysis in areas other than library and information science is also considered. The suitability of the faceted approach for humanities documentation is explored through a critical description of the FATKS (Facet Analytical Theory in Managing Knowledge Structure for Humanities) project carried out at University College London. This research focused on building a conceptual model for the subject of religion together with a relational database and search-and-browse interfaces that would support some degree of automatic classification. The paper concludes with a discussion of the differences between the conceptual model and the vocabulary used to populate it, and how, in the case of religion, the choice of terminology can create an apparent bias in the system.
  5. Baines, D.; Elliott, R.J.: Defining misinformation, disinformation and malinformation : an urgent need for clarity during the COVID-19 infodemic (2020) 0.01
    0.0061617517 = product of:
      0.049294014 = sum of:
        0.049294014 = weight(_text_:studies in 5853) [ClassicSimilarity], result of:
          0.049294014 = score(doc=5853,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.3117402 = fieldWeight in 5853, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5853)
      0.125 = coord(1/8)
    
    Abstract
    COVID-19 is an unprecedented global health crisis that will have immeasurable consequences for our economic and social well-being. Tedros Adhanom Ghebreyesus, the director general of the World Health Organization, stated "We're not just fighting an epidemic; we're fighting an infodemic". Currently, there is no robust scientific basis to the existing definitions of false information used in the fight against the COVID-19 infodemic. The purpose of this paper is to demonstrate how the use of a novel taxonomy and related model (based upon a conceptual framework that synthesizes insights from information science, philosophy, media studies and politics) can produce new scientific definitions of mis-, dis- and malinformation. We undertake our analysis from the viewpoint of information systems research. The conceptual approach to defining mis-, dis- and malinformation can be applied to a wide range of empirical examples and, if applied properly, may prove useful in fighting the COVID-19 infodemic. In sum, our research suggests that: (i) analyzing all types of information is important in the battle against the COVID-19 infodemic; (ii) a scientific approach is required so that different methods are not used by different studies; (iii) "misinformation", as an umbrella term, can be confusing and should be dropped from use; (iv) clear, scientific definitions of information types will be needed going forward; (v) malinformation is an overlooked phenomenon involving reconfigurations of the truth.
  6. Tramullas, J.: Temas y métodos de investigación en Ciencia de la Información, 2000-2019 : Revisión bibliográfica (2020) 0.01
    0.006099823 = product of:
      0.048798583 = sum of:
        0.048798583 = weight(_text_:studies in 5929) [ClassicSimilarity], result of:
          0.048798583 = score(doc=5929,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.30860704 = fieldWeight in 5929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5929)
      0.125 = coord(1/8)
    
    Abstract
    A systematic literature review is carried out, detailing the research topics and the methods and techniques used in information science in studies published between 2000 and 2019. The results obtained allow us to affirm that there is no consensus on the core topics of information science, as these evolve and change dynamically in relation to other disciplines, and with the dominant social and cultural contexts. With regard to the research methods and techniques, it can be stated that they have mostly been adopted from social sciences, with the addition of numerical methods, especially in the fields of bibliometric and scientometric research.
  7. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.01
    0.0060372595 = product of:
      0.048298076 = sum of:
        0.048298076 = weight(_text_:studies in 851) [ClassicSimilarity], result of:
          0.048298076 = score(doc=851,freq=6.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.30544177 = fieldWeight in 851, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=851)
      0.125 = coord(1/8)
    
    Abstract
    Literature review articles are essential to summarize the related work in the selected field. However, covering all related studies takes too much time and effort. This study questions how Artificial Intelligence can be used in this process. We used ChatGPT to create a literature review article to show the stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of the last three years (2020, 2021 and 2022) papers were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the Ithenticate tool. This article is the first attempt to show the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by the ChatGPT.

    1. Introduction
    OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. OpenAI ChatGPT is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts. It can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need help to change academic writing practices. However, it can provide information and guidance on ways to improve people's academic writing skills.
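The paraphrasing step this abstract describes was carried out by the authors through the ChatGPT web interface. Purely as a hedged illustration, the same step could be scripted today against the OpenAI Python API roughly as follows; the model name, prompt wording, and helper function are assumptions, not the authors' method:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def paraphrase(abstract: str) -> str:
    # One paraphrase request per collected abstract.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Paraphrase this paper abstract for a literature review."},
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content

# abstracts = [...]  # e.g. collected from a "Digital twin in healthcare" search
# review = "\n\n".join(paraphrase(a) for a in abstracts)
```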
  8. Frederick, D.E.: ChatGPT : a viral data-driven disruption in the information environment (2023) 0.01
    0.005906073 = product of:
      0.047248583 = sum of:
        0.047248583 = weight(_text_:libraries in 983) [ClassicSimilarity], result of:
          0.047248583 = score(doc=983,freq=8.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.36295068 = fieldWeight in 983, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=983)
      0.125 = coord(1/8)
    
    Abstract
    This study aims to introduce librarians to ChatGPT and challenge them to think about how it fits into their work and what learning they will need to do in order to stay relevant in the realm of artificial intelligence.
    Design/methodology/approach: Popular and scientific media sources were monitored over the course of two months to gather current discussions about the uses of and opinions about ChatGPT. This was analyzed in light of historical developments in education and libraries. Additional sources of information on the topic were described and discussed so that the issue is made relevant to librarians and libraries.
    Findings: The potential risks and benefits of ChatGPT are highly relevant for librarians but also currently not fully understood. We are in a very early stage of understanding and using this technology but it does appear to have the possibility of becoming disruptive to libraries as well as many other aspects of life.
    Originality/value: ChatGPT-3 has only been publicly available since the end of November 2022. We are just now starting to take a deeper dive into what this technology means for libraries. This paper is one of the early ones that provide librarians with some direction in terms of where to focus their interest and attention in learning about it.
  9. Aitchison, C.R.: Cataloging virtual reality artworks : challenges and future prospects (2021) 0.01
    0.0058467137 = product of:
      0.04677371 = sum of:
        0.04677371 = weight(_text_:libraries in 711) [ClassicSimilarity], result of:
          0.04677371 = score(doc=711,freq=4.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.35930282 = fieldWeight in 711, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=711)
      0.125 = coord(1/8)
    
    Abstract
    In 2019, Pepperdine Libraries acquired two virtual reality artworks by filmmaker and artist Paisley Smith: Homestay and Unceded Territories. To bring awareness to these pieces, Pepperdine Libraries added these works to the library catalog, creating bibliographic records for both films. There were many challenges and considerations in cataloging virtual reality art, including factors such as the nature of the work, the limits found in Resource Description and Access (RDA) and MARC, and providing access to these works. This paper discusses these topics, as well as provides recommendations for potential future standards for cataloging virtual works.
  10. Babcock, K.; Lee, S.; Rajakumar, J.; Wagner, A.: Providing access to digital collections (2020) 0.01
    0.005114809 = product of:
      0.040918473 = sum of:
        0.040918473 = weight(_text_:libraries in 5855) [ClassicSimilarity], result of:
          0.040918473 = score(doc=5855,freq=6.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.3143245 = fieldWeight in 5855, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5855)
      0.125 = coord(1/8)
    
    Abstract
    The University of Toronto Libraries is currently reviewing technology to support its Collections U of T service. Collections U of T provides search and browse access to 375 digital collections (and over 203,000 digital objects) at the University of Toronto Libraries. Digital objects typically include special collections material from the university as well as faculty digital collections, all with unique metadata requirements. The service is currently supported by IIIF-enabled Islandora, with one Fedora back end and multiple Drupal sites per parent collection (see attached image). Like many institutions making use of Islandora, UTL is now confronted with Drupal 7 end of life and has begun to investigate a migration path forward. This article will summarise the Collections U of T functional requirements and lessons learned from our current technology stack. It will go on to outline our research to date for alternate solutions. The article will review both emerging micro-service solutions, as well as out-of-the-box platforms, to provide an overview of the digital collection technology landscape in 2019. Note that our research is focused on reviewing technology solutions for providing access to digital collections, as preservation services are offered through other services at the University of Toronto Libraries.
  11. Melikov, S.; Eitel, C.: Informationskompetenz : eine Schlüsselkompetenz im Wandel (2021) 0.00
    0.0047248583 = product of:
      0.037798867 = sum of:
        0.037798867 = weight(_text_:libraries in 300) [ClassicSimilarity], result of:
          0.037798867 = score(doc=300,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.29036054 = fieldWeight in 300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0625 = fieldNorm(doc=300)
      0.125 = coord(1/8)
    
    Abstract
    The Basel University Library has offered courses on information literacy for many years. These courses are currently being extended and redesigned in content, concept, and method. Among other things, the practice-oriented introduction of the Framework for Information Literacy for Higher Education, adopted by the Association of College and Research Libraries, plays a significant role in this.
  12. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the wayback machine as object and instrument of digital research (2023) 0.00
    0.0043570166 = product of:
      0.034856133 = sum of:
        0.034856133 = weight(_text_:studies in 1084) [ClassicSimilarity], result of:
          0.034856133 = score(doc=1084,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.22043361 = fieldWeight in 1084, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1084)
      0.125 = coord(1/8)
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
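The 'Save Page Now' tool studied above is also reachable as a simple public web endpoint, which is what lets users initiate snapshots on demand. As an illustration only (the endpoint form shown is assumed, and the Internet Archive's authenticated SPN2 API offers more control), a request like the following asks the Wayback Machine to snapshot a URL:

```python
import urllib.request

def save_page_now(url: str) -> str:
    # Ask the Wayback Machine's public Save Page Now endpoint to snapshot a URL.
    # The archived copy's path is usually reported in the Content-Location header.
    req = urllib.request.Request(
        "https://web.archive.org/save/" + url,
        headers={"User-Agent": "spn-sketch/0.1"},  # hypothetical client name
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Content-Location", resp.url)

# print(save_page_now("https://example.org/"))
```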
  13. Wolf, S.: Automating authority control processes (2020) 0.00
    0.0041342513 = product of:
      0.03307401 = sum of:
        0.03307401 = weight(_text_:libraries in 5680) [ClassicSimilarity], result of:
          0.03307401 = score(doc=5680,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.25406548 = fieldWeight in 5680, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5680)
      0.125 = coord(1/8)
    
    Abstract
    Authority control is an important part of cataloging since it helps provide consistent access to names, titles, subjects, and genre/forms. There are a variety of methods for providing authority control, ranging from manual, time-consuming processes to automated processes. However, the automated processes often seem out of reach for small libraries when it comes to using a pricey vendor or expert cataloger. This paper introduces ideas on how to handle authority control using a variety of tools, both paid and free. The author describes how their library handles authority control; compares vendors and programs that can be used to provide varying levels of authority control; and demonstrates authority control using MarcEdit.
  14. Shiri, A.; Kelly, E.J.; Kenfield, A.; Woolcott, L.; Masood, K.; Muglia, C.; Thompson, S.: A faceted conceptualization of digital object reuse in digital repositories (2020) 0.00
    0.0041342513 = product of:
      0.03307401 = sum of:
        0.03307401 = weight(_text_:libraries in 48) [ClassicSimilarity], result of:
          0.03307401 = score(doc=48,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.25406548 = fieldWeight in 48, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=48)
      0.125 = coord(1/8)
    
    Abstract
    In this paper, we provide an introduction to the concept of digital object reuse and its various connotations in the context of current digital libraries, archives, and repositories. We will then propose a faceted categorization of the various types, contexts, and cases for digital object reuse in order to facilitate understanding and communication and to provide a conceptual framework for the assessment of digital object reuse by various cultural heritage and cultural memory organizations.
  15. Schoenbeck, O.; Schröter, M.; Werr, N.: Framework Informationskompetenz in der Hochschulbildung (2021) 0.00
    0.0041342513 = product of:
      0.03307401 = sum of:
        0.03307401 = weight(_text_:libraries in 298) [ClassicSimilarity], result of:
          0.03307401 = score(doc=298,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.25406548 = fieldWeight in 298, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=298)
      0.125 = coord(1/8)
    
    Abstract
    This contribution centres on the Framework for Information Literacy for Higher Education published by the Association of College & Research Libraries (ACRL) in 2016, whose core ideas and development are sketched from predecessors such as the Information Literacy Competency Standards for Higher Education published by the ACRL in 2000. The reception history of those standards in the German-speaking world is traced against the background of the history of their (partial) translation, and from this the potential is derived that the now available complete German translation of the Framework offers for promoting information literacy in a contemporary way. The manifold challenges of such a translation are reflected by way of example through glimpses into the translators' workshop.
  16. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.00
    0.0040267524 = product of:
      0.03221402 = sum of:
        0.03221402 = product of:
          0.06442804 = sum of:
            0.06442804 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.06442804 = score(doc=4156,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    2. 3.2020 14:08:22
  17. Guizzardi, G.; Guarino, N.: Semantics, ontology and explanation (2023) 0.00
    0.0039860546 = product of:
      0.031888437 = sum of:
        0.031888437 = product of:
          0.06377687 = sum of:
            0.06377687 = weight(_text_:area in 976) [ClassicSimilarity], result of:
              0.06377687 = score(doc=976,freq=2.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.32663327 = fieldWeight in 976, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.046875 = fieldNorm(doc=976)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    The terms 'semantics' and 'ontology' are increasingly appearing together with 'explanation', not only in the scientific literature, but also in organizational communication. However, all of these terms are also being significantly overloaded. In this paper, we discuss their strong relation under particular interpretations. Specifically, we discuss a notion of explanation termed ontological unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications) by revealing their ontological commitment in terms of their assumed truthmakers, i.e., the entities in one's ontology that make the propositions in those descriptions true. To illustrate this idea, we employ an ontological theory of relations to explain (by revealing the hidden semantics of) a very simple symbolic model encoded in the standard modeling language UML. We also discuss the essential role played by ontology-driven conceptual models (resulting from this form of explanation processes) in properly supporting semantic interoperability tasks. Finally, we discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.
  18. Dhillon, P.; Singh, M.: An extended ontology model for trust evaluation using advanced hybrid ontology (2023) 0.00
    0.0039860546 = product of:
      0.031888437 = sum of:
        0.031888437 = product of:
          0.06377687 = sum of:
            0.06377687 = weight(_text_:area in 981) [ClassicSimilarity], result of:
              0.06377687 = score(doc=981,freq=2.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.32663327 = fieldWeight in 981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.046875 = fieldNorm(doc=981)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    In the blooming area of Internet technology, the concept of Internet-of-Things (IoT) holds a distinct position that interconnects a large number of smart objects. In the context of social IoT (SIoT), the argument of trust and reliability is evaluated in the presented work. The proposed framework is divided into two blocks, namely Verification Block (VB) and Evaluation Block (EB). VB defines various ontology-based relationships computed for the objects that reflect the security and trustworthiness of an accessed service. While, EB is used for the feedback analysis and proves to be a valuable step that computes and governs the success rate of the service. Support vector machine (SVM) is applied to categorise the trust-based evaluation. The security aspect of the proposed approach is comparatively evaluated for DDoS and malware attacks in terms of success rate, trustworthiness and execution time. The proposed secure ontology-based framework provides better performance compared with existing architectures.
  19. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.00
    0.0037023628 = product of:
      0.029618902 = sum of:
        0.029618902 = weight(_text_:case in 53) [ClassicSimilarity], result of:
          0.029618902 = score(doc=53,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.17000891 = fieldWeight in 53, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
      0.125 = coord(1/8)
    
    Content
    # How does Archivo work?
    Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates, and archives the latest snapshot persistently on the DBpedia Databus.

    # Archivo's mission
    Archivo's mission is to improve the FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated, machine-readable, and enforces interoperability with its star rating.
    - Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology.
    - Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is not reachable anymore, adding a layer of reliability to the decentral web of ontologies.
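The cycle described above (weekly discovery, an 8-hour re-check interval, archive on change) is a plain polling pattern. A minimal sketch of that pattern follows; all names are hypothetical stand-ins for Archivo's actual rating tests and Databus upload, not its real pipeline:

```python
import hashlib
import time
import urllib.request
from dataclasses import dataclass

CHECK_INTERVAL = 8 * 60 * 60  # Archivo re-checks each ontology every 8 hours

@dataclass
class Ontology:
    url: str
    last_hash: str = ""

def rate_and_archive(body: bytes) -> None:
    # Hypothetical stand-in for the star-rating tests and the persistent
    # snapshot upload to the DBpedia Databus.
    print(f"archiving snapshot of {len(body)} bytes")

def check_once(onto: Ontology) -> None:
    # Download the current version and archive it only if it changed.
    with urllib.request.urlopen(onto.url) as resp:
        body = resp.read()
    digest = hashlib.sha256(body).hexdigest()
    if digest != onto.last_hash:  # change detected
        rate_and_archive(body)
        onto.last_hash = digest

# ontologies = [Ontology("https://example.org/onto.ttl")]  # hypothetical list
# while True:
#     for onto in ontologies:
#         check_once(onto)
#     time.sleep(CHECK_INTERVAL)
```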
  20. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.00
    0.0035436437 = product of:
      0.02834915 = sum of:
        0.02834915 = weight(_text_:libraries in 39) [ClassicSimilarity], result of:
          0.02834915 = score(doc=39,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2177704 = fieldWeight in 39, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=39)
      0.125 = coord(1/8)
    
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions. This task ran for one year with participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges that were faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.