Search (150 results, page 1 of 8)

  • year_i:[2020 TO 2030}
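The facet above is a Lucene/Solr-style range filter on the year_i field; the mixed brackets mean that 2020 is included and 2030 is excluded. A minimal sketch of issuing such a filtered query against a Solr endpoint follows; the host, core name, and query terms are assumptions (the terms are only suggested by the weight lines in the score explanations below), not part of this result page.

import requests  # assumes a reachable Solr instance; host and core are hypothetical

params = {
    "q": "relationship datenmodell",  # free-text query; the actual terms are not shown on this page
    "fq": "year_i:[2020 TO 2030}",    # range filter: lower bound inclusive, upper bound exclusive
    "rows": 20,
    "debugQuery": "true",             # asks Solr to return per-document score explanations
}
response = requests.get("http://localhost:8983/solr/library/select", params=params)
print(response.json()["response"]["numFound"])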
  1. Oliver, C: Introducing RDA : a guide to the basics after 3R (2021) 0.10
    0.09696904 = product of:
      0.2909071 = sum of:
        0.061100297 = weight(_text_:relationship in 716) [ClassicSimilarity], result of:
          0.061100297 = score(doc=716,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.26653278 = fieldWeight in 716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=716)
        0.22980681 = weight(_text_:datenmodell in 716) [ClassicSimilarity], result of:
          0.22980681 = score(doc=716,freq=4.0), product of:
            0.3738479 = queryWeight, product of:
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.047513504 = queryNorm
            0.6147067 = fieldWeight in 716, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.0390625 = fieldNorm(doc=716)
      0.33333334 = coord(2/6)
    
    Abstract
    Since Oliver's guide was first published in 2010, thousands of LIS students, records managers, catalogers, and other library professionals have relied on its clear, plainspoken explanation of RDA: Resource Description and Access as their first step towards becoming acquainted with the cataloging standard. Now, reflecting the changes to RDA after the completion of the 3R Project, Oliver brings her Special Report up to date. This essential primer concisely explains what RDA is, its basic features, and the main factors in its development; describes RDA's relationship to the international standards and models that continue to influence its evolution; provides an overview of the latest developments, focusing on the impact of the 3R Project, the results of aligning RDA with IFLA's Library Reference Model (LRM), and the outcomes of internationalization; illustrates how information is organized in the post-3R Toolkit and explains how to navigate through this new structure; and discusses how RDA continues to enable improved resource discovery both in traditional and new applications, including the linked data environment.
    RSWK
    Bibliografische Daten / Datenmodell / Katalogisierung / Resource description and access / Theorie
    Subject
    Bibliografische Daten / Datenmodell / Katalogisierung / Resource description and access / Theorie
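The nested explanations shown with each result are Lucene ClassicSimilarity (TF-IDF) breakdowns: every matching term contributes queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm; the per-term contributions are summed and multiplied by the coordination factor coord(matching clauses / total clauses). A minimal sketch that reproduces the 0.0969... score reported for result 1 from the numbers in its explanation:

import math

def term_score(freq, idf, query_norm, field_norm):
    """One clause of a ClassicSimilarity explanation: queryWeight * fieldWeight."""
    query_weight = idf * query_norm                    # e.g. 4.824759 * 0.047513504 = 0.2292412
    field_weight = math.sqrt(freq) * idf * field_norm  # tf(freq) = sqrt(freq)
    return query_weight * field_weight

query_norm = 0.047513504
field_norm = 0.0390625
relationship = term_score(freq=2.0, idf=4.824759, query_norm=query_norm, field_norm=field_norm)
datenmodell = term_score(freq=4.0, idf=7.8682456, query_norm=query_norm, field_norm=field_norm)

coord = 2 / 6  # two of the six query clauses matched document 716
print(relationship)                          # ~0.061100297
print(datenmodell)                           # ~0.22980681
print((relationship + datenmodell) * coord)  # ~0.09696904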
  2. Petersohn, S.: Neue Version 1.3. des KDSF-Standard für Forschungsinformationen veröffentlicht (2022) 0.04
    0.038301136 = product of:
      0.22980681 = sum of:
        0.22980681 = weight(_text_:datenmodell in 4219) [ClassicSimilarity], result of:
          0.22980681 = score(doc=4219,freq=4.0), product of:
            0.3738479 = queryWeight, product of:
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.047513504 = queryNorm
            0.6147067 = fieldWeight in 4219, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4219)
      0.16666667 = coord(1/6)
    
    Content
    "es gab viel Bewegung rund um den KDSF - Standard für Forschungsinformationen (Kerndatensatz Forschung) im Jahr 2022. Die Kommission für Forschungsinformationen in Deutschland (KFiD), ein aus 17 ehrenamtlichen Mitgliedern bestehendes Gremium zur Förderung des KDSF und Professionalisierung des Forschungsinformationswesens, hat eine Weiterentwicklung des KDSF zur Version 1.3 beschlossen und sich intensiv mit ihrem Arbeitsprogramm für die erste Amtsperiode befasst. In diesem Zuge wurden drei neue Arbeitsgruppen ins Leben gerufen: Die AG Weiterentwicklung des KDSF, die AG Datenabfragen im KDSF-Format und die AG Forschungsinformationsmanagement. Diese befassen sich mit der Aktualisierung und Ergänzung des KDSF sowie der Erstellung von mittel- und langfristigen Weiterentwicklungsplänen, der Stärkung von Datenabfragen im KDSF-Format sowie dem Abgleich von Informationsbedürfnissen mit der potentiellen Anwendbarkeit des KDSF in Berichtslegungsprozessen. Schließlich sollen die Mehrwerte des KDSF in beispielhaften, einrichtungs- und systemspezifischen Implementierungsvorhaben demonstriert werden. Ein erstes Ergebnis der AG Weiterentwicklung ist die bereits von Praktiker:innen und Anwender:innen lange erwartete Integration der Forschungsfeldklassifikation in den Kern des KDSF. Dadurch können Hochschulen und Forschungseinrichtungen zukünftig nicht nur ihre Forschungsaktivitäten entlang von Forschungsdisziplinen ausweisen, sondern auch interdisziplinäre bzw. gegenstands- und problembezogene Forschung abbilden. Dazu gehören zum Beispiel Forschung zu Nachhaltigkeit oder zur Digitalen Wirtschaft. Hierfür stehen nun insgesamt 72 Forschungsfelder zur Verfügung. Die Version 1.3 ist auf dem gewohnten Webauftritt des KDSF und das zugehörige Datenmodell nun auch auf Github zu finden. Spezifikation: https://www.kerndatensatz-forschung.de/index.php?id=spezifikation bzw. https://www.kerndatensatz-forschung.de/version1/Spezifikation_KDSF_v1_3.pdf Datenmodell: https://github.com/KFiD-G/KDSF Informationen zur KFiD finden Sie auf einem neuen Webauftritt unter www.kfid-online.de.
  3. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.03773202 = product of:
      0.22639212 = sum of:
        0.22639212 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
          0.22639212 = score(doc=862,freq=2.0), product of:
            0.40282002 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047513504 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.16666667 = coord(1/6)
    
    Source
    https://arxiv.org/abs/2212.06721
  4. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.03144335 = product of:
      0.1886601 = sum of:
        0.1886601 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
          0.1886601 = score(doc=5669,freq=2.0), product of:
            0.40282002 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047513504 = queryNorm
            0.46834838 = fieldWeight in 5669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5669)
      0.16666667 = coord(1/6)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  5. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.03144335 = product of:
      0.1886601 = sum of:
        0.1886601 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
          0.1886601 = score(doc=1000,freq=2.0), product of:
            0.40282002 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047513504 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.16666667 = coord(1/6)
    
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  6. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.03
    0.031095807 = product of:
      0.093287416 = sum of:
        0.061100297 = weight(_text_:relationship in 1012) [ClassicSimilarity], result of:
          0.061100297 = score(doc=1012,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.26653278 = fieldWeight in 1012, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.03218712 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
          0.03218712 = score(doc=1012,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.19345059 = fieldWeight in 1012, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
      0.33333334 = coord(2/6)
    
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has been emerging. However, these statistically important phrases are contributing increasingly less to the related tasks because the end-to-end learning mechanism enables models to learn the important semantic information of the text directly. Similarly, keyphrases are of little help for readers to quickly grasp the paper's main idea because the relationship between the keyphrase and the paper is not explicit to readers. Therefore, we propose to generate keyphrases with specific functions for readers to bridge the semantic gap between them and the information producers, and verify the effectiveness of the keyphrase function for assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (the CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented based on Transformer, BART, and T5, respectively. For the Computer Science domain, the Macro-avgs of , , and on the Paper with Code dataset are up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
    Date
    22. 6.2023 14:55:20
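The CKPG framework in result 6 conditions a sequence-to-sequence model on a keyphrase-function control code. The paper's own trained models and control codes are not reproduced here; the sketch below only illustrates the general mechanism of prepending a control code before generation, using an off-the-shelf BART checkpoint from Hugging Face and an invented <method> token.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "facebook/bart-base"  # stand-in checkpoint; CKPG is built on Transformer, BART, and T5
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

control_code = "<method>"  # hypothetical keyphrase-function code
document = "We propose a controllable keyphrase generation framework ..."
inputs = tokenizer(control_code + " " + document, return_tensors="pt",
                   truncation=True, max_length=512)

# Generate a short keyphrase sequence conditioned on the prepended control code.
output_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))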
  7. Bach, N.: ¬Die nächste PID-Evolution : selbstsouverän, datenschutzfreundlich, dezentral (2021) 0.03
    0.027082993 = product of:
      0.16249795 = sum of:
        0.16249795 = weight(_text_:datenmodell in 539) [ClassicSimilarity], result of:
          0.16249795 = score(doc=539,freq=2.0), product of:
            0.3738479 = queryWeight, product of:
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.047513504 = queryNorm
            0.43466327 = fieldWeight in 539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.8682456 = idf(docFreq=45, maxDocs=44218)
              0.0390625 = fieldNorm(doc=539)
      0.16666667 = coord(1/6)
    
    Abstract
    This article deals with the standard for decentralized identifiers (Decentralized Identifiers, DIDs) recently produced by the W3C, with regard to research data management. It argues that the persistent identifier systems (PIDs) currently in widespread use in scholarly publishing, such as Handle, DOI, ORCID and ROR, have fundamental problems with data security, data protection and the assurance of data integrity because of their centralization. The system of DIDs is presented as a possible solution: a new kind of globally unique identifier that can be generated by any individual or organization itself and operated on any platform regarded as trustworthy. Blockchains or other distributed ledger technologies can act as trustworthy data registries, but direct peer-to-peer connections, methods built on existing internet protocols, or static DIDs are also possible. In addition to the scheme, the technical specification in the sense of the data model and the use of DIDs are explained, and the differences from centralized PID systems are set out in comparison. Finally, the connection is drawn to the underlying new paradigm of decentralized identity, Self-Sovereign Identity (SSI). SSI represents an entire ecosystem in which entities form a cryptographically secured network of trust on the basis of DIDs and digital identity credentials in order to exchange identity-related data in a decentralized, tamper-proof and privacy-compliant manner. At the end of the paper, the author presents five requirements, worked out earlier in the article, for a contemporary implementation of persistent identifiers.
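For orientation on the W3C standard discussed in result 7: a DID resolves to a DID document that lists the identifier, its verification material, and the verification relationships. The sketch below shows such a document as a plain Python dictionary; it uses the did:example placeholder method from the specification, and the key value is invented.

import json

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",  # the DID itself (example method)
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "zInventedKeyValueForIllustrationOnly",
    }],
    # Verification relationship: this key may be used to authenticate as the DID subject.
    "authentication": ["did:example:123456789abcdefghi#key-1"],
}
print(json.dumps(did_document, indent=2))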
  8. Ma, X.; Xue, P.; Matta, N.; Chen, Q.: Fine-grained ontology reconstruction for crisis knowledge based on integrated analysis of temporal-spatial factors (2021) 0.02
    0.01763814 = product of:
      0.10582883 = sum of:
        0.10582883 = weight(_text_:relationship in 232) [ClassicSimilarity], result of:
          0.10582883 = score(doc=232,freq=6.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.46164837 = fieldWeight in 232, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=232)
      0.16666667 = coord(1/6)
    
    Abstract
    Previous studies on crisis knowledge organization mostly focused on the categorization of crisis knowledge without regarding its dynamic trend and temporal-spatial features. In order to emphasize the dynamic factors of crisis collaboration, a fine-grained crisis knowledge model is proposed by integrating temporal-spatial analysis based on ontology, which is one of the commonly used methods for knowledge organization. The reconstruction of ontology-based crisis knowledge will be implemented through three steps: analyzing temporal-spatial features of crisis knowledge, reconstructing crisis knowledge ontology, and verifying the temporal-spatial ontology. In the process of ontology reconstruction, the main classes and properties of the domain will be identified by investigating the crisis information resources. Meanwhile, the fine-grained crisis ontology will be achieved at the level of characteristic representation of crisis knowledge including temporal relationship, spatial relationship, and semantic relationship. Finally, we conducted case addition and system implementation to verify our crisis knowledge model. This ontology-based knowledge organization method theoretically optimizes the static organizational structure of crisis knowledge, improving the flexibility of knowledge organization and efficiency of emergency response. In practice, the proposed fine-grained ontology is supposed to be more in line with the real situation of emergency collaboration and management. Moreover, it will also provide the knowledge base for decision-making during the rescue process.
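Result 8 reconstructs an ontology with classes plus temporal, spatial, and semantic relationships. The paper's actual ontology is not reproduced here; the rdflib sketch below, with an invented namespace and example individual, only illustrates how such a class and its temporal and spatial properties might be declared.

from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD

CRISIS = Namespace("http://example.org/crisis#")  # invented namespace for illustration
g = Graph()
g.bind("crisis", CRISIS)

# A crisis event class with one temporal and one spatial property.
g.add((CRISIS.CrisisEvent, RDF.type, RDFS.Class))
g.add((CRISIS.startTime, RDF.type, RDF.Property))
g.add((CRISIS.occursIn, RDF.type, RDF.Property))

# An example individual: a flood event with a start time and a location label.
g.add((CRISIS.flood_001, RDF.type, CRISIS.CrisisEvent))
g.add((CRISIS.flood_001, CRISIS.startTime, Literal("2021-07-20T08:00:00", datatype=XSD.dateTime)))
g.add((CRISIS.flood_001, CRISIS.occursIn, Literal("Example City")))

print(g.serialize(format="turtle"))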
  9. Yin, H.; Zheng, S.; Yeoh, W.; Ren, J.: How online review richness impacts sales : an attribute substitution perspective (2021) 0.02
    0.01763814 = product of:
      0.10582883 = sum of:
        0.10582883 = weight(_text_:relationship in 257) [ClassicSimilarity], result of:
          0.10582883 = score(doc=257,freq=6.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.46164837 = fieldWeight in 257, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=257)
      0.16666667 = coord(1/6)
    
    Abstract
    Richer forms of online reviews such as videos or follow-on reviews convey additional information and can attract consumers' attention. However, prior studies focused mostly on the relationship between aggregated online reviews and sales. This paper investigates the impact of online review richness (i.e., reviews containing videos or follow-on reviews) on sales. Leveraging attribute substitution theory, we conjecture that online review richness can provide heuristic cues in the online shopping environment to help consumers make better purchase decisions. Using data from JD.com, we found that reviews containing either videos or follow-on reviews positively affect sales. In addition, different product types can also serve as heuristic cues to replace target cues, which can further affect how different forms of online review richness affect sales. We found that the impact of online review richness on sales is stronger for utilitarian products than for hedonic products, and stronger for negatively commented products than for positively commented products. Moreover, we conducted two online experiments and confirmed that the causal relationship is from online review richness to sales. The research findings offer practical implications for online retailers and constitute one of the first steps toward a better understanding of the relationship between online review richness and sales.
  10. Deng, Z.; Deng, Z.; Fan, G.; Wang, B.; Fan, W.(P.); Liu, S.: More is better? : understanding the effects of online interactions on patients' health anxiety (2023) 0.02
    0.01763814 = product of:
      0.10582883 = sum of:
        0.10582883 = weight(_text_:relationship in 1082) [ClassicSimilarity], result of:
          0.10582883 = score(doc=1082,freq=6.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.46164837 = fieldWeight in 1082, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1082)
      0.16666667 = coord(1/6)
    
    Abstract
    Online health platforms play an important role in chronic disease management. Patients participate in online health platforms to receive and provide health-related support from each other. However, there remains a debate about whether the influence of social interaction on patient health anxiety is linearly positive. Based on uncertainty, information overload, and the theory of motivational information management, we develop and test a model considering a potential curvilinear relationship between social interaction and health anxiety, as well as a moderating effect of health literacy. We collect patient interaction data from an online health platform based on chronic disease management in China and use text mining and econometrics to test our hypotheses. Specifically, we find an inverted U-shaped relationship between informational provision and health anxiety. Our results also show that information receipt and emotion provision have U-shaped relationships with health anxiety. Interestingly, health literacy can effectively alleviate the U-shaped relationship between information receipt and health anxiety. These findings not only provide new insights into the literature on online patient interactions but also provide decision support for patients and platform managers.
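Result 10 tests curvilinear (U-shaped and inverted U-shaped) relationships and a moderating effect of health literacy with econometric models. The exact specification is not given in the abstract; the statsmodels sketch below, on simulated data with invented variable names, only shows the common pattern of adding a quadratic term for the curvilinear shape and an interaction term for the moderation.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "info_provision": rng.uniform(0, 5, n),
    "info_receipt": rng.uniform(0, 5, n),
    "health_literacy": rng.uniform(0, 1, n),
})
# Simulated outcome with an inverted U in provision and a U in receipt (illustration only).
df["health_anxiety"] = (
    2.0 + 1.2 * df.info_provision - 0.3 * df.info_provision ** 2
    - 0.8 * df.info_receipt + 0.2 * df.info_receipt ** 2
    + rng.normal(scale=0.3, size=n)
)

model = smf.ols(
    "health_anxiety ~ info_provision + I(info_provision**2)"
    " + info_receipt + I(info_receipt**2)"
    " + info_receipt:health_literacy + health_literacy",
    data=df,
).fit()
print(model.summary().tables[1])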
  11. Solc, R.: ¬The use of various models of work distribution in the analysis of the Czech system of evaluation of research (2020) 0.02
    0.016293414 = product of:
      0.097760476 = sum of:
        0.097760476 = weight(_text_:relationship in 5897) [ClassicSimilarity], result of:
          0.097760476 = score(doc=5897,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.42645246 = fieldWeight in 5897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0625 = fieldNorm(doc=5897)
      0.16666667 = coord(1/6)
    
    Abstract
    This article builds on our previous work, in which we critically analyzed some aspects of the research evaluation system valid in the Czech Republic until 2017. This article also focuses on the evaluation of articles in journals with IF, but develops the relationship between so-called RIV-points allocated by the system and the amount of work done, using different models of work distribution. The results generally support the conclusions of the original study.
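Result 11 relates RIV points to the amount of work behind an article using different models of work distribution among contributors. The abstract does not name the models, so the sketch below is only a hedged illustration of two distribution schemes common in bibliometrics, equal (fractional) and harmonic counting, not necessarily those used in the paper.

def fractional_credit(n_authors: int) -> list[float]:
    """Equal split: every author receives 1/n of the work for one paper."""
    return [1 / n_authors] * n_authors

def harmonic_credit(n_authors: int) -> list[float]:
    """Harmonic counting: the k-th author receives (1/k) / (1 + 1/2 + ... + 1/n)."""
    denominator = sum(1 / k for k in range(1, n_authors + 1))
    return [(1 / k) / denominator for k in range(1, n_authors + 1)]

print(fractional_credit(4))  # [0.25, 0.25, 0.25, 0.25]
print(harmonic_credit(4))    # first author ~0.48, fourth author ~0.12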
  12. ¬Der Student aus dem Computer (2023) 0.02
    0.0150206555 = product of:
      0.09012393 = sum of:
        0.09012393 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
          0.09012393 = score(doc=1079,freq=2.0), product of:
            0.16638419 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.047513504 = queryNorm
            0.5416616 = fieldWeight in 1079, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=1079)
      0.16666667 = coord(1/6)
    
    Date
    27. 1.2023 16:22:55
  13. Haggar, E.: Fighting fake news : exploring George Orwell's relationship to information literacy (2020) 0.01
    0.014401479 = product of:
      0.08640887 = sum of:
        0.08640887 = weight(_text_:relationship in 5978) [ClassicSimilarity], result of:
          0.08640887 = score(doc=5978,freq=4.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.3769343 = fieldWeight in 5978, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5978)
      0.16666667 = coord(1/6)
    
    Abstract
    The purpose of this paper is to analyse George Orwell's diaries through an information literacy lens. Orwell is well known for his dedication to freedom of speech and objective truth, and his novel Nineteen Eighty-Four is often used as a lens through which to view the fake news phenomenon. This paper will examine Orwell's diaries in relation to UNESCO's Five Laws of Media and Information Literacy to examine how information literacy concepts can be traced in historical documents.
    Design/methodology/approach: This paper will use a content analysis method to explore Orwell's relationship to information literacy. Two of Orwell's political diaries from the period 1940-42 were coded for key themes related to the ways in which Orwell discusses and evaluates information and news. These themes were then compared to UNESCO's Five Laws of Media and Information Literacy. Textual analysis software NVivo 12 was used to perform keyword searches and word frequency queries in the digitised diaries.
    Findings: The findings show that while Orwell's diaries and the Five Laws did not share terminology, they did share ideas on bias and access to information. They also extend the history of information literacy research and practice by illustrating how concerns about the need to evaluate information sources are represented within historical literature.
    Originality/value: This paper combines historical research with textual analysis to bring a unique historical perspective to information literacy, demonstrating that "fake news" is not a recent phenomenon, and that the tools to fight it may also lie in historical research.
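The content analysis in result 13 used NVivo 12 for keyword searches and word frequency queries over the digitised diaries. NVivo is a GUI tool; purely as an illustration of the same kind of word-frequency query, a few lines of Python over a plain-text file (the file name is a placeholder):

import re
from collections import Counter

# Placeholder path; any plain-text transcription of the diaries would do.
with open("orwell_diary_1940.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)
for word, freq in counts.most_common(20):
    print(f"{word}\t{freq}")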
  14. Gorichanaz, T.: Relating information seeking and use to intellectual humility (2022) 0.01
    0.014401479 = product of:
      0.08640887 = sum of:
        0.08640887 = weight(_text_:relationship in 543) [ClassicSimilarity], result of:
          0.08640887 = score(doc=543,freq=4.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.3769343 = fieldWeight in 543, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=543)
      0.16666667 = coord(1/6)
    
    Abstract
    Virtue epistemology offers a yet-untapped path for ethical development in information science. This paper presents two empirical studies on intellectual humility (IH), a cornerstone intellectual virtue. Centrally, IH is a matter of being open to the possibility that one may be misinformed or uninformed; it involves accurately valuing one's beliefs according to the evidence. The studies presented in this paper explore the relationship between IH and people's information seeking and use. First, a correlational questionnaire study was conducted with 201 participants considering a recent, real-life task; second, a concurrent think-aloud study was conducted with 8 participants completing 3 online search tasks. These studies give further color to prior assertions that people with higher IH engage in more information seeking. The results show, for instance, that those with higher IH may actually favor more easily accessible information sources and that some dimensions of IH, such as modesty and engagement, may be most important to information seeking. These findings offer a nuanced understanding of the relationship between IH and information behavior and practices. They suggest avenues for further research, and they may be applied in educational contexts and sociotechnical design.
  15. Huang, S.; Qian, J.; Huang, Y.; Lu, W.; Bu, Y.; Yang, J.; Cheng, Q.: Disclosing the relationship between citation structure and future impact of a publication (2022) 0.01
    0.014401479 = product of:
      0.08640887 = sum of:
        0.08640887 = weight(_text_:relationship in 621) [ClassicSimilarity], result of:
          0.08640887 = score(doc=621,freq=4.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.3769343 = fieldWeight in 621, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=621)
      0.16666667 = coord(1/6)
    
    Abstract
    Each section header of an article has its distinct communicative function. Citations from distinct sections may be different regarding citing motivation. In this paper, we grouped section headers with similar functions as a structural function and defined the distribution of citations from structural functions for a paper as its citation structure. We aim to explore the relationship between citation structure and the future impact of a publication and disclose the relative importance among citations from different structural functions. Specifically, we proposed two citation counting methods and a citation life cycle identification method, by which the regression data were built. Subsequently, we employed a ridge regression model to predict the future impact of the paper and analyzed the relative weights of regressors. Based on documents collected from the Association for Computational Linguistics Anthology website, our empirical experiments disclosed that functional structure features improve the prediction accuracy of citation count prediction and that there exist differences among citations from different structural functions. Specifically, at the early stage of citation lifetime, citations from Introduction and Method are particularly important for perceiving future impact of papers, and citations from Result and Conclusion are also vital. However, early accumulation of citations from the Background seems less important.
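Result 15 predicts a paper's future citation count with a ridge regression whose regressors are the early citations arriving from each structural function. The sketch below shows that kind of model with scikit-learn on an invented feature matrix; the paper's data, counting methods, and coefficients are not reproduced.

import numpy as np
from sklearn.linear_model import Ridge

# Rows: papers; columns: early citations from Introduction, Method, Result,
# Conclusion, Background. All values are invented.
X = np.array([
    [5, 3, 1, 0, 2],
    [8, 6, 2, 1, 1],
    [2, 1, 0, 0, 4],
    [7, 4, 3, 2, 0],
    [1, 0, 1, 0, 3],
    [6, 5, 2, 1, 2],
], dtype=float)
y = np.array([40, 75, 15, 60, 10, 55], dtype=float)  # later total citations (invented)

model = Ridge(alpha=1.0).fit(X, y)
# The fitted coefficients play the role of the "relative weights of regressors".
print(dict(zip(["intro", "method", "result", "conclusion", "background"], model.coef_)))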
  16. Melo, M.; March, L.; Hirsh, K.; Arnsberg, E.: Description framework of makerspaces : examining the relationship between spatial arrangement and diverse user populations (2023) 0.01
    0.014401479 = product of:
      0.08640887 = sum of:
        0.08640887 = weight(_text_:relationship in 944) [ClassicSimilarity], result of:
          0.08640887 = score(doc=944,freq=4.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.3769343 = fieldWeight in 944, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=944)
      0.16666667 = coord(1/6)
    
    Abstract
    As makerspaces continue to proliferate in academic and public libraries, researchers and educators are increasingly concerned with ensuring these STEM-rich learning environments are inclusive to historically marginalized student communities. This article offers a new framework, the Description Framework of Makerspaces, to outline the relationship between the spatial qualities of makerspaces and the user population it attracts. This study represents the first phase of a 5-year research program dedicated to analyzing the everyday life information seeking practices that students (un)intentionally make when deciding to engage with a STEM-rich learning environment such as a makerspace. Using constructivist grounded framework to analyze interview data from 17 academic makerspace leaders, we theorize 2 propositions from the main findings: (a) the act of defining a makerspace is difficult and in tension with several imaginings of a makerspace: imagined, ideal, and experienced and (b) a makerspace is significantly composed of affective features that are often unarticulated and abstract. By conceptualizing makerspaces as environments that are configured by both physical and affective characteristics, we reveal insights regarding a baseline conceptualization of the features of a conventional academic makerspace and the design decisions that makerspace leaders make and are confronted with.
  17. Thelwall, M.; Kousha, K.; Stuart, E.; Makita, M.; Abdoli, M.; Wilson, P.; Levitt, J.: In which fields are citations indicators of research quality? (2023) 0.01
    0.014401479 = product of:
      0.08640887 = sum of:
        0.08640887 = weight(_text_:relationship in 1033) [ClassicSimilarity], result of:
          0.08640887 = score(doc=1033,freq=4.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.3769343 = fieldWeight in 1033, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1033)
      0.16666667 = coord(1/6)
    
    Abstract
    Citation counts are widely used as indicators of research quality to support or replace human peer review and for lists of top cited papers, researchers, and institutions. Nevertheless, the relationship between citations and research quality is poorly evidenced. We report the first large-scale science-wide academic evaluation of the relationship between research quality and citations (field normalized citation counts), correlating them for 87,739 journal articles in 34 field-based UK Units of Assessment (UoA). The two correlate positively in all academic fields, from very weak (0.1) to strong (0.5), reflecting broadly linear relationships in all fields. We give the first evidence that the correlations are positive even across the arts and humanities. The patterns are similar for the field classification schemes of Scopus and Dimensions.ai, although varying for some individual subjects and therefore more uncertain for these. We also show for the first time that no field has a citation threshold beyond which all articles are excellent quality, so lists of top cited articles are not pure collections of excellence, and neither is any top citation percentile indicator. Thus, while appropriately field normalized citations associate positively with research quality in all fields, they never perfectly reflect it, even at high values.
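The indicator in result 17 is the field normalized citation count: an article's citations divided by the average citations of comparable articles in its field, then correlated with peer-review quality scores. A hedged sketch of that normalization and a rank correlation, with pandas and scipy on invented data (the study's own data and correlation choices are not reproduced):

import pandas as pd
from scipy.stats import spearmanr

# Invented sample: peer-review quality scores and raw citation counts by field.
df = pd.DataFrame({
    "field": ["physics", "physics", "physics", "history", "history", "history"],
    "citations": [10, 40, 25, 1, 4, 2],
    "quality": [2, 4, 3, 2, 4, 3],  # e.g. scores on a 1-4 scale
})

# Field normalization: divide each article's citations by its field mean.
df["ncs"] = df["citations"] / df.groupby("field")["citations"].transform("mean")

rho, p = spearmanr(df["ncs"], df["quality"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")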
  18. Smiraglia, R.P.: Referencing as evidentiary : an editorial (2020) 0.01
    0.014256737 = product of:
      0.08554042 = sum of:
        0.08554042 = weight(_text_:relationship in 5729) [ClassicSimilarity], result of:
          0.08554042 = score(doc=5729,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.3731459 = fieldWeight in 5729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5729)
      0.16666667 = coord(1/6)
    
    Abstract
    The referencing habits of scholars, having abandoned physical bibliography for harvesting of digital resources, are in crisis, endangering the bibliographical infrastructure supporting the domain of knowledge organization. Research must be carefully managed and its circumstances controlled. Bibliographical replicability is one important part of the social role of scholarship. References in Knowledge Organization volume 45 (2018) were compiled and analyzed to help visualize the state of referencing in the KO domain. The dependence of science on the ability to replicate is even more critical in a global distributed digital environment. There is great richness in KO that makes it even more critical that our scholarly community tend to the relationship between bibliographical verity and the very replicability that is allowing the field to grow theoretically over time.
  19. Aalberg, T.; O'Neill, E.; Zumer, M.: Extending the LRM Model to integrating resources (2021) 0.01
    0.014256737 = product of:
      0.08554042 = sum of:
        0.08554042 = weight(_text_:relationship in 295) [ClassicSimilarity], result of:
          0.08554042 = score(doc=295,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.3731459 = fieldWeight in 295, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0546875 = fieldNorm(doc=295)
      0.16666667 = coord(1/6)
    
    Abstract
    Integrating resources are distinct in that they change over time in such a way that their previous content is replaced with updated content. This study examines how integrating resources can be modeled using the entities and relationships of the IFLA Library Reference Model (LRM) and clarifies how they can be identified. While monographs have been extensively analyzed, integrating resources have received very little attention. Applying the model unmodified to integrating resources is neither practical nor theoretically sound. With the addition of two proposed relationships, the model can be extended to accommodate the diachronic relationship intrinsic between expressions and manifestations exhibited by integrating resources.
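Result 19 extends the IFLA LRM entities with two new relationships for integrating resources; the proposal itself is not reproduced here. Purely as orientation, a hedged sketch of the standard work-expression-manifestation chain as Python dataclasses, with the content replacement that characterizes an integrating resource:

from dataclasses import dataclass, field

@dataclass
class Work:
    title: str

@dataclass
class Expression:
    realizes: Work  # LRM: an expression realizes a work
    language: str

@dataclass
class Manifestation:
    embodies: list["Expression"] = field(default_factory=list)  # LRM: a manifestation embodies expressions
    carrier: str = "online resource"

work = Work(title="Example integrating resource")
e_old = Expression(realizes=work, language="en")
e_new = Expression(realizes=work, language="en")

manifestation = Manifestation(embodies=[e_old])
# The integrating resource remains the same manifestation while its embodied
# content is replaced over time (the diachronic aspect discussed above).
manifestation.embodies = [e_new]
print(manifestation)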
  20. Snow, K.; Dunbar, A.W.: Advancing the relationship between critical cataloging and critical race theory (2022) 0.01
    0.014256737 = product of:
      0.08554042 = sum of:
        0.08554042 = weight(_text_:relationship in 1145) [ClassicSimilarity], result of:
          0.08554042 = score(doc=1145,freq=2.0), product of:
            0.2292412 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.047513504 = queryNorm
            0.3731459 = fieldWeight in 1145, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1145)
      0.16666667 = coord(1/6)
    

Languages

  • e 119
  • d 31

Types

  • a 141
  • el 21
  • m 4
  • p 2
  • x 1