Search (136 results, page 1 of 7)

  • Filter: year_i:[2020 TO 2030}
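The active filter is a Solr range query on the year field; the mixed brackets are deliberate: [2020 TO 2030} is half-open, so 2020 is included and 2030 is excluded. Below is a minimal sketch of reissuing this filter against a hypothetical Solr core (the endpoint and core name are assumptions, not taken from this page):

    import requests

    # Half-open Solr range filter: "[" is inclusive, "}" is exclusive,
    # so this matches year_i = 2020..2029.
    params = {
        "q": "*:*",
        "fq": "year_i:[2020 TO 2030}",
        "rows": 20,
        "wt": "json",
    }
    # "localhost:8983" and the core name "records" are placeholders.
    resp = requests.get("http://localhost:8983/solr/records/select", params=params)
    print(resp.json()["response"]["numFound"])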
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.34
    0.33914855 = coord(4/7) × 0.5935099, the sum of four ClassicSimilarity clause weights matching doc 862:
      0.059350993 = weight(_text_:3a in 862) = 0.17805298, taken at coord(1/3)
      0.53415894 = three identical weight(_text_:2f in 862) clauses of 0.17805298 each
      each clause weight = 0.31681007 queryWeight × 0.56201804 fieldWeight, where
        queryWeight = 8.478011 idf(docFreq=24, maxDocs=44218) × 0.037368443 queryNorm
        fieldWeight = 1.4142135 tf(freq=2.0) × 8.478011 idf × 0.046875 fieldNorm(doc=862)
      (a worked recomputation of these factors follows the Source link below)

    Source
    https://arxiv.org/abs/2212.06721
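
The breakdown under result 1 is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, the total can be recomputed from the constants shown there, assuming the classic pre-BM25 Lucene formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)):

    import math

    # Constants copied from the explain output for doc 862 above.
    doc_freq, max_docs = 24, 44218
    query_norm = 0.037368443
    field_norm = 0.046875
    freq = 2.0

    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~8.478011
    tf = math.sqrt(freq)                             # ~1.4142135
    query_weight = idf * query_norm                  # ~0.31681007
    field_weight = tf * idf * field_norm             # ~0.56201804
    clause = query_weight * field_weight             # ~0.17805298

    # One "3a" clause taken at coord(1/3), three identical "2f" clauses,
    # then the top-level coord(4/7) for 4 of 7 query clauses matching:
    score = (clause / 3 + 3 * clause) * (4 / 7)
    print(round(score, 8))                           # ~0.33914855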
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.28
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
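Links copied from Google result pages, like the presentation link above, often arrive wrapped in a redirect of the form https://www.google.com/url?...&url=<percent-encoded target>. A minimal sketch of unwrapping such a redirect to the direct link used above (the redirect string reproduces this record's wrapper, minus its tracking parameters ved, psig, ust, and opi, which do not affect the result):

    from urllib.parse import parse_qs, urlparse

    redirect = (
        "https://www.google.com/url?sa=i&rct=j&source=web"
        "&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510"
        "%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate"
        "%3D1671093170000%26api%3Dv2"
    )
    # parse_qs splits on literal "&" and percent-decodes once, so %3A/%2F
    # become "://" and the double-encoded %2520 becomes %20.
    target = parse_qs(urlparse(redirect).query)["url"][0]
    print(target)
    # -> https://wiki.dnb.de/download/attachments/252121510/
    #    DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2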
  3. Ma, Y.: Relatedness and compatibility : the concept of privacy in Mandarin Chinese and American English corpora (2023) 0.03
    Abstract
    This study investigates how privacy as an ethical concept exists in two languages: Mandarin Chinese and American English. The exploration relies on two genres of corpora spanning 10 years (2010-2019): social media posts and news articles. A mixed-methods approach combining structural topic modeling (STM) and human interpretation was used to work with the data. Findings show various privacy-related topics across the two languages. Moreover, some of these topics revealed fundamental incompatibilities in how privacy is understood across the two languages. In other words, some of the variations in topics do not just reflect contextual differences; they reveal how the two languages value privacy in different ways that relate back to each society's ethical tradition. This study is one of the first empirically grounded intercultural explorations of the concept of privacy. It shows that natural language is a promising basis for operationalizing intercultural and comparative privacy research, and it provides an examination of the concept as it is understood in these two languages.
    Date
    22. 1.2023 18:59:40
  4. Bargmann, S.; Blumesberger, S.; Gruber, A.; Luef, E.; Steltzer, R.: Sacherschließung geschlechtergerecht?! : Rückblick auf den Online-Workshop am 11. Mai 2022 und Aufruf zu gemeinsamen Aktivitäten (2022) 0.01
    Abstract
    The article looks back at the online workshop "Geschlechtergerechte Sacherschließung" (gender-equitable subject indexing), at which different perspectives on gender equity in subject indexing were discussed in May 2022. Alongside fundamental questions from linguistics and library science, the Gemeinsame Normdatei (GND) authority file and its rules for design and application were examined through a gender-specific lens, as were feminist subject vocabularies and gender aspects in library education and continuing training. The event was conceived as a starting point; the report includes a call for further joint activities.
    Date
    15. 2.2023 14:30:22
  5. Prokop, M.: Hans Jonas and the phenomenological continuity of life and mind (2022) 0.01
    Abstract
    This paper offers a novel interpretation of Hans Jonas' analysis of metabolism, the centrepiece of Jonas' philosophy of organism, in relation to recent controversies regarding the phenomenological dimension of life-mind continuity as understood within 'autopoietic' enactivism (AE). Jonas' philosophy of organism chiefly inspired AE's development of what we might call 'the phenomenological life-mind continuity thesis' (PLMCT), the claim that certain phenomenological features of human experience are central to a proper scientific understanding of both life and mind, and as such central features of all living organisms. After discussing the understanding of PLMCT within AE, and recent criticisms thereof, I develop a reading of Jonas' analysis of metabolism, in light of previous commentators, which emphasizes its systematicity and transcendental flavour. The central thought is that, for Jonas, the attribution of certain phenomenological features is a necessary precondition for our understanding of the possibility of metabolism, rather than being derivable from metabolism itself. I argue that my interpretation strengthens Jonas' contribution to AE's justification for ascribing certain phenomenological features to life across the board. However, it also emphasises the need to complement Jonas' analysis with an explanatory account of organic identity in order to vindicate these phenomenological ascriptions in a scientific context.
  6. Ruthven, I.: Resonance and the experience of relevance (2021) 0.01
    Abstract
    In this article, I propose the concept of resonance as a useful one for describing what it means to experience relevance. Based on an extensive interdisciplinary review, I provide a novel framework that presents resonance as a spectrum of experience with a multitude of outcomes ranging from a sense of harmony and coherence to life transformation. I argue that resonance has different properties to the more traditional interpretation of relevance and provides a better system of explanation of what it means to experience relevance. I show how traditional approaches to relevance and resonance work in a complementary fashion and outline how resonance may present distinct new lines of research into relevance theory.
  7. Al-Khatib, K.; Ghosal, T.; Hou, Y.; Waard, A. de; Freitag, D.: Argument mining for scholarly document processing : taking stock and looking ahead (2021) 0.01
    Abstract
    Argument mining targets structures in natural language related to interpretation and persuasion. Most scholarly discourse involves interpreting experimental evidence and attempting to persuade other scientists to adopt the same conclusions, which could benefit from argument mining techniques. However, while various argument mining studies have addressed student essays and news articles, those that target scientific discourse are still scarce. This paper surveys existing work in argument mining of scholarly discourse and provides an overview of current models, data, tasks, and applications. We identify a number of key challenges confronting argument mining in the scientific domain, and suggest some possible solutions and future directions.
  8. Sewing, S.: Bestandserhaltung und Archivierung : Koordinierung auf der Basis eines gemeinsamen Metadatenformates in den deutschen und österreichischen Bibliotheksverbünden (2021) 0.01
    Abstract
    The 2015 recommendations for action of the Koordinierungsstelle für die Erhaltung des schriftlichen Kulturguts (KEK) call for a national standard for documenting preservation measures: "In future, library catalogues should document preservation measures for holdings from 1851 onwards [.] in standardized form and make them searchable for cross-network comparison. This requires a joint agreement with the library networks [.]." On the basis of a survey conducted in 2015, the KEK recommendations identify almost nine million monograph volumes from the period 1851-1990, held as legal-deposit copies at federal and state institutions, that are acutely threatened by paper decay and are to be deacidified as the first stage of an overall strategy. One of the KEK's goals is to promote standardized and certified mass-deacidification procedures. The metadata format initially documents five mass-deacidification procedures as controlled vocabulary: DEZ, Mg3/MBG, METE, MgO, MMMC[2]. With this information, which can be selected in a targeted way, the use of individual mass-deacidification procedures becomes retrievable and statistically analyzable in the medium and long term.
    Date
    22. 5.2021 12:43:05
  9. Cheti, A.; Viti, E.: Functionality and merits of a faceted thesaurus : the case of the Nuovo soggettario (2023) 0.01
    Date
    26.11.2023 18:59:22
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  10. Lardera, M.; Hjoerland, B.: Keyword (2021) 0.01
    Abstract
    This article discusses the different meanings of 'keyword' and related terms such as 'keyphrase', 'descriptor', 'index term', 'subject heading', 'tag' and 'n-gram' and suggests definitions of each of these terms. It further illustrates a classification of keywords based on how they are produced or who generates them, and presents a comparison of author-assigned, indexer-assigned, and reader-assigned keywords, as well as the automatic generation of keywords. The article also considers the functions of keywords, including their use for generating bibliographic indexes. The theoretical view informing the article is that the assignment of a keyword to a text, picture or other document involves an interpretation of the document and an evaluation of the document's potential for users. This perspective is important both for manually assigned keywords and for automated generation, and is opposed to a strong tendency to consider a set of keywords as ideally presenting the one best representation of a document for all requests.
  11. Fang, Z.; Dudek, J.; Costas, R.: Facing the volatility of tweets in altmetric research (2022) 0.01
    Abstract
    The data re-collection for tweets from data snapshots is a common methodological step in Twitter-based research. Understanding better the volatility of tweets over time is important for validating the reliability of metrics based on Twitter data. We tracked a set of 37,918 original scholarly tweets mentioning COVID-19-related research daily for 56 days and captured the reasons for the changes in their availability over time. Results show that the proportion of unavailable tweets increased from 1.6 to 2.6% in the time window observed. Of the 1,323 tweets that became unavailable at some point in the period observed, 30.5% became available again afterwards. "Revived" tweets resulted mainly from the unprotecting, reactivating, or unsuspending of users' accounts. Our findings highlight the importance of noting this dynamic nature of Twitter data in altmetric research and testify to the challenges that this poses for the retrieval, processing, and interpretation of Twitter data about scientific papers.
  12. Sluis, F. van der; Broek, E.L. van den: Feedback beyond accuracy : using eye-tracking to detect comprehensibility and interest during reading (2023) 0.01
    Abstract
    Knowing what information a user wants is a paramount challenge for information science and technology. Implicit feedback is key to meeting this challenge, as it allows information systems to learn about a user's needs and preferences. The available feedback, however, tends to be limited, and its interpretation proves difficult. To tackle this challenge, we present a user study that explores whether tracking the eyes can unpack part of the complexity inherent in relevance and relevance decisions. The eye behavior of 30 participants reading 18 news articles was compared with their subjectively appraised comprehensibility and interest at a discourse level. Using linear regression models, the eye-tracking signal explained 49.93% (comprehensibility) and 30.41% (interest) of variance (p < .001). We conclude that eye behavior provides implicit feedback beyond accuracy that enables new forms of adaptation and interaction support for personalized information systems.
  13. Hong, Y.; Zeng, M.L.: International Classification of Diseases (ICD) (2022) 0.01
    Abstract
    This article presents the history, contents, structures, functions, and applications of the International Classification of Diseases (ICD), which is a global standard maintained by the World Health Organization (WHO). The article aims to present ICD from the knowledge organization perspective and focuses on the current versions, ICD-10 and ICD-11. It also introduces the relationship between ICD and other health knowledge organization systems (KOSs), plus efforts in research and development reported in health informatics. The article concludes that the high-level effort of promoting a unified classification system such as ICD is critical in providing a common language for systematic recording, reporting, analysis, interpretation, and comparison of mortality and morbidity data. It greatly enhances the consistency of coding across languages, cultures, and healthcare systems around the world.
  14. Wiesenmüller, H.: Verbale Erschließung in Katalogen und Discovery-Systemen : Überlegungen zur Qualität (2021) 0.01
    Abstract
    Anyone concerned with subject indexing must first distinguish two dimensions: the knowledge organization systems themselves (e.g. authority files, thesauri, subject heading languages, classifications, and ontologies) and the metadata for the documents indexed with these systems. The two interact: the knowledge organization systems are the tools of indexing work and form the basis for creating concrete indexing metadata, while the practical application of these systems in indexing is in turn the basis for their maintenance and further development. At the same time, knowledge organization systems also have a value of their own, independent of the indexing metadata for individual documents, in that they model certain areas of world or domain knowledge. To make statements about the quality of subject indexing, it is therefore not enough to consider the input, i.e. the knowledge organization systems and the metadata generated with them. One must also consider the output, i.e. what the search tools make of them and what consequently reaches the users. This article offers reflections on the quality of search tools in this area, as a continuation and deepening of the notes given in the position paper of the expert team RDA-Anwendungsprofil für die verbale Inhaltserschließung (ET RAVI). The focus is on verbal indexing according to the Regeln für die Schlagwortkatalogisierung (RSWK) as it manifests itself in library catalogues, whether traditional catalogues or resource discovery systems (RDS).
    Date
    24. 9.2021 12:22:02
  15. Velios, A.; St.John, K.: Linked conservation data : the adoption and use of vocabularies in the field of heritage conservation for publishing conservation records as linked data (2021) 0.01
    Abstract
    One of the fundamental roles of memory organisations is to safeguard collections, and this includes activities around their preservation and conservation. Conservators produce documentation records of their work to assist future interpretation of objects and to explain decision-making in conservation. This documentation may exist as structured data or free text; in both cases it requires vocabularies that are widely understood in the domain. This paper describes a survey of conservation professionals which allowed us to compile the vocabularies used in the domain. It includes an analysis of the vocabularies with key findings: a) overlapping terms with multiple definitions, b) partial coverage of the domain, which lacks controlled vocabularies for condition types and treatment techniques, and c) the limited formats in which vocabularies are published, making them difficult to use within Linked Data implementations. The paper then describes an approach to improving the vocabulary landscape in conservation by providing guidelines for encoding and aligning vocabularies, as well as considering third-party platforms for sharing vocabularies in a sustainable way. The paper concludes with a summary of our findings and recommendations.
  16. Hjoerland, B.: Science, Part I : basic conceptions of science and the scientific method (2021) 0.01
    Abstract
    This article is the first in a trilogy about the concept "science". Section 1 considers the historical development of the meaning of the term science and shows its close relation to the terms "knowledge" and "philosophy". Section 2 presents four historic phases in the basic conceptualizations of science: (1) science as representing absolutely certain knowledge based on deductive proof; (2) science as representing absolutely certain knowledge based on "the scientific method"; (3) science as representing fallible knowledge based on "the scientific method"; (4) science without a belief in "the scientific method" as constitutive, whereby the question about the nature of science becomes dramatic. Section 3 presents four basic understandings of the scientific method: rationalism, which gives priority to a priori thinking; empiricism, which gives priority to the collection, description, and processing of data in a neutral way; historicism, which gives priority to the interpretation of data in the light of "paradigms"; and pragmatism, which emphasizes the analysis of the purposes, consequences, and interests of knowledge. The second article in the trilogy focuses on the different fields studying science, while the final article presents further developments in the concept of science and the general conclusion. Overall, the trilogy illuminates the most important tensions in different conceptualizations of science, argues for the role of information science and knowledge organization in the study of science, and suggests how "science" should be understood as an object of research in these fields.
  17. Jetter, H.-C.: Informationsvisualisierung und Visual Analytics (2023) 0.01
    Abstract
    Visualizing digital data sets with a computer has become an everyday matter. Since the COVID-19 pandemic at the latest, computer-generated data visualizations and their interpretation by humans are no longer reserved for experts in statistics and data analysis. Instead, interactive visualizations showing trends, patterns, or comparisons in data have become a fixed part of our everyday media life, whether in (data) journalism, on social media, or in the communication of public authorities with the population. As Reiterer and Jetter (2013) noted in an earlier edition of this chapter, this trend towards interactive and narrative visualization in the mass media offers users new possibilities for data-based insight. Since then, the multitude of available "tracker" apps aimed at behavioral optimization (e.g. in fitness or energy consumption) has further popularized the interactive visualization and analysis of personal and private data. In everyday working life, too, former niche tools such as the visualization software Tableau have turned into extremely popular applications and become the object of investments running into the tens of billions, particularly for the visualization and analysis of business data. In light of these developments, this chapter introduces basic terms and concepts of information visualization on the one hand and addresses everyday forms and future trends such as visual analytics on the other.
  18. Petrovich, E.: Science mapping and science maps (2021) 0.01
    Abstract
    Science maps are visual representations of the structure and dynamics of scholarly knowledge. They aim to show how fields, disciplines, journals, scientists, publications, and scientific terms relate to each other. Science mapping is the body of methods and techniques that have been developed for generating science maps. This entry is an introduction to science maps and science mapping. It focuses on the conceptual, theoretical, and methodological issues of science mapping, rather than on the mathematical formulation of science mapping techniques. After a brief history of science mapping, we describe the general procedure for building a science map, presenting the data sources and the methods to select, clean, and pre-process the data. Next, we examine in detail how the most common types of science maps, namely the citation-based and the term-based, are generated. Both are based on networks: the former on the network of publications connected by citations, the latter on the network of terms co-occurring in publications. We review the rationale behind these mapping approaches, as well as the techniques and methods to build the maps (from the extraction of the network to the visualization and enrichment of the map). We also present less-common types of science maps, including co-authorship networks, interlocking editorship networks, maps based on patent data, and geographic maps of science. Moreover, we consider how time can be represented in science maps to investigate the dynamics of science. We also discuss some epistemological and sociological topics that can help in the interpretation, contextualization, and assessment of science maps. Then, we present some possible applications of science maps in science policy. In the conclusion, we point out why science mapping may be interesting for all the branches of meta-science, from knowledge organization to epistemology.
  19. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.01
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  20. Laczny, J.: Fit for Purpose : Standardisierung von inhaltserschließenden Informationen durch Richtlinien für Metadaten (2021) 0.01
    Abstract
    The article examines the extent to which libraries can influence the quality standards for subject-indexing information about their resources by formulating and publishing a library-specific, overarching metadata guideline or policy, also in the sense of a transparency initiative, and by applying it.

Languages

  • e 87
  • d 47
  • pt 2

Types

  • a 127
  • el 28
  • m 2
  • p 2
  • x 2