Search (2 results, page 1 of 1)

  • author_ss:"Fleischmann, K.R."
  • language_ss:"e"
  • theme_ss:"Informationsethik"
  1. Fleischmann, K.R.; Hui, C.; Wallace, W.A.: The societal responsibilities of computational modelers : human values and professional codes of ethics (2017) 0.00
    0.003986755 = product of:
      0.01594702 = sum of:
        0.01594702 = weight(_text_:for in 3424) [ClassicSimilarity], result of:
          0.01594702 = score(doc=3424,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17964928 = fieldWeight in 3424, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3424)
      0.25 = coord(1/4)
    
    Abstract
    Information and communication technology (ICT) has increasingly important implications for our everyday lives, with the potential to both solve existing social problems and create new ones. This article focuses on one particular group of ICT professionals, computational modelers, and explores how these ICT professionals perceive their own societal responsibilities. Specifically, the article uses a mixed-method approach to look at the role of professional codes of ethics and explores the relationship between modelers' experiences with, and attitudes toward, codes of ethics and their values. Statistical analysis of survey data reveals a relationship between modelers' values and their attitudes and experiences related to codes of ethics. Thematic analysis of interviews with a subset of survey participants identifies two key themes: that modelers should be faithful to the reality and values of users and that codes of ethics should be built from the bottom up. One important implication of the research is that those who value universalism and benevolence may have a particular duty to act on their values and advocate for, and work to develop, a code of ethics.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, pp.543-552
  2. Slota, S.C.; Fleischmann, K.R.; Greenberg, S.; Verma, N.; Cummings, B.; Li, L.; Shenefiel, C.: Locating the work of artificial intelligence ethics (2023) 0.00
    0.0039062058 = product of:
      0.015624823 = sum of:
        0.015624823 = weight(_text_:for in 899) [ClassicSimilarity], result of:
          0.015624823 = score(doc=899,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 899, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=899)
      0.25 = coord(1/4)
    
    Abstract
    The scale and complexity of the data and algorithms used in artificial intelligence (AI)-based systems present significant challenges for anticipating their ethical, legal, and policy implications. Given these challenges, who does the work of AI ethics, and how do they do it? This study reports findings from interviews with 26 stakeholders in AI research, law, and policy. The primary themes are that the work of AI ethics is structured by personal values and professional commitments, and that it involves situated meaning-making through data and algorithms. Given the stakes involved, it is not enough to simply be satisfied that AI will not behave unethically; rather, the work of AI ethics needs to be incentivized.
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.3, pp.311-322
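
The relevance figures shown above for both results are Lucene "explain" output for ClassicSimilarity (TF-IDF) ranking of the query term "for". As a rough cross-check, the minimal Python sketch below reproduces both displayed scores. It assumes Lucene's documented ClassicSimilarity formulas, tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), and simply reuses the queryNorm, fieldNorm, and coord values printed in the explain trees; the function name is illustrative only.

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
    """Recompute a Lucene ClassicSimilarity score from explain-tree inputs:
    score = coord * (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return coord * query_weight * field_weight

# Result 1 (doc 3424): freq("for") = 6, fieldNorm = 0.0390625
print(classic_similarity_score(6.0, 18385, 44218, 0.047278564, 0.0390625, 0.25))
# ~0.0039868, matching the displayed 0.003986755

# Result 2 (doc 899): freq("for") = 4, fieldNorm = 0.046875
print(classic_similarity_score(4.0, 18385, 44218, 0.047278564, 0.046875, 0.25))
# ~0.0039062, matching the displayed 0.0039062058
```

The "0.00" shown next to each result title is this score rounded to two decimal places.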