Search (7 results, page 1 of 1)

  • theme_ss:"Informationsethik"
  • type_ss:"a"
  • year_i:[2010 TO 2020}
  1. Helbing, D.; Frey, B.S.; Gigerenzer, G.; Hafen, E.; Hagner, M.; Hofstetter, Y.; Hoven, J. van den; Zicari, R.V.; Zwitter, A.: Digitale Demokratie statt Datendiktatur : Digital-Manifest (2016) 0.00
    0.0038346653 = product of:
      0.034511987 = sum of:
        0.034511987 = weight(_text_:data in 5600) [ClassicSimilarity], result of:
          0.034511987 = score(doc=5600,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.29644224 = fieldWeight in 5600, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5600)
      0.11111111 = coord(1/9)
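
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output for the query term "data" in this record. A minimal sketch of how the displayed numbers combine, using only the values shown above (the variable names are illustrative, not part of the retrieval software):

      import math

      # Reconstruction of the explain tree for doc 5600 (result 1);
      # a sketch, not the catalog's actual code path.
      freq       = 4.0          # termFreq of "data" in the indexed text
      doc_freq   = 5088         # docFreq from the explain output
      max_docs   = 44218        # maxDocs from the explain output
      query_norm = 0.036818076  # queryNorm reported above
      field_norm = 0.046875     # fieldNorm (length normalization) reported above

      tf  = math.sqrt(freq)                              # 2.0
      idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # ~3.1620505

      query_weight = idf * query_norm             # ~0.11642061  (queryWeight)
      field_weight = tf * idf * field_norm        # ~0.29644224  (fieldWeight)
      raw_score    = query_weight * field_weight  # ~0.034511987

      coord = 1.0 / 9.0         # only 1 of 9 query clauses matched this document
      print(raw_score * coord)  # ~0.0038346653, the score shown for this hit

    The same formula, with freq=2.0 and different fieldNorm values, accounts for the scores of results 2 through 5.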
    
    Abstract
    Big Data, nudging, behavioral control: are we facing the automation of society through algorithms and artificial intelligence? An appeal for safeguarding freedom and democracy.
    Content
    Nine international experts warn of the erosion of our civil rights and of democracy in the course of the digital technology revolution. According to them, we are heading straight toward the automation of our society and the remote control of its citizens by algorithms, in which »Big Data« and »nudging« methods combine into a powerful instrument. First signs of this can already be observed in China and Singapore. A ten-point plan is intended to help set the right course now, so that civil liberties and democracy are preserved in the digital age and the opportunities it offers are seized. See also the interview with D. Helbing on the response, at: http://www.spektrum.de/news/wie-social-bots-den-brexit-verursachten/1423912. See also: https://www.spektrum.de/kolumne/das-grosse-scheitern/1685328.
  2. Rockembach, M.; Malheiro da Silva, A.: Epistemology and ethics of big data (2018) 0.00
    0.003615357 = product of:
      0.032538213 = sum of:
        0.032538213 = weight(_text_:data in 4848) [ClassicSimilarity], result of:
          0.032538213 = score(doc=4848,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2794884 = fieldWeight in 4848, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=4848)
      0.11111111 = coord(1/9)
    
  3. Fleischmann, K.R.; Hui, C.; Wallace, W.A.: ¬The societal responsibilities of computational modelers : human values and professional codes of ethics (2017) 0.00
    0.0022595983 = product of:
      0.020336384 = sum of:
        0.020336384 = weight(_text_:data in 3424) [ClassicSimilarity], result of:
          0.020336384 = score(doc=3424,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.17468026 = fieldWeight in 3424, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3424)
      0.11111111 = coord(1/9)
    
    Abstract
    Information and communication technology (ICT) has increasingly important implications for our everyday lives, with the potential to both solve existing social problems and create new ones. This article focuses on one particular group of ICT professionals, computational modelers, and explores how these ICT professionals perceive their own societal responsibilities. Specifically, the article uses a mixed-method approach to look at the role of professional codes of ethics and explores the relationship between modelers' experiences with, and attitudes toward, codes of ethics and their values. Statistical analysis of survey data reveals a relationship between modelers' values and their attitudes and experiences related to codes of ethics. Thematic analysis of interviews with a subset of survey participants identifies two key themes: that modelers should be faithful to the reality and values of users and that codes of ethics should be built from the bottom up. One important implication of the research is that those who value universalism and benevolence may have a particular duty to act on their values and advocate for, and work to develop, a code of ethics.
  4. Brandt, M.B.: Ethical aspects in the organization of legislative information (2018) 0.00
    0.0022595983 = product of:
      0.020336384 = sum of:
        0.020336384 = weight(_text_:data in 4149) [ClassicSimilarity], result of:
          0.020336384 = score(doc=4149,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.17468026 = fieldWeight in 4149, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4149)
      0.11111111 = coord(1/9)
    
    Abstract
    The goal of this research is to analyze ethical questions related to the organization of legislative information (bills, laws, and speeches) within the scope of the Brazilian Federal legislature (Chamber of Deputies and Federal Senate). Field research including interviews was used to collect data in order to investigate the development of knowledge representation tools, such as thesauri and taxonomies, and subject indexing for the organization of legislative information (bills, legislation, and speeches). The heads of all sectors responsible for the chosen activities were interviewed in person, and the answers were compared to common ethical problems described in the knowledge organization (KO) literature. The results show, in part, a lack of clarity on ethical issues in the treatment of legislative information, pointing to ethical dilemmas and identifying problems such as informational directness, misrepresentation, and ambiguity, among others. The indexers in the Brazilian Congress found ambiguity to be the ethical problem faced most often in their work, followed by professional inefficiency, with informational directness and lack of cultural warrant tied in third place. The research also describes solutions used for various ethical dilemmas. It was found that some indexing terms used to describe bills in the Brazilian Chamber of Deputies have been subject to censorship, and that censored, or censurable, indexing terms have to be hidden in metadata so that documents can still be retrieved by users. It concludes that greater ethical awareness of technical aspects is needed among Brazilian Federal legislative information professionals.
  5. Broughton, V.: ¬The respective roles of intellectual creativity and automation in representing diversity : human and machine generated bias (2019) 0.00
    0.0022595983 = product of:
      0.020336384 = sum of:
        0.020336384 = weight(_text_:data in 5728) [ClassicSimilarity], result of:
          0.020336384 = score(doc=5728,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.17468026 = fieldWeight in 5728, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5728)
      0.11111111 = coord(1/9)
    
    Abstract
    The paper traces the development of the discussion around ethical issues in artificial intelligence, and considers the way in which humans have affected the knowledge bases used in machine learning. The phenomenon of bias or discrimination in machine ethics is seen as inherited from humans, either through the use of biased data or through the semantics inherent in intellectually built tools sourced by intelligent agents. The kinds of bias observed in AI are compared with those identified in the field of knowledge organization, using religious adherents as an example of a community potentially marginalized by bias. A practical demonstration is given of apparent religious prejudice inherited from source material in a large database deployed widely in computational linguistics and automatic indexing. Methods to address the problem of bias are discussed, including the modelling of the moral process on neuroscientific understanding of brain function. The question is posed whether it is possible to model religious belief in a similar way, so that robots of the future may have both an ethical and a religious sense and themselves address the problem of prejudice.
  6. Helbing, D.: ¬Das große Scheitern (2019) 0.00
    0.0022170406 = product of:
      0.019953365 = sum of:
        0.019953365 = product of:
          0.03990673 = sum of:
            0.03990673 = weight(_text_:22 in 5599) [ClassicSimilarity], result of:
              0.03990673 = score(doc=5599,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.30952093 = fieldWeight in 5599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5599)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
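
    Here the matching clause (most likely the token "22", which occurs in this record's timestamp) sits inside a two-clause sub-query, so a second coordination factor applies. A quick arithmetic check of the figures above, under the same assumptions as the sketch for result 1:

      # Check of the nested explain tree for doc 5599 (result 6); a sketch only.
      weight_22   = 0.03990673  # weight(_text_:22 in 5599)
      inner_coord = 1.0 / 2.0   # 1 of 2 clauses in the sub-query matched
      outer_coord = 1.0 / 9.0   # 1 of 9 top-level query clauses matched

      print(weight_22 * inner_coord * outer_coord)  # ~0.0022170406

    Result 7 follows the same pattern with a smaller fieldNorm.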
    
    Date
    25.12.2019 14:19:22
  7. Homan, P.A.: Library catalog notes for "bad books" : ethics vs. responsibilities (2012) 0.00
    0.0013856502 = product of:
      0.012470853 = sum of:
        0.012470853 = product of:
          0.024941705 = sum of:
            0.024941705 = weight(_text_:22 in 420) [ClassicSimilarity], result of:
              0.024941705 = score(doc=420,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.19345059 = fieldWeight in 420, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=420)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    27. 9.2012 14:22:00