Search (3 results, page 1 of 1)

  • year_i:[2020 TO 2030}
  • theme_ss:"Informationsethik"
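The two active filters above use Lucene range and phrase syntax; note the mixed brackets in the year filter, where `[` makes the lower bound inclusive and `}` makes the upper bound exclusive. As a minimal sketch, assuming a Solr-style backend (the `_i`/`_ss` suffixes look like Solr dynamic fields, which is an assumption; the field names themselves are taken verbatim from the filters):

```python
# Hypothetical reconstruction of the filter parameters behind this page,
# assuming a Solr-style backend (an assumption; field names are verbatim
# from the facet filters above).
params = {
    "q": "*:*",
    "fq": [
        "year_i:[2020 TO 2030}",         # mixed brackets: 2020 <= year < 2030
        'theme_ss:"Informationsethik"',  # exact phrase match on the theme field
    ],
}

def in_year_range(year, lower=2020, upper=2030):
    """Lucene range [lower TO upper}: `[` is inclusive, `}` is exclusive."""
    return lower <= year < upper
```

All three hits below (published 2021, 2023, 2023) fall inside this half-open range.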
  1. Rubel, A.; Castro, C.; Pham, A.: Algorithms and autonomy : the ethics of automated decision systems (2021) 0.02
    0.016672583 = product of:
      0.033345167 = sum of:
        0.033345167 = product of:
          0.06669033 = sum of:
            0.06669033 = weight(_text_:systems in 671) [ClassicSimilarity], result of:
              0.06669033 = score(doc=671,freq=12.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.41585106 = fieldWeight in 671, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=671)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
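The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) debug output. Its leaf score can be reproduced from the stock formulas tf = √freq and idf = 1 + ln(maxDocs / (docFreq + 1)), with queryNorm and fieldNorm taken as given; a minimal sketch, assuming those formulas (which the numbers above are consistent with):

```python
import math

def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm, coord=1.0):
    """Recompute a Lucene ClassicSimilarity (TF-IDF) leaf score.

    tf          = sqrt(freq)
    idf         = 1 + ln(maxDocs / (docFreq + 1))
    queryWeight = idf * queryNorm
    fieldWeight = tf * idf * fieldNorm
    score       = coord * queryWeight * fieldWeight
    """
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return coord * (idf * query_norm) * (tf * idf * field_norm)

# Top hit: weight(_text_:systems in 671), freq=12, two coord(1/2) factors
score = classic_similarity(freq=12.0, doc_freq=5561, max_docs=44218,
                           field_norm=0.0390625, query_norm=0.052184064,
                           coord=0.5 * 0.5)
# score matches the 0.016672583 in the explain tree up to float32 rounding
```

The same call with `freq=4.0` (and `freq=2.0` with `field_norm=0.046875`) reproduces the scores of hits 2 and 3 below.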
    
    Abstract
    Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work... the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these case studies, the authors provide a better understanding of machine fairness and algorithmic transparency. They explain why interventions in algorithmic systems are necessary to ensure that algorithms are not used to control citizens' participation in politics and undercut democracy. This title is also available as Open Access on Cambridge Core.
    LCSH
    Decision support systems / Moral and ethical aspects
    Expert systems (Computer science) / Moral and ethical aspects
  2. Tran, Q.-T.: Standardization and the neglect of museum objects : an infrastructure-based approach for inclusive integration of cultural artifacts (2023) 0.01
    0.009625921 = product of:
      0.019251842 = sum of:
        0.019251842 = product of:
          0.038503684 = sum of:
            0.038503684 = weight(_text_:systems in 1136) [ClassicSimilarity], result of:
              0.038503684 = score(doc=1136,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.24009174 = fieldWeight in 1136, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1136)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The paper examines the integration of born-digital and digitized content into an outdated classification system within the Museum of European Cultures in Berlin. It underscores the predicament encountered by small to medium-sized cultural institutions as they navigate between adhering to established knowledge management systems and preserving an expanding array of contemporary cultural artifacts. The perspective of infrastructure studies is employed to scrutinize the representation of diverse viewpoints and voices within the museum's collections. The study delves into museum personnel's challenges in cataloging and classifying ethnographic objects utilizing a numerical-alphabetical categorization scheme from the 1930s. It presents an analysis of the limitations inherent in this method, along with its implications for the assimilation of emerging forms of born-digital and digitized objects. Through an exploration of the case of category 74, as observed at the Museum of European Cultures, the study illustrates the complexities of replacing pre-existing systems due to their intricate integration into the socio-technical components of the museum's information infrastructure. The paper reflects on how resource-constrained cultural institutions can take a proactive and ethical approach to knowledge management, re-evaluating their knowledge infrastructure to promote inclusion and ensure adaptability.
  3. Slota, S.C.; Fleischmann, K.R.; Greenberg, S.; Verma, N.; Cummings, B.; Li, L.; Shenefiel, C.: Locating the work of artificial intelligence ethics (2023) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 899) [ClassicSimilarity], result of:
              0.03267146 = score(doc=899,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 899, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=899)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The scale and complexity of the data and algorithms used in artificial intelligence (AI)-based systems present significant challenges for anticipating their ethical, legal, and policy implications. Given these challenges, who does the work of AI ethics, and how do they do it? This study reports findings from interviews with 26 stakeholders in AI research, law, and policy. The primary themes are that the work of AI ethics is structured by personal values and professional commitments, and that it involves situated meaning-making through data and algorithms. Given the stakes involved, it is not enough to simply be satisfied that AI will not behave unethically; rather, the work of AI ethics needs to be incentivized.