Search (3 results, page 1 of 1)

  • × language_ss:"e"
  • × theme_ss:"Informationsethik"
  • × year_i:[2020 TO 2030}
  1. Slota, S.C.; Fleischmann, K.R.; Greenberg, S.; Verma, N.; Cummings, B.; Li, L.; Shenefiel, C.: Locating the work of artificial intelligence ethics (2023) 0.01
    0.010039202 = product of:
      0.050196007 = sum of:
        0.050196007 = weight(_text_:it in 899) [ClassicSimilarity], result of:
          0.050196007 = score(doc=899,freq=6.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.33208904 = fieldWeight in 899, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=899)
      0.2 = coord(1/5)
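The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula: score = queryWeight × fieldWeight × coord, with queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm. As a rough cross-check, the scores on this page can be recomputed from the leaf values; a minimal sketch (the function name and parameter names are illustrative, not part of the result page):

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs,
                             query_norm, field_norm, coord):
    """Recompute a Lucene ClassicSimilarity score from explain-tree leaves.

    score = queryWeight * fieldWeight * coord, where
      queryWeight = idf * queryNorm
      fieldWeight = tf * idf * fieldNorm
      tf  = sqrt(freq)
      idf = 1 + ln(maxDocs / (docFreq + 1))
    """
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    tf = math.sqrt(freq)
    return (idf * query_norm) * (tf * idf * field_norm) * coord

# Result 1: freq=6.0, docFreq=6664 of maxDocs=44218, coord=1/5
score = classic_similarity_score(6.0, 6664, 44218,
                                 0.052260913, 0.046875, 0.2)
```

Plugging in the leaf values for result 1 reproduces the displayed 0.010039202 to within floating-point rounding; the same function applied to the other two explain trees reproduces their scores as well.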
    
    Abstract
    The scale and complexity of the data and algorithms used in artificial intelligence (AI)-based systems present significant challenges for anticipating their ethical, legal, and policy implications. Given these challenges, who does the work of AI ethics, and how do they do it? This study reports findings from interviews with 26 stakeholders in AI research, law, and policy. The primary themes are that the work of AI ethics is structured by personal values and professional commitments, and that it involves situated meaning-making through data and algorithms. Given the stakes involved, it is not enough simply to be satisfied that AI will not behave unethically; rather, the work of AI ethics needs to be incentivized.
  2. Tran, Q.-T.: Standardization and the neglect of museum objects : an infrastructure-based approach for inclusive integration of cultural artifacts (2023) 0.01
    0.006830811 = product of:
      0.034154054 = sum of:
        0.034154054 = weight(_text_:it in 1136) [ClassicSimilarity], result of:
          0.034154054 = score(doc=1136,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22595796 = fieldWeight in 1136, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1136)
      0.2 = coord(1/5)
    
    Abstract
    The paper examines the integration of born-digital and digitized content into an outdated classification system within the Museum of European Cultures in Berlin. It underscores the predicament encountered by smaller to medium-sized cultural institutions as they navigate between adhering to established knowledge management systems and preserving an expanding array of contemporary cultural artifacts. The perspective of infrastructure studies is employed to scrutinize the representation of diverse viewpoints and voices within the museum's collections. The study delves into museum personnel's challenges in cataloging and classifying ethnographic objects utilizing a numerical-alphabetical categorization scheme from the 1930s. It presents an analysis of the limitations inherent in this method, along with its implications for the assimilation of emerging forms of born-digital and digitized objects. Through an exploration of the case of category 74, as observed at the Museum of European Cultures, the study illustrates the complexities of replacing pre-existing systems due to their intricate integration into the socio-technical components of the museum's information infrastructure. The paper reflects on how resource-constrained cultural institutions can take a proactive and ethical approach to knowledge management, re-evaluating their knowledge infrastructure to promote inclusion and ensure adaptability.
  3. Bagatini, J.A.; Chaves Guimarães, J.A.: Algorithmic discriminations and their ethical impacts on knowledge organization : a thematic domain-analysis (2023) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 1134) [ClassicSimilarity], result of:
          0.024150565 = score(doc=1134,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 1134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1134)
      0.2 = coord(1/5)
    
    Abstract
    Personal data play a fundamental role in contemporary socioeconomic dynamics, not least because they can facilitate discriminatory situations. This situation particularly affects the knowledge organization field, which treats personal data as elements (facets) for categorizing persons from an economic and sometimes discriminatory perspective. The research corpus was collected from Scopus and Web of Science through the end of 2021, under the terms "data discrimination", "algorithmic bias", "algorithmic discrimination" and "fair algorithms". The results obtained allow the inference that the analyzed knowledge domain predominantly incorporates personal data, whether in its behavioral dimension or in the scope of the so-called sensitive data. These data are susceptible to the action of algorithms of different orders, such as relevance, filtering, predictive, social ranking, content recommendation and random classification. Such algorithms can have discriminatory biases in their programming related to gender, sexual orientation, race, nationality, religion, age, social class, socioeconomic profile, physical appearance, and political positioning.