Search (14 results, page 1 of 1)

  • year_i:[2020 TO 2030}
  • theme_ss:"Informationsethik"
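
  The two active filters above are Solr filter queries: year_i:[2020 TO 2030} restricts the year field to a range that includes 2020 and excludes 2030 (the closing curly brace marks an exclusive upper bound), while theme_ss:"Informationsethik" matches the theme facet exactly. A minimal sketch of how this filtered search could be reproduced against a Solr endpoint (the host, core name, and output field names below are assumptions, not values taken from this page):

    import requests  # assumption: a reachable Solr instance; URL and core name are hypothetical

    params = {
        "q": "*:*",                              # no free-text query; only the two filters are active
        "fq": ['year_i:[2020 TO 2030}',          # inclusive lower bound, exclusive upper bound
               'theme_ss:"Informationsethik"'],  # exact match on the theme facet
        "rows": 20,
        "debugQuery": "true",                    # asks Solr to return the score explanations shown below
        "wt": "json",
    }
    response = requests.get("http://localhost:8983/solr/literature/select", params=params)
    for doc in response.json()["response"]["docs"]:
        print(doc.get("id"), doc.get("title"))   # field names are placeholders
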
  1. Rösch, H.: Informationsethik (2023) 0.02
    0.01761418 = product of:
      0.05284254 = sum of:
        0.011805649 = weight(_text_:in in 821) [ClassicSimilarity], result of:
          0.011805649 = score(doc=821,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19881277 = fieldWeight in 821, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=821)
        0.04103689 = weight(_text_:und in 821) [ClassicSimilarity], result of:
          0.04103689 = score(doc=821,freq=24.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.42413816 = fieldWeight in 821, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=821)
      0.33333334 = coord(2/6)
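
    The indented block above is Lucene's ClassicSimilarity explain output for this hit: for each matched term, tf (the square root of the term frequency) is multiplied by idf and by the fieldNorm to give the fieldWeight, which is then multiplied by the queryWeight (idf times queryNorm); the per-term scores are summed and scaled by coord(2/6) because two of six query clauses matched. A minimal sketch that recomputes the figures shown above (all constants are copied from the explain tree, nothing else is assumed):

      import math

      QUERY_NORM = 0.043654136          # queryNorm, shared by both terms
      COORD = 2.0 / 6.0                 # coord(2/6): 2 of 6 query clauses matched

      def term_score(freq, idf, field_norm):
          tf = math.sqrt(freq)                  # ClassicSimilarity: tf = sqrt(termFreq)
          field_weight = tf * idf * field_norm  # "fieldWeight" in the explain tree
          query_weight = idf * QUERY_NORM       # "queryWeight" in the explain tree
          return query_weight * field_weight

      score_in = term_score(freq=14.0, idf=1.3602545, field_norm=0.0390625)   # _text_:in
      score_und = term_score(freq=24.0, idf=2.216367, field_norm=0.0390625)   # _text_:und
      total = (score_in + score_und) * COORD

      print(score_in)    # ~0.011805649
      print(score_und)   # ~0.04103689
      print(total)       # ~0.01761418, displayed above as 0.02
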
    
    Abstract
    The term Informationsethik (information ethics) was coined in the late 1980s in the library world and emerged at roughly the same time in the USA and Germany. Information ethics covers all ethically relevant questions that arise in connection with the production, storage, indexing, distribution, and use of information. It belongs to the applied or domain ethics that have emerged in large numbers over recent decades, among them business ethics, medical ethics, ethics of technology, computer ethics, and media ethics. A trend toward ever more specific domain ethics can be observed, for example food ethics or the ethics of algorithms. The division and delimitation of these domain ethics follow no uniform principle, which is why their number and their names vary considerably in the literature. Domain ethics partly overlap or sometimes stand in a complementary relationship to one another. Information ethics thus undoubtedly has links to media ethics, to the ethics of technology (computer ethics), to business ethics, to the ethics of science, and of course to social ethics. In contrast to general ethics, which deals with overarching aspects such as freedom, justice, or truthfulness, applied ethics on the one hand transfer general ethical principles and methods to particular spheres of life and fields of action; on the other hand, they work out specific questions and problems that are characteristic of the respective domain and receive no attention in general ethics. Applied ethics are fundamentally practice-oriented: they aim to sensitize the actors in the respective fields of action to ethical questions and to stabilize awareness of a shared basis of values, ideally documented in a code of ethics.
    Source
    Grundlagen der Informationswissenschaft. Hrsg.: Rainer Kuhlen, Dirk Lewandowski, Wolfgang Semar und Christa Womser-Hacker. 7., völlig neu gefasste Ausg
  2. Huber, W.: Menschen, Götter und Maschinen : eine Ethik der Digitalisierung (2022) 0.02
    0.016507369 = product of:
      0.049522102 = sum of:
        0.0071393843 = weight(_text_:in in 752) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=752,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 752, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
        0.042382717 = weight(_text_:und in 752) [ClassicSimilarity], result of:
          0.042382717 = score(doc=752,freq=40.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.438048 = fieldWeight in 752, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=752)
      0.33333334 = coord(2/6)
    
    Abstract
    Digitization has hollowed out our privacy, broken the public sphere up into antagonistic partial publics, lowered inhibition thresholds, and softened the boundary between truth and lies. Wolfgang Huber describes this technical and social development clearly and pointedly. He shows how consensus-capable ethical principles for dealing with digital intelligence can be found and implemented - by legislation, by digital providers, and by all users. Attitudes toward digitization swing between euphoria and apocalypse: some expect the creation of a new human being who makes himself a god; others fear the loss of freedom and human dignity. Wolfgang Huber, by contrast, takes a realistic look at the technological upheaval. That begins with language: are the "social media" really social? Does a car equipped with digital intelligence drive "autonomously", or rather in an automated way? Are algorithms that learn through pattern recognition therefore "intelligent"? Overblown language all too often lets us forget that even the most powerful computers are only machines, developed and operated by human beings. If need be, their plug has to be pulled. This wonderfully vivid book, fully abreast of current ethical debates, makes us aware that we must not surrender to digitization but can shape it in a self-determined and responsible way. Wolfgang Huber's 80th birthday on 12.8.2022. A remedy against overly euphoric and apocalyptic expectations of digitization. How we can change our attitude toward digitization so as not to surrender ourselves to technology.
    Classification
    MS 4850: Industrie (allgemeines) und Technik (Automatisierung), Technologie (Allgemeines)
    Content
    Vorwort -- 1. Das digitale Zeitalter -- Zeitenwende -- Die Vorherrschaft des Buchdrucks geht zu Ende -- Wann beginnt das digitale Zeitalter? -- 2. Zwischen Euphorie und Apokalypse -- Digitalisierung. Einfach. Machen -- Euphorie -- Apokalypse -- Verantwortungsethik -- Der Mensch als Subjekt der Ethik -- Verantwortung als Prinzip -- 3. Digitalisierter Alltag in einer globalisierten Welt -- Vom World Wide Web zum Internet der Dinge -- Mobiles Internet und digitale Bildung -- Digitale Plattformen und ihre Strategien -- Big Data und informationelle Selbstbestimmung -- 4. Grenzüberschreitungen -- Die Erosion des Privaten -- Die Deformation des Öffentlichen -- Die Senkung von Hemmschwellen -- Das Verschwinden der Wirklichkeit -- Die Wahrheit in der Infosphäre -- 5. Die Zukunft der Arbeit -- Industrielle Revolutionen -- Arbeit 4.0 -- Ethik 4.0 -- 6. Digitale Intelligenz -- Können Computer dichten? -- Stärker als der Mensch? -- Maschinelles Lernen -- Ein bleibender Unterschied -- Ethische Prinzipien für den Umgang mit digitaler Intelligenz -- Medizin als Beispiel -- 7. Die Würde des Menschen im digitalen Zeitalter -- Kränkungen oder Revolutionen -- Transhumanismus und Posthumanismus -- Gibt es Empathie ohne Menschen? -- Wer ist autonom: Mensch oder Maschine? -- Humanismus der Verantwortung -- 8. Die Zukunft des Homo sapiens -- Vergöttlichung des Menschen -- Homo deus -- Gott und Mensch im digitalen Zeitalter -- Veränderung der Menschheit -- Literatur -- Personenregister.
    Footnote
    Review in: Spektrum der Wissenschaft. 2022, no.11, p.86 (M. Springer).
    RVK
    MS 4850: Industrie (allgemeines) und Technik (Automatisierung), Technologie (Allgemeines)
  3. Rösch, H.: Informationsethik und Bibliotheksethik : Grundlagen und Praxis (2021) 0.01
    0.00854251 = product of:
      0.05125506 = sum of:
        0.05125506 = weight(_text_:und in 222) [ClassicSimilarity], result of:
          0.05125506 = score(doc=222,freq=26.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.5297484 = fieldWeight in 222, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=222)
      0.16666667 = coord(1/6)
    
    Abstract
    Alongside the theoretical and general foundations of information ethics and library ethics, the spectrum of ethical conflicts and dilemmas is illustrated concretely with examples from professional practice in the library and information field. It becomes clear that scientifically grounded statements of information ethics and library ethics are fundamental to the value-based standardization of library work and extremely helpful for ethically sound decisions in everyday professional practice.
    Classification
    AN 65100: Begriff, Wesen der Bibliothek / Allgemeines / Buch- und Bibliothekswesen, Informationswissenschaft
    AN 92650: Darstellungen zu mehreren Gebieten / Allgemeines / Buch- und Bibliothekswesen, Informationswissenschaft
    RVK
    AN 65100: Begriff, Wesen der Bibliothek / Allgemeines / Buch- und Bibliothekswesen, Informationswissenschaft
    AN 92650: Darstellungen zu mehreren Gebieten / Allgemeines / Buch- und Bibliothekswesen, Informationswissenschaft
    Series
    Bibliotheks- und Informationspraxis; 68
  4. Brito, M. de: Social affects engineering and ethics (2023) 0.00
    0.0019955188 = product of:
      0.011973113 = sum of:
        0.011973113 = weight(_text_:in in 1135) [ClassicSimilarity], result of:
          0.011973113 = score(doc=1135,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.20163295 = fieldWeight in 1135, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1135)
      0.16666667 = coord(1/6)
    
    Abstract
    This text proposes a multidisciplinary reflection on the subject of ethics, based on philosophical approaches, using Spinoza's work, Ethics, as a foundation. The power of Spinoza's geometric reasoning and deterministic logic, compatible with formal grammars and programming languages, provides a favorable framework for this purpose. In an information society characterized by an abundance of data and a diversity of perspectives, complex thinking is an essential tool for developing an ethical construct that can deal with the uncertainty and contradictions in the field. Acknowledging the natural complexity of ethics in interpersonal relationships, the use of AI techniques appears unavoidable. Artificial intelligence in KOS offers the potential for processing complex questions through the formal modeling of concepts in ethical discourse. By formalizing problems, we hope to unleash the potential of ethical analysis; by addressing complexity analysis, we propose a mechanism for understanding problems and empowering solutions.
  5. Lor, P.; Wiles, B.; Britz, J.: Re-thinking information ethics : truth, conspiracy theories, and librarians in the COVID-19 era (2021) 0.00
    0.0019676082 = product of:
      0.011805649 = sum of:
        0.011805649 = weight(_text_:in in 404) [ClassicSimilarity], result of:
          0.011805649 = score(doc=404,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19881277 = fieldWeight in 404, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=404)
      0.16666667 = coord(1/6)
    
    Abstract
    The COVID-19 pandemic is an international public health crisis without precedent in the last century. The novelty and rapid spread of the virus have added a new urgency to the availability and distribution of reliable information to help curb its fatal potential. As seasoned and trusted purveyors of reliable public information, librarians have attempted to respond to the "infodemic" of fake news, disinformation, and propaganda with a variety of strategies, but the COVID-19 pandemic presents a unique challenge because of the deadly stakes involved. The seriousness of the current situation requires that librarians and associated professionals re-evaluate the ethical basis of their approach to information provision to counter the growing prominence of conspiracy theories in the public sphere and official decision making. This paper analyzes the conspiracy mindset and specific COVID-19 conspiracy theories in discussing how libraries might address the problems of truth and untruth in ethically sound ways. As a contribution to the re-evaluation we propose, the paper presents an ethical framework based on alethic rights-or rights to truth-as conceived by Italian philosopher Franca D'Agostini and how these might inform professional approaches that support personal safety, open knowledge, and social justice.
    Footnote
    See also: Froehlich, T.: Some thoughts evoked by Peter Lor, Bradley Wiles, and Johannes Britz, "Re-thinking information ethics: truth, conspiracy theories, and librarians in the COVID-19 era". In: Libri. 71(2021) no.3, pp.219-225.
  6. Bawden, D.; Robinson, L.: "The dearest of our possessions" : applying Floridi's information privacy concept in models of information behavior and information literacy (2020) 0.00
    0.0017848461 = product of:
      0.010709076 = sum of:
        0.010709076 = weight(_text_:in in 5939) [ClassicSimilarity], result of:
          0.010709076 = score(doc=5939,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 5939, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5939)
      0.16666667 = coord(1/6)
    
    Abstract
    This conceptual article argues for the value of an approach to privacy in the digital information environment informed by Luciano Floridi's philosophy of information and information ethics. This approach involves achieving informational privacy, through the features of anonymity and obscurity, through an optimal balance of ontological frictions. This approach may be used to modify models for information behavior and for information literacy, giving them a fuller and more effective coverage of privacy issues in the infosphere. For information behavior, the Information Seeking and Communication Model and the Information Grounds conception are most appropriate for this purpose. For information literacy, the metaliteracy model, using a modification of a privacy literacy framework, is most suitable.
    Series
    Special issue: Information privacy in the digital age
  7. Martin, J.M.: Records, responsibility, and power : an overview of cataloging ethics (2021) 0.00
    0.0017848461 = product of:
      0.010709076 = sum of:
        0.010709076 = weight(_text_:in in 708) [ClassicSimilarity], result of:
          0.010709076 = score(doc=708,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 708, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=708)
      0.16666667 = coord(1/6)
    
    Abstract
    Ethics are principles which provide a framework for making decisions that best reflect a set of values. Cataloging carries power, so ethical decision-making is crucial. Because cataloging requires decision-making in areas that differ from other library work, cataloging ethics are a distinct subset of library ethics. Cataloging ethics draw on the primary values of serving the needs of users and providing access to materials. Cataloging ethics are not new, but they have received increased attention since the 1970s. Major current issues in cataloging ethics include the creation of a code of ethics; ongoing debate on the appropriate role of neutrality in cataloging misleading materials and in subject heading lists and classification schemes; how and to what degree considerations of privacy and self-determination should shape authority work; and whether or not our current cataloging codes are sufficiently user-focused.
  8. San Segundo, R.; Martínez-Ávila, D.; Frías Montoya, J.A.: Ethical issues in control by algorithms : the user is the content (2023) 0.00
    0.0017848461 = product of:
      0.010709076 = sum of:
        0.010709076 = weight(_text_:in in 1132) [ClassicSimilarity], result of:
          0.010709076 = score(doc=1132,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 1132, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1132)
      0.16666667 = coord(1/6)
    
    Abstract
    In this paper we discuss some ethical issues and challenges of the use of algorithms on the web from the perspective of knowledge organization. We review some of the problems that these algorithms and the filter bubbles pose for the users. We contextualize these issues within the user-based approaches to knowledge organization in a larger sense. We review some of the technologies that have been developed to counter these problems as well as initiatives from the knowledge organization field. We conclude with the necessity of adopting a critical and ethical stance towards the use of algorithms on the web and the need for an education in knowledge organization that addresses these issues.
  9. Rubel, A.; Castro, C.; Pham, A.: Algorithms and autonomy : the ethics of automated decision systems (2021) 0.00
    0.0016629322 = product of:
      0.009977593 = sum of:
        0.009977593 = weight(_text_:in in 671) [ClassicSimilarity], result of:
          0.009977593 = score(doc=671,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.16802745 = fieldWeight in 671, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=671)
      0.16666667 = coord(1/6)
    
    Abstract
    Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work... the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these case studies, the authors provide a better understanding of machine fairness and algorithmic transparency. They explain why interventions in algorithmic systems are necessary to ensure that algorithms are not used to control citizens' participation in politics and undercut democracy. This title is also available as Open Access on Cambridge Core
    Footnote
    Review in: JASIST 73(2022) no.10, pp.1506-1509 (Madelyn Rose Sanfilippo).
  10. Bagatini, J.A.; Chaves Guimarães, J.A.: Algorithmic discriminations and their ethical impacts on knowledge organization : a thematic domain-analysis (2023) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 1134) [ClassicSimilarity], result of:
          0.008924231 = score(doc=1134,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 1134, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1134)
      0.16666667 = coord(1/6)
    
    Abstract
    Personal data play a fundamental role in contemporary socioeconomic dynamics, and one of their primary aspects is the potential to facilitate discriminatory situations. This impacts the knowledge organization field in particular because it treats personal data as elements (facets) used to categorize persons under an economic and sometimes discriminatory perspective. The research corpus was collected from Scopus and Web of Science up to the end of 2021, using the terms "data discrimination", "algorithmic bias", "algorithmic discrimination" and "fair algorithms". The results obtained allow us to infer that the analyzed knowledge domain predominantly incorporates personal data, whether in its behavioral dimension or in the scope of so-called sensitive data. These data are susceptible to the action of algorithms of different orders, such as relevance, filtering, predictive, social ranking, content recommendation and random classification algorithms. Such algorithms can have discriminatory biases in their programming related to gender, sexual orientation, race, nationality, religion, age, social class, socioeconomic profile, physical appearance, and political positioning.
  11. Chan, M.; Daniels, J.; Furger, S.; Rasmussen, D.; Shoemaker, E.; Snow, K.: The development and future of the cataloguing code of ethics (2022) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 1149) [ClassicSimilarity], result of:
          0.008834538 = score(doc=1149,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 1149, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1149)
      0.16666667 = coord(1/6)
    
    Abstract
    The Cataloguing Code of Ethics, released in January 2021, was the product of a multi-national, multi-year endeavor by the Cataloging Ethics Steering Committee to create a useful framework for the discussion of cataloging ethics. The six Cataloging Ethics Steering Committee members, based in the United States, United Kingdom, and Canada, recount the efforts of the group and the cataloging community leading up to the release of the Code, as well as provide their thoughts on the challenges of creating the document, lessons learned, and the future of the Code.
  12. Tran, Q.-T.: Standardization and the neglect of museum objects : an infrastructure-based approach for inclusive integration of cultural artifacts (2023) 0.00
    0.0012881019 = product of:
      0.007728611 = sum of:
        0.007728611 = weight(_text_:in in 1136) [ClassicSimilarity], result of:
          0.007728611 = score(doc=1136,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 1136, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1136)
      0.16666667 = coord(1/6)
    
    Abstract
    The paper examines the integration of born-digital and digitized content into an outdated classification system within the Museum of European Cultures in Berlin. It underscores the predicament encountered by smaller to medium-sized cultural institutions as they navigate between adhering to established knowledge management systems and preserving an expanding array of contemporary cultural artifacts. The perspective of infrastructure studies is employed to scrutinize the representation of diverse viewpoints and voices within the museum's collections. The study delves into museum personnel's challenges in cataloging and classifying ethnographic objects utilizing a numerical-alphabetical categorization scheme from the 1930s. It presents an analysis of the limitations inherent in this method, along with its implications for the assimilation of emerging forms of born-digital and digitized objects. Through an exploration of the case of category 74, as observed at the Museum of European Cultures, the study illustrates the complexities of replacing pre-existing systems due to their intricate integration into the socio-technical components of the museum's information infrastructure. The paper reflects on how resource-constrained cultural institutions can take a proactive and ethical approach to knowledge management, re-evaluating their knowledge infrastructure to promote inclusion and ensure adaptability.
  13. Slota, S.C.; Fleischmann, K.R.; Greenberg, S.; Verma, N.; Cummings, B.; Li, L.; Shenefiel, C.: Locating the work of artificial intelligence ethics (2023) 0.00
    0.0012620769 = product of:
      0.0075724614 = sum of:
        0.0075724614 = weight(_text_:in in 899) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=899,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 899, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=899)
      0.16666667 = coord(1/6)
    
    Abstract
    The scale and complexity of the data and algorithms used in artificial intelligence (AI)-based systems present significant challenges for anticipating their ethical, legal, and policy implications. Given these challenges, who does the work of AI ethics, and how do they do it? This study reports findings from interviews with 26 stakeholders in AI research, law, and policy. The primary themes are that the work of AI ethics is structured by personal values and professional commitments, and that it involves situated meaning-making through data and algorithms. Given the stakes involved, it is not enough simply to be satisfied that AI will not behave unethically; rather, the work of AI ethics needs to be incentivized.
  14. Martin, K.: Predatory predictions and the ethics of predictive analytics (2023) 0.00
    0.0010517307 = product of:
      0.006310384 = sum of:
        0.006310384 = weight(_text_:in in 946) [ClassicSimilarity], result of:
          0.006310384 = score(doc=946,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 946, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=946)
      0.16666667 = coord(1/6)
    
    Abstract
    In this paper, I critically examine ethical issues introduced by predictive analytics. I argue firms can have a market incentive to construct deceptively inflated true-positive outcomes: individuals are over-categorized as requiring a penalizing treatment and the treatment leads to mistakenly thinking this label was correct. I show that differences in power between firms developing and using predictive analytics compared to subjects can lead to firms reaping the benefits of predatory predictions while subjects can bear the brunt of the costs. While profitable, the use of predatory predictions can deceive stakeholders by inflating the measurement of accuracy, diminish the individuality of subjects, and exert arbitrary power. I then argue that firms have a responsibility to distinguish between the treatment effect and predictive power of the predictive analytics program, better internalize the costs of categorizing someone as needing a penalizing treatment, and justify the predictions of subjects and general use of predictive analytics. Subjecting individuals to predatory predictions only for a firm's efficiency and benefit is unethical and an arbitrary exertion of power. Firms developing and deploying a predictive analytics program can benefit from constructing predatory predictions while the cost is borne by the less powerful subjects of the program.

Types

  • a 11
  • m 3