Human Experience and AI Regulation
What European Union Law Brings to Digital Technology Ethics
DOI: https://doi.org/10.34669/WI.WJDS/3.3.8
Keywords: European Union, AI Act, Ethics, Regulation, Artificial Intelligence
Abstract
Although nearly all artificial intelligence (AI) regulatory documents now reference the importance of human-centering digital systems, we frequently see AI ethics itself reduced to limited concerns, such as bias and, sometimes, power consumption. While their impacts on human lives and our ecosystem render both of these absolutely critical, the ethical and regulatory challenges and obligations relating to AI do not stop there. Joseph Weizenbaum described the potential abuse of intelligent systems to make inhuman cruelty and acts of war more emotionally accessible to human operators. But more than this, he highlighted the need to solve the social issues that facilitate violent acts of war, and the immense potential the use of computers offers in this context. The present article reviews how the EU's digital regulatory legislation, if well enforced, could help us address such concerns. I begin by reviewing why the EU leads in this area, considering the legitimacy of its actions both regionally and globally. I then review the legislation already protecting us, namely the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act, and consider their role in achieving Weizenbaum's goals. Finally, I consider the almost-promulgated AI Act before concluding with a brief discussion of the potential for future enforcement and more global regulatory cooperation.
License
Copyright (c) 2023 Joanna J. Bryson (Author)
This work is licensed under a Creative Commons Attribution 4.0 International License.