Human Experience and AI Regulation

What European Union Law Brings to Digital Technology Ethics

Authors

Joanna J. Bryson

DOI:

https://doi.org/10.34669/WI.WJDS/3.3.8

Keywords:

Artificial Intelligence, European Union, AI Act, Ethics, Regulation

Abstract

Although nearly all artificial intelligence (AI) regulatory documents now reference the importance of centering digital systems on humans, we frequently see AI ethics itself reduced to a narrow set of concerns, such as bias and, sometimes, power consumption. While their impacts on human lives and our ecosystem make both of these absolutely critical, the ethical and regulatory challenges and obligations relating to AI do not stop there. Joseph Weizenbaum described the potential abuse of intelligent systems to make inhuman cruelty and acts of war more emotionally accessible to human operators. More than this, he highlighted the need to solve the social problems that facilitate violent acts of war, and the immense potential computers offer in this context. The present article reviews how the EU’s digital regulatory legislation—if well enforced—could help us address such concerns. I begin by reviewing why the EU leads in this area, considering the legitimacy of its actions both regionally and globally. I then review the legislation already protecting us—the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act—and consider their role in achieving Weizenbaum’s goals. Finally, I consider the almost-promulgated AI Act before concluding with a brief discussion of the potential for future enforcement and broader global regulatory cooperation.

Published

31-12-2023

How to Cite

Bryson, J. J. (2023). Human Experience and AI Regulation: What European Union Law Brings to Digital Technology Ethics. Weizenbaum Journal of the Digital Society, 3(3). https://doi.org/10.34669/WI.WJDS/3.3.8