Editorial

Sustainable Artificial Intelligence – Critical and Constructive Reflections on Promises and Solutions, Amplifications and Contradictions


Now what we have to fear is that inherently human problems – for example, social and political problems – will increasingly be turned over to computers for solutions. And because computers cannot, in principle, ask value-laden questions[, t]he most important questions will never be asked.

— Joseph Weizenbaum 1

Artificial intelligence (AI) development has experienced many ups and downs since the field’s infancy in the early 1960s. Recent years, however, have seen the emergence of surprisingly powerful AI systems, enabled by an abundance of computational power and by ubiquitous data collection from areas of interest ranging from micro-social interactions to environmental data. Whereas traditional challenges in the era of “[g]ood old-fashioned AI” (Haugeland, 1985) focused rather narrowly on problems such as identifying objects in images or playing board games, the recent AI hype touts machine learning as a silver bullet for any kind of social, ecological, political, scientific, or economic problem. Notably, critical scrutiny of AI developments, especially of their potential implications for society and the environment, has not been cultivated at the same pace. This imbalance leaves ample space for less-than-reflective belief in technological progress and for “techno-solutionism” (van Wynsberghe, 2021; Luers et al., 2024; Morozov, 2013). Despite the many promising sustainability-related applications of AI (cf. Santarius and Wagner, 2023), including pollution detection (Pouyanfar et al., 2022) and earth observation (Bereta et al., 2018), scholarly and societal debate has rarely focused on the resource consumption associated with training and operating these models (Wu et al., 2022). Indeed, the computational resources needed to train AI models doubled every 3.4 months between 2012 and 2018 (OpenAI, 2018), with energy, water, and material requirements rising dramatically at the same time. The use of renewable energy for data centers remains rare (Khan et al., 2022), and where it is available, dedicating it to AI diminishes its potential for other uses.

Sustainable AI, as a field of research and academic discussion, concerns the entire life cycle of AI and the sustainability of AI’s design, data gathering, training, development, validation, re-tuning, implementation, and use (van Wynsberghe, 2021). Sustainability, in its most frequently referenced form, means “development that meets present needs without compromising the ability of future generations to meet their own needs” (Brundtland Commission, 1987). Yet any discussion of fulfilling current needs without undermining future welfare requires consideration of whose needs and whose future welfare are at stake (Thiele, 2016), which urges a review of the power structures and actors responsible for the entire AI ecosystem, including the influential claims and concrete consequences surrounding AI (Stilgoe, 2023). Furthermore, the concept of sustainability is conventionally considered to rest on three pillars: an ecological, an economic, and a social one (Purvis et al., 2019). Critical research into the sustainability of AI often focuses on the environmental dimension, addressing, for example, greenhouse gas emissions or resource use. The social aspects of AI’s sustainability have only received increased attention in the past few years, despite seven of the United Nations’ seventeen Sustainable Development Goals (SDGs) – established in 2015 as part of the “2030 Agenda for Sustainable Development” framework – concerning social domains (e.g., health, well-being, education, gender equality, work conditions, inequalities, peace, and justice).

The question of social sustainability has notably been raised in relation to AI’s global supply chains and applications. For instance, labor conditions on the mining sites and assembly lines required to produce AI hardware have been criticized (Williams et al., 2022), alongside the precarious and unhealthy workplaces – often located in the Global South – associated with data cleaning and annotation (Miceli et al., 2020; Roberts, 2019; Gillespie, 2018). Elsewhere, criticism has centered on the detrimental effects of AI on social and economic inequality and on the reinforcement of racialized and gendered forms of oppression (Noble, 2018; O’Neil, 2016). The discourse surrounding the AI market and business practices (Gorwa & Veale, 2024) and ambiguous technological outcomes (Weber, 2024) rounds out the picture. Ultimately, “AI is neither intelligent nor artificial” (Crawford, 2021, p. 69), because subjective, non-automated human decision-making and non-artificial human labor always lie at the core of AI.

Such misconceptions, and the need to call them out repeatedly (Rehak, 2021), make meaningful discussion of sustainable AI even more difficult. Mischaracterizing (sustainable) AI leads not only to superficial academic and political debates but also to the danger of placebo AI solutions that do not actually tackle real problems and may even worsen them (WBGU, 2019, p. 4). Given that AI comprises a variety of high-tech products that are often fully understood only by experts, grasping its implications is a complex task prone to the influence of powerful interests. The phenomena of so-called AI greenwashing and technofixing demonstrate how stakeholders can mislead or be misled by seemingly sustainability-oriented products and practices that hide their unsustainable features (for an overview of general greenwashing techniques, see de Freitas Netto et al., 2020) or trigger economy-wide digital rebound effects (Kunkel & Tyfield, 2021).

To inform and advance the debate around sustainability-oriented AI and the sustainability of AI itself, we have compiled this thematic issue of four research articles and two opinion pieces, offering reflections on the promises, solutions, amplifications, and contradictions created by introducing AI into sustainability-related endeavors and sustainability considerations into AI development.

The topics covered in this thematic issue reflect the fact that the discourse on sustainable AI takes place on many different levels. It addresses questions of individual behavior and education, systemic societal questions about power and techno-solutionism, concrete questions about AI training and collaboration, and questions concerning AI myths and abstract imaginaries. Each of the six papers presents recent theoretical and empirical research that addresses complex challenges and considers the current and future needs of the field.

In his paper, Paul Schütze (2024) explores the power structures and socio-economic dynamics behind the deployment of AI technologies. He emphasizes that the promises of sustainable AI rely largely on mystification and on the technological-solutionism paradigm’s fixation on efficiency gains and technological progress, ultimately reproducing the hegemonic status quo.

Sami Nenno (2024) provides an in-depth investigation of the potentials and limitations of Active Learning (AL) for decreasing the training data size of machine learning algorithms, a technique for reducing emissions. His experiments demonstrate that the savings from a smaller training set outweigh the computational costs associated with AL itself only in limited scenarios.
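
To illustrate the general mechanism at stake – not Nenno’s actual experimental setup – the following minimal Python sketch shows pool-based active learning with uncertainty sampling; the dataset, classifier, seed size, and labeling budget are hypothetical placeholders. It also makes visible where AL’s own overhead arises: every iteration re-fits the model and scores the entire unlabeled pool, which is why the net savings depend on the scenario.

# Illustrative sketch of pool-based active learning with uncertainty sampling.
# Dataset, model, and budget are hypothetical placeholders, not Nenno's setup.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=50, replace=False))  # small labeled seed set
pool = [i for i in range(len(X)) if i not in labeled]       # remaining unlabeled pool
budget, batch_size = 200, 25                                 # total labeling budget

model = LogisticRegression(max_iter=1000)
while len(labeled) < budget:
    # Each iteration re-fits the model and scores the whole pool:
    # this is the computational cost of AL itself.
    model.fit(X[labeled], y[labeled])
    probabilities = model.predict_proba(X[pool])
    uncertainty = 1 - probabilities.max(axis=1)
    # Request labels for the batch the model is least certain about.
    query = np.argsort(uncertainty)[-batch_size:]
    newly_labeled = [pool[i] for i in query]
    labeled += newly_labeled
    pool = [i for i in pool if i not in newly_labeled]

print(f"Final model trained on {len(labeled)} of {len(X)} samples")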

Stefan Ullrich and Reinhardt Messerschmidt (2024) emphasize the need for a critical AI literacy that can be substantially acquired through experience-based learning. They provide an overview of existing options that foster a reflective use and understanding of the notion of AI, concluding with a hands-on example.

Marja-Lena Hoffmann et al. (2024) quantify the net global-warming potential of an online shopping recommendation system that nudges users towards sustainable consumption decisions. Their article demonstrates how emissions can be avoided by using AI-enhanced systems to move shoppers closer to sustainable purchasing behavior.
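
As a rough framing of what such a quantification involves – with invented placeholder figures, not results from Hoffmann et al. (2024) – the net effect can be thought of as the emissions avoided through changed purchasing decisions minus the emissions induced by building and operating the recommender:

# Hypothetical back-of-the-envelope framing of a net global-warming potential.
# All figures are invented placeholders, not results from Hoffmann et al. (2024).
avoided_per_nudged_purchase_kg = 0.5     # assumed CO2e saving per changed decision
nudged_purchases_per_year = 100_000      # assumed number of effective nudges
induced_emissions_kg_per_year = 20_000   # assumed training, inference, and hosting emissions

net_avoided_kg = (avoided_per_nudged_purchase_kg * nudged_purchases_per_year
                  - induced_emissions_kg_per_year)
print(f"Net avoided emissions: {net_avoided_kg:,.0f} kg CO2e per year")
# A positive value means the system avoids more emissions than it causes.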

Hartmut Hirsch-Kreinsen and Thorben Krokowski (2024) critically discuss the promises of artificial intelligence, which appear clear and compelling but are in fact ambiguous, resting on vague metaphors, misconceptions, and exaggerations and drawing their persuasive power from the longstanding myth of the intelligent machine.

Finally, Peter Buxmann and Sara Ellenrieder (2024) argue that although rapid advances in AI have heightened anticipation of transformative automation, research should shift its focus to effective human-AI collaboration, which requires explainable system decisions and meaningful human oversight, especially in high-risk areas, as emphasized by the EU’s AI Act.

We believe there is an urgent need for a differentiated and critical-analytical view of sustainable AI – specifically of its ecological, social, and discursive issues – that moves beyond the narrative of technological solutionism and operates on an informed and empirical basis. The articles presented here push the discourse in this direction.

References

Bereta, K., Koubarakis, M., Manegold, S., Stamoulis, G., & Demir, B. (2018). From big data to big information and big knowledge: The case of earth observation data. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18) (pp. 2293–2294). Association for Computing Machinery. https://doi.org/10.1145/3269206.3274270

Brundtland Commission (1987). Our common future. In Towards sustainable development. https://un-documents.net/ocf-02.htm#I

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

de Freitas Netto, S. V., Sobral, M. F. F., Ribeiro, A. R. B., & Soares, G. R. D. L. (2020). Concepts and forms of greenwashing: A systematic review. Environmental Sciences Europe, 32, 1–12.

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Gorwa, R., & Veale, M. (2024). Moderating model marketplaces: Platform governance puzzles for AI intermediaries. Law, Innovation and Technology, 16(2). https://dx.doi.org/10.2139/ssrn.4716865

Haugeland, J. (1985). Artificial intelligence: The very idea. MIT Press.

Khan, S. A. R., Yu, Z., Umar, M., Lopes de Sousa Jabbour, A. B., & Mor, R. S. (2022). Tackling post-pandemic challenges with digital technologies: An empirical study. Journal of Enterprise Information Management, 35(1), 36–57.

Kunkel, S., & Tyfield, D. (2021). Digitalisation, sustainable industrialisation and digital rebound – Asking the right questions for a strategic research agenda. Energy Research & Social Science, 82, 102295. https://doi.org/10.1016/j.erss.2021.102295

Luers, A., Koomey, J., Masanet, E., Gaffney, O., Creutzig, F., Lavista Ferres, J., & Horvitz, E. (2024). Will AI accelerate or delay the race to net-zero emissions? Nature, 628(8009), 718–720.

Miceli, M., Schuessler, M., & Yang, T. (2020). Between subjectivity and imposition: Power dynamics in data annotation for computer vision. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1–25. https://doi.org/10.1145/3415186

Morozov, E. (2013). To save everything, click here: The folly of technological solutionism (First edition). PublicAffairs.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (First edition). Crown.

OpenAI. (2018, May 16). AI and compute. https://openai.com/blog/ai-and-compute/

Pouyanfar, N., Harofte, S. Z., Soltani, M., Siavashy, S., Asadian, E., Ghorbani-Bidkorbeh, F., … & Hussain, C. (2022). Artificial intelligence-based microfluidic platforms for the sensitive detection of environmental pollutants: Recent advances and prospects. Trends in Environmental Analytical Chemistry, 34, e00160.

Purvis, B., Mao, Y., & Robinson, D. (2019). Three pillars of sustainability: In search of conceptual origins. Sustainability Science, 14, 681–695.

Rehak, R. (2021). The language labyrinth: Constructive critique on the terminology used in the AI discourse. In P. Verdegem (Ed.), AI for Everyone? University of Westminster Press. https://doi.org/10.16997/book55.f

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

Santarius, T., & Wagner, J. (2023). Digitalization and sustainability: A systematic literature analysis of ICT for Sustainability research. GAIA-Ecological Perspectives for Science and Society, 32(1), 21–32.

Stilgoe, J. (2023). We need a Weizenbaum test for AI. Science, 381(6658). https://doi.org/10.1126/science.adk0176

Thiele, L. P. (2016). Sustainability. John Wiley & Sons.

Van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213–218.

WBGU – German Advisory Council on Global Change (2019). Towards Our Common Digital Future. Flagship Report. WBGU. https://www.wbgu.de/en/publications/publication/towards-our-common-digital-future

Weber, J. (2024). Autonomous drone swarms and the contested imaginaries of artificial intelligence. Digital War, 5(1), 146–149.

Williams, A., Miceli, M., & Gebru, T. (2022). The exploited labor behind artificial intelligence. Noema Magazine, 22.

Wu, C. J., Raghavendra, R., Gupta, U., Acun, B., Ardalani, N., Maeng, K., & Hazelwood, K. (2022). Sustainable AI: Environmental implications, challenges and opportunities. Proceedings of Machine Learning and Systems, 4, 795–813.

Funding Statement

This work was partly funded by the Federal Ministry of Education and Research of Germany (BMBF) under grant no. 16DII131.


  1. From an interview with Long, M. (1985). The turncoat of the computer revolution. New Age Journal, 5, 49–51, as quoted in Milbrath, L. W. (1989). Envisioning a sustainable society: Learning our way out. SUNY Press, p. 257.
