The Problem of Sustainable AI
A Critical Assessment of an Emerging Phenomenon
1 Introduction
Proponents of the currently in-vogue notion of sustainable Artificial Intelligence (AI) contend that AI technologies can play an important part in tackling the climate crisis. For instance, AI applications may contribute to reducing CO2 emissions or handling resources more efficiently in areas such as industry, farming, building, city planning, and transportation (Nishant et al., 2020; Cowls et al., 2023; Brevini, 2021b; Rolnick et al., 2019). Hence, for some, the objectives of sustainable AI appear quite simple: “to harness the potential of AI for understanding and combatting climate change” and to do so “in ways that are ethically sound” while minimizing “AI’s carbon footprint” (Cowls et al., 2023, p. 299). Although it is acknowledged that “AI technologies cannot solve all problems,” this does not tarnish the strong belief that “they can help to address the major challenges, both social and environmental, facing humanity today” (Cowls et al., 2021, p. 114). It seems that sustainable AI represents an inevitable step towards a climate-friendly future.
In opposition to these views, this paper questions the very conception of sustainable AI. Contrary to its perceived role as a solution, I argue that sustainable AI inadvertently perpetuates and reinforces existing structures of socio-economic exploitation and exclusion. Building on critical technology and AI studies as well as materialist thinking, I shed light on the hegemonic narratives, visions, and beliefs that ground the technological horizon of this conception. In thus adding to a critical theory of technology, the paper’s central aim is to “demystify the illusion of technical necessity” with which sustainable AI is presented and “expose the relativity of the prevailing technical choices” (Feenberg, 1999, p. 87). The pursuit of sustainable AI is not driven by benevolent intentions for resource-friendly and ethical solutions but represents a contingent choice that emerges as the technological manifestation of the current socio-economic horizon.
Laying out this argument, the paper proceeds as follows. First, I survey the concept of sustainable AI. Second, I demystify the narratives, visions, and beliefs behind the concept by sketching three epistemological dimensions to analyze AI: AI-as-technology, AI-beyond-technology, and AI-as-ideology. Grounded in science and technology studies (STS) and philosophy of technology (PhilTech), this threefold distinction lays the foundations for a critical assessment of sustainable AI and details the ways in which this conception reinforces the status quo. Third, I expand these insights and identify the techno-solutionist and economic narratives that facilitate the current enthusiasm for sustainable AI. Finally, I demonstrate how this enthusiasm inadvertently co-opts genuine climate solutions, creating the illusion of moving forward while being stuck within a system of economic, environmental, and social exploitation. The inevitable conclusion is that a climate-friendly future demands a different path.
2 Unpacking Sustainable AI
As discussed, sustainable AI can be considered an umbrella term that captures the general practices of using “artificial intelligence and machine learning […] to tackle sustainability issues and the global climate crisis” (Falk & van Wynsberghe, 2023, p. 1). For instance, according to a recent meta-study by Vinuesa et al. (2020) concerning the role of AI in achieving the Sustainable Development Goals (SDGs), AI technologies might be employed to achieve internationally agreed-upon standards of sustainability (see also Cowls et al., 2021). This means developing AI applications in a manner that facilitates environmentally, socially, and economically beneficial progress (van Wynsberghe, 2021) by, for example, integrating AI solutions into different sectors (e.g., healthcare, education, and energy) while attending to ethical considerations and minimizing negative impacts.
Notably, major tech corporations such as Google are at the forefront of this pursuit, leveraging their resources, capital, and self-proclaimed “deep legacy in research and the breakthroughs […] in AI to accelerate innovation that can tackle climate change” (Google Sustainability, n.d.). Underpinning this effort are their narratives.1 For example, Sundar Pichai, the CEO of Google’s parent company Alphabet, has boldly claimed that AI technologies currently represent “the most profound thing we’re working on as humanity,” supposedly “more profound than fire or electricity” (Thomson & Bodoni, 2020, para. 2).2
Parallel to these corporate narratives, AI researchers are voicing concerns and calling for an ethically sound and resource-friendly use of AI technologies. This includes the goal of “develop[ing] shared principles and legislation among nations and cultures […] to shape a future in which AI positively contributes to the achievement of all the SDGs” (Vinuesa et al., 2020, p. 7). This marks a division within the conception of sustainable AI, resulting in two distinct concerns. On the one hand, AI technologies are to be utilized for sustainable goals; on the other hand, their harms must be controlled (Falk & van Wynsberghe, 2023, p. 1). In her seminal paper “Sustainable AI: AI for sustainability and the sustainability of AI,” Aimee van Wynsberghe captures this dichotomy by labeling these two dimensions AI for sustainability and the sustainability of AI (van Wynsberghe, 2021).
The goal of “AI for sustainability” is to “explore the application of AI to achieve sustainability in some manner of speaking, for example, [by using] AI and machine learning (ML) to achieve the […] SDGs” (van Wynsberghe, 2021, p. 214). In these cases, AI technologies are specifically explored and used as tools to contribute to climate solutions. The spirit of AI for sustainability seems to be that “of a new marriage, between the Green of our habitats – natural, synthetic, and artificial […] – and the Blue of our digital technologies, from mobile phones to social platforms, from the Internet of Things to Big Data, from AI to future quantum computing” (Floridi & Nobre, 2020).
However, van Wynsberghe highlights that this branch “fails to account for the environmental impact from the development of AI” (van Wynsberghe, 2021, p. 214). Therefore, we also need to talk about the “sustainability of AI,” with a focus on “sustainable data sources, power supplies, and infrastructures as a way of measuring and reducing the carbon footprint from training and/or tuning an algorithm” (van Wynsberghe, 2021, p. 214). This component entirely concerns the impacts and effects of the technologies used.
In this context, van Wynsberghe notes the common understanding of sustainability, recognizing that there are “three pillars upon which the concept of sustainable development rests: economic sustainability, social sustainability, and environmental sustainability” (van Wynsberghe, 2021, p. 215). Here, she takes up the so-called Brundtland definition of sustainable development (see the World Commission on Environment and Development, 1987). She goes on to explain that any sustainable development that fulfills these three categories must aim to satisfy the needs of current societies while also preserving the ability of future generations to meet their needs. This requires, for instance, using natural capital sustainably, considering biodiversity and ecological integrity, and improving political, cultural, health, and educational systems (see also Mensah, 2019). Regarding the sustainable development of AI, this specifically highlights the tension between innovation – for example, new applications being constantly in the making – and equitable resource distribution – for example, some communities being left out of or discriminated against by these applications – as well as the tension between simultaneously serving the needs of the environment, economy, and society. In a nutshell, this means that if one wants to discuss a truly sustainable AI, none of its developments should lead to unsustainability in any of these pillars (van Wynsberghe, 2021, p. 215).3
With this important distinction between “AI for sustainability” and “the sustainability of AI,” van Wynsberghe underscores the inherent tension within the idea of using AI technologies for sustainability. Taking this observation as a point of departure, I argue that a critical assessment of the concept of sustainable AI requires us not only to question the sustainability of AI but also to examine more closely the assumptions and dominant interests that underlie the very idea of combining sustainability and AI. In the spirit of a critical philosophy of technology, I contend that it is crucial to identify the hegemonic structures that manifest within sustainable AI (e.g., Feenberg, 1999, 2022). Accordingly, the next section probes the implicit suppositions that shape current debates on sustainability and AI.
3 Three Dimensions of AI: Technology, Structure, and Ideology
Evocations of sustainable AI include certain implicit and underlying assumptions that are seldom discussed. To demystify these images, I elaborate an understanding of AI along three dimensions: (1) AI-as-technology, (2) AI-beyond-technology, and (3) AI-as-ideology.4 These three dimensions are built upon key theoretical perspectives on technology identified from the interwoven fields of STS and PhilTech. The first, AI-as-technology, represents the naïve instrumentalist point of view. According to Andrew Feenberg (1999), this perspective “assumes both the possibility of human control and the neutrality of technology” (p. 9). Accordingly, technology is perceived as a neutral tool that is unidirectionally controlled and used to achieve certain ends. Simplified, one might say that this functions within critical STS and PhilTech mainly as a reference point for criticism and nuance, delivering the critique that technology is neither a neutral tool nor a self-evident means of achieving certain ends.5 Next, AI-beyond-technology is grounded in the socio-technical perspective elaborated mainly in STS that lays bare how technology and society are always already intertwined. That is, AI is neither a neutral tool nor unidirectionally controlled by humans. Finally, AI-as-ideology relies on the broader cultural-ideological perspective elaborated within PhilTech, most notably by Feenberg, to examine how cultural values and power relations determine the fate of a technology. As these dimensions broadly build upon prevailing conceptual views within a critical theory of technology, they provide a fruitful theoretical foundation for an intervention in the discourse on sustainable AI. Specifically, they clarify how AI is usually addressed with regard to sustainability (AI-as-technology), what is often sidelined in doing so (AI-beyond-technology), and why this idea finds such traction despite the concerns detailed here (AI-as-ideology).
AI-as-technology
Current debates around sustainable AI mostly focus on the technological aspects of AI, that is, AI-as-technology (see, e.g., Floridi et al., 2018).6 Concretely, these debates center on bringing AI technologies together with sustainability, the core theme being their use as solutions to sustainability problems. This focus on AI as an instrumental technology ignores societal issues and the social embeddedness of technologies. Problems within the use of a technology are seen as fixable by design (Feng & Feenberg, 2008). This perspective is thus characterized by an excessive focus on how best to use AI technologies. The question always seems to be how to provide efficient solutions to problems of sustainability, while the political and social issues of technologies are neglected.
This naïve instrumentalist view sees technologies as a mere means to an end and understands them as supposedly neutral instruments (cf. Feenberg, 1999). Accordingly, much talk on AI is based on the assumption that this is “just” a tool that can be harnessed, with problems that can be addressed using technological fixes. As such, AI is mainly addressed as a “radical new technology” (Waelen, 2022, p. 4) that is viewed as, for instance, a program, an autonomous system, a learning algorithm, or a decision-making tool. Concrete instances of such AI technologies include ChatGPT, social media content algorithms, and facial recognition systems. Prominent examples of this narrative appear in, for instance, the work of Luciano Floridi et al. concerning “AI4People” (2018), which, among other things, assesses “the opportunities and associated risks that AI technologies offer for fostering human dignity and promoting human flourishing” (Floridi et al., 2018, p. 690). This means that the AI-as-technology dimension essentially denotes a set of technological and methodological practices and tools that are branded as AI. This technological view addresses developments in machine learning, data science, and statistics, and it focuses on the concrete localities and applications of AI systems. In other words, referring to AI-as-technology acknowledges the very specific technologies, computational methods, algorithms, and systems that can be observed in AI’s use cases. Importantly, this roughly aligns with “AI for sustainability,” which explicitly focuses on applying technologies to achieve certain goals.
AI-beyond-technology
Recent research shows that AI cannot be understood simply as a technology because it inherently involves a large socio-economic context (see, e.g., Brevini, 2021b; Crawford, 2021; Mühlhoff, 2020b). Within STS and, in particular, within actor-network theory, numerous case studies have demonstrated that the operation of any technology inherently relies upon an extensive socio-material-economic network, where the material infrastructure seamlessly intertwines with social norms and economic imperatives (see, e.g., Callon, 1984; Latour, 2007; Pinch & Bijker, 1984). In this way, the AI-beyond-technology dimension corresponds to a socio-technical perspective that is prominent in the critical works within STS emphasizing the social embeddedness of technologies at large (see, e.g., Jasanoff, 2015).
Following this theoretical tradition, it becomes clear that, like any other technology, AI is a large and complex phenomenon, an extensive network encompassing diverse socio-material elements. Viewing AI-beyond-technology therefore centers the fact that AI is not only an algorithm but a network encompassing a bouquet of methods and practices. It is not a singular technology but instead depends on large-scale, socially embedded, and manufactured datasets, on massive computing capacities provided by powerful companies in specific locations, and on human knowledge usually attached to a small group of people characterized by a unique socio-cultural background (see, e.g., Crawford, 2021; D’Ignazio & Klein, 2020; Jarrett, 2022). The AI technologies we are talking about today have grown out of and are always based on this vast network of different elements that necessarily constitute and secure their functioning.
A view of AI-beyond-technology has recently started to appear in the public discourse surrounding the environmental effects of AI (see, e.g., Dhar, 2020; Sattiraju, 2020), but it is much more prominent within critical perspectives on AI or the ethics of AI (see, e.g., Crawford, 2021; Dubber et al., 2020; Mühlhoff, 2020b). For instance, Kate Crawford (2021) recognizes that AI describes a historical field, an infrastructure, and a network of social and material relations. Here, she describes AI as the arrangement of technological methods in connection to their socio-political origin as well as their material grounding. In this regard, Crawford discusses AI as a “megamachine, a set of technological approaches that depend on industrial infrastructures, supply chains, and human labor that stretch around the globe but are kept opaque” (Crawford, 2021, p. 48). Similarly, Rainer Mühlhoff (2020b) views AI as a historical condition producing a media-culture dispositive. AI-beyond-technology demonstrates that artificial intelligence is never just a technology or tool but must always be viewed as an arrangement, a socio-material network that only exists and functions as the result of all of its parts coming together (see also Sartori & Bocca, 2023).
Notably, this dimension closely corresponds to van Wynsberghe’s concept of the sustainability of AI. The focus here extends beyond simply looking at AI as a tool and explicitly encompasses the network behind the technologies. Consequently, this perspective plays a pivotal role in the discourse on sustainable AI, as it highlights, for instance, the environmental impact of the AI industry, including its water use, resource consumption, and production of toxic e-waste, among other impacts (see, e.g., Brevini, 2021b; Hao, 2019; Strubell et al., 2019; Syed, 2023). In this light, the AI-beyond-technology perspective serves as an initial step in a critical intervention into the discourse on sustainable AI. However, despite acknowledging this viewpoint, as van Wynsberghe’s influential paper emphasizes, the pursuit of harnessing AI to achieve sustainable goals persists. Continued advancements are now coupled with constant appeals for ethically sound and resource-friendly methods,7 raising the question: Why is this trajectory so resolute? Why is sustainable AI such a desired objective? And what is the rationale behind the consistent commitment to this path? Responding to these concerns, I turn to the third dimension of AI-as-ideology.
AI-as-ideology
The basis for the persistent belief in the good use of AI can be found in the ideological discourse surrounding AI (see, e.g., Brevini, 2021a). As Feenberg (1999) has argued, technologies do not rely solely on a socio-technical network. Instead, there exists an ideological superstructure that plays a crucial role in the evolution of technologies (Feenberg, 2005) – a complete cultural horizon enriched with narratives and visions that influence the development and use of technologies (Feenberg, 1999, p. 87). This brings with it a shift in perspective that requires one to zoom out from the socio-technical (beyond-technology) to the ideological (as-ideology) perspective.8
This shift is tellingly captured by Sheila Jasanoff’s (2015) concept of “socio-technical imaginaries” (p. 4). She defines socio-technical imaginaries as “a collectively held, institutionally stabilized and publicly implemented vision of desirable futures, animated by shared views on forms of social life and social order, attainable through and supportive of advances in science and technology” (Jasanoff, 2015, p. 4). With this concept, Jasanoff emphasizes how technologies are at once grounded in the practical socio-material reality of their current applications while at the same time fueled by future visions and forward-looking projections. Every technological advancement is enveloped by both a socio-technical and ideological context, making it impossible to disentangle technology from these pervasive influences. Instead, it must always be regarded in relation to the imaginaries attached to it. The reality of any technology is invariably shaped by the image of what it should (not) become according to certain contexts and particular powerful actors (Jasanoff, 2015).9
Regarding sustainable AI, corporate narratives and images suggest that great potential lies within the use of, for instance, big data, predictive analytics, and generative AI. The contention is that if we can mitigate the negative impacts of AI developments and ensure their “ethically sound and sustainable” use, they bear the potential to be “a success that leads to a better society and a healthier planet” (Cowls et al., 2023, p. 303). This narrative is closely linked to late capitalist ideology and its reverence for the power of technology at large. In this context, the path towards sustainable AI is essentially an extension of this reverence and of the adjacent ideologies of techno-solutionism and techno-determinism (Brevini, 2021a; see also Morozov, 2013).
The techno-deterministic tale presents the future as predestined and inevitably dominated by AI applications, leaving the public, politics, and academia to discuss the challenges and opportunities of these technologies within the confines of an already predetermined set of possible futures. Here, the deterministic belief stretches between the “real” methods of machine learning and deep learning that are actually impacting societies and the “imaginary,” mythological touch that inflates the “real” effects and lures with great promises (Elish & boyd, 2018). Benedetta Brevini (2021a) calls this the always lingering and unspoken hope that “when the artificial machine arrives – in this future/present which is always inevitably imminent – it will manifest as a superior intelligence” that will “outsmart humans” and mend all of our problems (p. 155). The rationale behind the commitment to sustainable AI “always wavers between the real and the imaginary” (Elish & boyd, 2018, p. 62) – between what AI technologies can actually do and what they promise to be. This is the socio-technical and ideological imaginary of sustainable AI: While the “real” is at the core of employing sustainable AI, the “imaginary” already tends toward a rising future.
In this way, artificial intelligence becomes an ideology,10 fueling the belief that AI technologies can be “the solution to otherwise intractable social, political, and economic problems, and seem to promise efficiency, neutrality, and fairness” (Elish & boyd, 2018, p. 74). In the wake of this ideology, sustainable AI appears as the “magic tool to rescue the global capitalist system from its dramatic failures” (Brevini, 2021a, p. 149), ignoring the fact that “the real” applications, such as smart grids and AI in healthcare, are but a drop in the ocean.11 Understood as a socio-technical imaginary, sustainable AI thus becomes the ideologically charged manifestation of society’s collective perception of what ought to be regarded as desirable – ethically sound and resource-friendly AI – and what is rendered unthinkable – a world without AI technologies (Jasanoff, 2015, p. 4).
With this ideological twist, the critical debates concerning AI-beyond-technology are quickly hijacked by a techno-solutionist faith – after all, everyone can get on board with ethical and sustainable technologies. Critics and proponents alike align themselves toward the predestined future and toward AI-as-technology’s expected influences. All of this is backed by the authority of the “scientific, economic and political elites who control computer technology and claim a scientifically legitimated right to decide its future course of development” (Berman, 1992, p. 112). This is the ideological blanket that cloaks the socio-technical imaginary of sustainable AI. Under this cover, however, sustainable AI represents a manifestation of the current cultural horizon, the trends and dominant ideas of modern capitalist societies.
This means that the conception of sustainable AI contains the very clearly observable “constraints of a technical code [that] produces a concrete device that ‘fits’ a specific social context” (Feng & Feenberg, 2008, p. 117). In other words, as Feng and Feenberg have elaborated, technologies always conform to the cultural background of society. That is, all technological choices emerge out of hegemonic power relations and specific socio-cultural conditions (Feng & Feenberg, 2008). From this, it follows that the prevailing technological choices are always contingent. Technological development is never the self-evident result of a “pure” pursuit for progress, efficiency, or innovation. Instead, it is always determined by a set of historically grown cultural, economic, and political values and practices (Feenberg, 1999; Feng & Feenberg, 2008).
This can be precisely observed in the case of sustainable AI when considering the dimension of AI-as-ideology. The current conception of sustainable AI is a contingent technological choice prompted by current hegemonic power relations and socio-cultural conditions. However, the ideologically charged socio-technical imaginary of sustainable AI hides the fact that its supposed solutions always remain within the historical confines of capitalism and simply reinforce hegemonic imperatives (Berman, 1992). This introduces yet another question: What are these hegemonic imperatives and the underlying assumptions that constitute the horizon of sustainable AI?
4 Demystifying Sustainable AI
Expanding on the aforementioned insights, this section takes a step toward uncovering the dominant interests and ideologies that fuel the current drive toward sustainable AI. I continue the critical assessment that started with the AI-beyond-technology perspective and gained further momentum with the turn toward the narratives and interests manifested in AI-as-ideology. The aim now is to deconstruct these dominant interests and underscore their role in the contingent nature of the prevailing technological choices (see Feenberg, 1999). By expanding the analysis started in the previous section, I shed light on the ideology and the driving forces behind sustainable AI. This section therefore serves as the core argument of this paper: it demonstrates how the idea of sustainable AI is fundamentally rooted in an unsustainable socio-cultural horizon. In the process, I delve into the connections between AI, ideology, and capitalism, with the goal of providing a clear and concise understanding of how they interact in the quest for sustainable AI.
Techno-solutionism and the promise of (super-)intelligence
As we have seen, the current advancements of sustainable AI are fundamentally built on the ideological belief in AI’s potential to offer solutions, particularly in the context of sustainability and climate challenges. This techno-solutionist belief can be traced back to the very beginning of AI research and its now resurgent promise of (super-)intelligence. The study of AI started with the endeavor to create machines that can think or act like humans (Verdegem, 2021, p. 4). In the mid-20th century, the idea of creating an AI, according to Alan Turing (1950), involved programming a machine in such a way that it might resemble human intelligence. Later in the 1950s, the term “artificial intelligence” was coined to introduce a research initiative that would attempt “to find how to make machines use language, form abstractions, and concepts, solve kinds of problems now reserved for humans, and improve themselves” (McCarthy et al., 1955, p. 12). This marks the initial ambition to create a true AI.
However, the advancement of the field saw these original endeavors stretched and brought together with progress in related areas, including logic-based computational systems, statistics, data analysis, and database management (see, e.g., M. L. Jones, 2018; Joyce et al., 2021; Plasek, 2016). As such, the ideal of creating an intelligent machine became integrated with other fields, such as data science and statistics. With this, the advancements of AI technologies became part of the historical trajectories that have been labeled “data driven logics” (Joyce et al., 2021, p. 2; see also Joque, 2022), “datafication” (Cukier & Mayer-Schoenberger, 2013), a “fetishism for counting” (Hacking, 1982), and a “culture of prediction” (Jones, 2018, p. 674). These developments led Cukier and Mayer-Schoenberger (2013) to observe that data-driven AI technologies may broadly be viewed as “only the latest step in humanity’s quest to understand and quantify the world” (p. 34). Creating AI systems thus became entwined with the endeavor to collect ever more data and knowledge about the world.
In this regard, Campolo and Crawford (2020) have observed a faith and “discourse of an exceptional, enchanted, otherworldly and superhuman intelligence” that may provide unprecedented access to previously inaccessible knowledge as well as novel insights due to a “superhuman accuracy and performance” (pp. 9, 15). Exactly these hopes for the potential of an intelligent machine express themselves particularly explicitly in the pursuit of sustainable AI (see Brevini, 2021a as quoted above). Specifically, when using AI for climate solutions, there is an underlying tone calling for fixes that are beyond human capabilities, solutions that only a machine could provide.12 This call seemingly implies that an AI with capacities beyond those of humans – not always in the sense of a general artificial intelligence but in terms of problem-solving and organizational skills based on massive amounts of data – can save humanity from the imminent catastrophe. By virtue of being better than humans – that is, being better able to make decisions and instantiate organization – AI systems can supposedly help drive sustainable development.
The tale of techno-solutionism and the intelligent machine demonstrates how AI-as-technology swiftly becomes AI-as-ideology. What initially may have appeared to be a purely technological endeavor – and what is certainly still regarded as such (see, e.g., Sætra, 2023) – is, in fact, infused with ideological narratives and myths (see, e.g., Brevini, 2021a), with the “real” becoming the “imaginary.” Far from being an actual solution to societal problems, sustainable AI has become a mythological promise.
This points us toward the role of capitalist dynamics within the endeavor of sustainable AI. Although initially rather opaque, there is a close connection between AI, techno-solutionism, the promise of (super-)intelligence, and capitalist imperatives. This begins with the historical trajectories of “datafication” and “the culture of prediction,” which can be partially traced to early systematic advances in data accumulation. These advances brought the field of data science together with the drive for efficiency of late industrial capitalism. As Craig Robertson (2021) has highlighted, at the end of the 19th and beginning of the 20th century, accumulating and evaluating data became central for businesses. Data was used to analyze market trends and make informed decisions about investments, production, and other activities. By collecting data on consumer behavior, market conditions, and other factors, businesses aimed to predict how different policies and actions would impact economic performance. The gathering, handling, analyzing, and circulating of data became increasingly important, always with the aim of better controlling production. Gathering data from production, sales, finance, and even from weather and crop records became a way of making the most efficient economic decisions. Planning based on data became essential for running day-to-day operations, with a focus on predicting the future to increase profits. As such, capitalist production in the 20th century came to revolve around data and efficiency: “efficiency emerged as the goal not only of modern business but also of the economy and society in general” (Robertson, 2021, p. 170).13 With this quest for efficiency also came the pursuit of knowing what to produce and when to produce it. In this vein, gathering data from all available sources became a central imperative in the capitalist endeavor to plan production efficiently, with “predicting the future [increasingly seen] as the path to profit” (Robertson, 2021, p. 170).
This model of capitalist organization became tightly linked to the practices of data collection and analysis, as well as to the idea that problems can be solved through technical means alone (Robertson, 2021, pp. 169 – 173). Connected to this historical context is the neo-liberal and techno-solutionist discourse that now warrants the quest for sustainable AI (Brevini, 2023; Jonsson & Mósesdóttir, 2023). As Feenberg (2010) recognizes, “technology is not merely instrumental to specific goals but shapes a way of life”; vice versa, it is also inscribed by a socio-economic horizon, specific interests, and ideologies (p. 67; see also Goeminne, 2013). As Jasanoff’s concept of socio-technical imaginaries establishes, the technological endeavor becomes part of and reinforces a larger ideological frame: AI-as-technology blends with AI-as-ideology. The use of certain technologies, such as AI, for sustainability, is thus warranted by an inherently capitalist horizon and its interests in efficiency, control, and productivity (Feenberg, 2005, 2010). As such, one might suggest that utilizing AI for sustainable purposes does exactly that, securing the dominant socio-economic interests of neo-liberal capitalism (Feenberg, 2005, p. 52). The next section analyzes the specifics of these interests in the case of sustainable AI.
Capitalist imperatives and the myth of “good” intentions
The pursuit of sustainable AI is not driven primarily by “good” intentions, such as the desire to create sustainable and ethical solutions; instead, it is first and foremost the material and technological manifestation of a specific socio-economic horizon. That is, sustainable AI is the technical solution to problems such as climate change from a capitalist and techno-solutionist vantage point. Having mapped the ideological background of techno-solutionism, I now detail the capitalist influence on sustainable AI, which can be observed most prominently in the connection between data and capital.
AI methods and big data practices represent essentially two sides of the same coin. Without data, there is no AI, and without AI applications, there is no use for the large amounts of data collected. That is, AI technologies correspond intrinsically to the culture of datafication. Often, the belief appears to be that “the machine [e.g., the AI system] consumes and learns from vast amounts of data” and that the resulting decisions are thus “informed by objective data and free of cognition biases and emotions” (Nishant et al., 2020, p. 2). However, it must be recognized that the neutrality or pure rationality supposedly enabled by a technological system is an ideological myth (Feenberg, 2005, pp. 51 – 52). Data has never been neutral, and because data needs at least a minimum level of interpretation to be meaningful at all, it never will be (D’Ignazio & Klein, 2020). Indeed, data collection embodies a certain way of approaching, organizing, and controlling the world that is firmly rooted in historical capitalist developments (see, e.g., Sadowski, 2019).
Present data-collecting methods can partly be traced back to the early advent of capitalism. For instance, Silvia Federici has described how, already in the 16th and 17th centuries, during the processes of primitive accumulation,14 forms of generalized and systematized data collection started to gain traction, before truly taking off in the 18th and 19th centuries. At that time, data collection was part of “the introduction of demographic recording (census-taking, recording of mortality, natality, marriage rates),” one of the earliest attempts to establish new standardized modes of governance and bureaucracy (Federici, 2004, p. 84). By collecting information about people’s lives and circumstances, governments and those in power were able to identify and address potential sources of unrest, prevent the development of large-scale social movements, and maintain the status quo. As such, the accumulation of data corresponded to the accumulation of labor and was appropriated early on by economic developments and capital interests. Collecting data became linked to the early evolutions of capitalist modes of organization, privatization, and labor harnessing. This close relation between capital and data has continued into the present day, with data continuously used as a means of exploitation and control (Federici, 2004).
From this perspective, data (at least in its current usage) is deeply intertwined with capital interests and dynamics. Jathan Sadowski (2019), for instance, has described the inherent drive for ever more accumulation in the modern data economy. Emphasizing the historical underpinnings sketched here, he explains that data is not something that already exists in the world waiting to be discovered. Instead, it is created for a specific purpose through the use of technology. That is, collecting data involves selecting and measuring specific aspects of the world that are deemed relevant while disregarding others, a selection process influenced by certain goals and assumptions. Historically, these goals have been the control of populations and the organization of production for efficiency. In that way, accumulating and using data is a contested process that involves making choices about what data to collect, how to interpret it, and how to use it (Sadowski, 2019, p. 2). Therefore, sustainable AI must always be situated within this political economy of data accumulation and be understood as a powerful facilitator that constantly restructures socio-material relations.
This long-cultivated historical influence of capitalist interests within the current developments of AI technologies can more recently be observed in the influence of large corporations. Again, sustainable AI is not the result of good intentions and well-meaning individuals but instead embodies a socio-economic rationale, an idea exemplified by the fact that the most recent wave of AI “hype” started around 2012. That year, AlexNet, a deep neural network algorithm developed by a research team in Toronto, won the ImageNet Large Scale Visual Recognition Challenge (Whittaker, 2021, p. 52; see also Dotan & Milli, 2019), “a benchmark in object category classification and detection on hundreds of object categories and millions of images” (Russakovsky et al., 2015). Even though the methodologies of machine learning and data science had already existed for a while and had been well researched (see, e.g., Wang & Raj, 2017), the success of AlexNet ushered in a new era of AI hype. Although AlexNet introduced no new methods or techniques – it relied on well-known deep learning approaches – its success still marked an opening, the start of the deep learning boom, showcasing the capabilities of machine learning algorithms in conjunction with large amounts of data and the appropriate computational power (Whittaker, 2021, p. 52; see also Dotan & Milli, 2019). Suddenly, deep neural networks were picked up across various different domains (McQuillan, 2022, p. 13), with the term “AI” entering usage as a marketing symbol: “Tech companies quickly (re)branded machine learning and other data-driven approaches as AI, framing them as the product of breakthrough scientific innovation” (Whittaker, 2021, p. 52).
Thus, today’s discourses on AI predominantly refer to an idea of AI-as-technology, braced by the myths of AI-as-ideology, crucially influenced by the boom and branding of tech companies. Current conceptions of AI build on broad corporate commitments to data collection, increased processing power in the hands of these corporations, as well as machine learning and deep learning techniques in connection to specialized human skills (Elish & boyd, 2018, p. 61). That is, AI’s recent rise is the result of increasing corporate interests pushing technological advances and developing computationally intensive algorithms to expand possibilities for collecting, storing, and analyzing ever more data, valorizing this data, and increasing profits. This, of course, also applies to sustainable AI. The capital-backed contention is that AI technologies may provide novel climate solutions (see, e.g., Vinuesa et al., 2020). The ideological promises of (super-)intelligence and technological solutions forestall any further doubt about this, firmly anchoring the idea of sustainable AI within a capitalist socio-economic horizon.
5 The Problem with Sustainable AI
Returning to the three pillars of sustainability – economic, social, and environmental – and viewing them against the background provided here reveals unavoidable issues for the concept of sustainable AI. Although sustainable AI promises to provide solutions to climate problems and other sustainability challenges, it is crucial to recognize that this very conception originates out of historically cultivated capitalist imperatives, fundamentally built on an ideology of techno-solutionism. This marks a dilemma. Others have argued that a “capitalist sustainable transition appears to be a mission impossible,” especially on the grounds of “fetishizing naïve techno-solutionism” (Jonsson & Mósesdóttir, 2023, p. 243). Hence, sustainable AI may represent a dead end for sustainability. Moreover, it may even lead us further from being truly sustainable or ethically sound, as Kate Crawford (2021) has highlighted: “AI relies on many kinds of extraction: from harvesting the data made from our daily activities and expressions, to depleting natural resources, and to exploiting labor around the globe so that this vast planetary network can be built and maintained” (p. 32). All in all, in each of the three dimensions, sustainable AI falls short.
AI is built on economic exploitation
AI’s economic exploitation is exemplified by the data-capital connection and the influence of big tech on AI’s developments. The leading corporations prioritize maximizing profits and gaining market dominance while sidelining truly ethical and sustainable concerns (see, e.g., Srnicek, 2016). In this environment, data extraction and valorization are the driving imperatives of the current AI economy, which exploits and capitalizes on users (see Sadowski, 2019). Even the pipelines within which this data is processed are built on exploitation, as in the case of the underpayment and poor working conditions of data workers (Miceli et al., 2022). For instance, OpenAI’s massive success with ChatGPT was enabled by exploiting Kenyan workers, who were grossly underpaid and had to label toxic content and hate speech to build a safety system for the chatbot (Perrigo, 2023). More broadly, the economic exploitation by the AI industry manifests in socio-economic systems of oppression and discrimination: Everything from AI-based networked media to the supply chains of digital platforms and the production of hardware for AI technologies relies on exploitative labor frequently drawn from the Global South (see, e.g., Altenried, 2020; P. Jones, 2021). Socio-economically vulnerable groups are particularly exposed to these dynamics and negatively affected by the implications of data aggregation (for detailed work on this subject, see, e.g., Eubanks, 2018; Kröger et al., 2021; Mühlhoff, 2019, 2020a; Noble, 2018; O’Neil, 2016).
AI is built on environmental exploitation
Benedetta Brevini has thoroughly highlighted the negative environmental impacts of AI in her book Is AI good for the planet? (Brevini, 2021b). From energy infrastructures and the environmental costs of data centers to the mining of rare earths and other materials for the production of AI hardware, the extensive water consumption, and the disposal of toxic e-waste, AI-beyond-technology is harming the environment (Brevini, 2021b, pp. 63 – 91). Besides these direct effects, AI also entails other environmental harms. A telling example: in 2021, Jeff Bezos, founder of Amazon – one of the biggest players in AI development – traveled to space, a stunt seemingly at odds with his company’s stated aim of becoming CO2-neutral by 2040. Bezos’ rocket launch, essentially funded by the AI economy, took place at a time in which parts of the Amazon forest – one of the world’s largest CO2 sinks – were starting to emit CO2 rather than absorb it (Gatti et al., 2021). This paints an almost caricature-like picture of reality, with the new Amazon of the AI economy superseding the old Amazon forest.
AI is built on social exploitation
Beyond the socio-economic impacts, data-driven AI exploits and invades the private lives and spheres of individuals to maximize profit and accumulation. In her book The Age of Surveillance Capitalism, Shoshana Zuboff (2019) describes how Big Tech and the data imperative are affecting the lives of individuals, endangering human autonomy and dignity, and undermining democratic processes: “Individuals [become] the draconian quid pro quo at the heart of surveillance capitalism’s logic of accumulation, in which information and connection are ransomed for the lucrative […] data that fund its immense growth and profits” (p. 54). On a similar note, Thatcher et al. (2016) have described how data extraction embodies the colonization of the lifeworld and an accumulation of lived experience, with AI corporations extracting the (experiential) realities of individuals. In other words, AI drives a “commodification of parts of our lives that have never before been commodified” (Törnberg & Uitermark, 2021, p. 9).
This precludes AI – in its current form – from being described as sustainable. The advancements of AI technologies are built on economic, environmental, and social exploitation, which goes to show that the solutions offered by sustainable AI from a capitalist and techno-solutionist standpoint are not real solutions. Instead, they partake in the very problems they seek to fix. From this perspective, the concept of sustainable AI carries a problematic circularity, namely, the attempt to use an unsustainable tool to achieve sustainability. Considering this against the background established in this paper, it becomes clear that AI-as-ideology is being used to sell AI-as-technology as a salve for capitalism’s inherent problems, ignoring the dimension of beyond-technology.
Sustainable AI seemingly promises that technology can fix the current failings of societies, thereby forgetting its own problems. In other words, the AI-as-ideology narrative suggests that AI-as-technology will engender a better society and a healthier planet while disguising the exploitative character of AI-beyond-technology. It seems that the system is being asked to fix itself. Granted, AI technologies may well provide good climate solutions, such as water management systems, efficient energy infrastructures, and traffic and transportation solutions (see, e.g., Cowls et al., 2023; Nishant et al., 2020). However, as we have seen, the social, economic, and environmental costs of these supposed solutions are considerable, meaning that the issues associated with AI-beyond-technology cannot be brushed aside. Relying on systems built on exploitation to fix the climate crisis seems to be a trajectory doomed to eventually arrive at a dead end. Sustainable AI is hiding behind its ideological visions, evoking AI as “a mythical, objective omnipotence,” when, in fact, “it is backed by real-world forces of money, power, and data” with considerable environmental, economic, and social costs attached (Powles & Nissenbaum, 2018).
Fixing a broken system from within
To demystify the technical necessity of sustainable AI produced by the current capitalist horizon, I shall build on the example given by Elizabeth Kolbert in Under a White Sky (2021). Drawn from the context of environmental crises, this case serves as an analogy for sustainable AI: it demonstrates how the very idea is a techno-solutionist manifestation par excellence and thereby showcases the success of AI-as-ideology.
In her book, Kolbert writes about the absurdities of the attempts to regain control over environmental crises, introducing the example of the biodiversity threats an invasive fish species, the Asian carp, poses to the Great Lakes in the US (Kolbert, 2021). The Asian carp was introduced to the US in the 1970s to control algae growth in aquaculture ponds. However, the fish escaped into nearby waterways and have since spread rapidly, causing extensive damage to native fish populations and ecosystems. Kolbert highlights the extreme efforts made to control the carp population, including the construction of an electric fence in the Chicago River to prevent the fish from reaching the Great Lakes. The fence uses electric pulses to deter the fish from passing through, and it has so far been effective at keeping the carp from reaching Lake Michigan. However, the threat of the carp reaching the Great Lakes only arose because the flow of the Chicago River was actively changed by a massive reconstruction of the waterways carried out to meet the needs of the region’s industrial development. This process unintentionally created a way for the carp to reach the Great Lakes.
This example demonstrates the perversions of the capital-driven desire to control natural outcomes by technological means. As Kolbert makes apparent, we have reached a stage where we attempt to control the control of nature: “First you reverse a river. Then you electrify it” (Kolbert, 2021, p. 10).
As an analogy, this perfectly articulates the faith in sustainable AI. First, we exploit the planet; then, a technological (super-)intelligence will help fix it. As we have seen, AI technologies are manifestations of a socio-economic horizon under which they supposedly prevent the effects of capital-driven control fantasies from getting out of hand. For instance, plans to use AI technologies to “enable informed water planning policy and localized climate change adaptation strategies” or install “AI-powered forest monitoring-systems” (Brevini, 2021b, p. 30) are all rooted in the desire to regain control. In these cases, AI applications are supposed to re-govern processes that have gotten out of hand. After all, why monitor a forest if it had remained healthy all along? Why try to make water systems more efficient if they had been sustainable and future-oriented from the beginning? Why use AI for the SDGs if we could find political and social solutions instead? In this sense, deploying AI systems to optimize and govern climate action is the equivalent of reversing the river and then electrifying it to prevent harm.
Within this analogy, sustainable AI embodies the attempt to electrify the processes that currently destroy our planet: “If control is the problem, then, by the logic of the [Capitalocene], still more control must be the solution” (Kolbert, 2021, p. 27). Returning to Brevini, this perfectly demonstrates the ways that technology is portrayed as providing empowering solutions while, in reality, it is “naturalizing market-based solutions to every issue of governance” (Brevini, 2021b, p. 27). Here we see the power of AI-as-ideology, circling societies back to the point they are already at, creating the illusion of moving forward while being stuck in the status quo, attempting to regain control by virtue of capital-backed techno-solutionist imperatives. The hegemonic interests are clear: The system must be fixed from within, and sustainable AI is here to deliver.
6 Conclusion
This paper has demonstrated how the conception of sustainable AI as a potential solution to pressing global challenges – such as climate change – reinforces existing structures of exploitation and exclusion. Sustainable AI is the answer to these global challenges from a capitalist vantage point. In the words of Feenberg (2014), “Existing science and technology cannot transcend the capitalist world. Rather, they are destined to reproduce it by their very structure. They are inherently conservative […] because they are intrinsically adjusted to serving a social order” (p. 180). Aligning with this idea, the pursuit of sustainable AI is not driven by good intentions, such as the desire to create resource-friendly and ethical solutions, but is, first and foremost, the technological manifestation of a capitalist socio-economic horizon. The narrative that AI will save capitalism’s failures overshadows issues of social, economic, and environmental exploitation. Critical questions that pull back the curtain of AI-as-ideology and demystify the tales of technological necessity are rarely asked: Why are sustainable AI systems planned in the first place? What about other solutions? Who benefits from sustainable AI initiatives at the local, regional, and global levels? Are these technological solutions really necessary, or could there be better social and political answers?
These questions and the preceding analyses show that the field of sustainable AI rests on problematic assumptions and inadvertently hijacks genuine climate solutions. While the promises of sustainable AI carry the image of a shiny (super-)intelligence that saves humanity, this distracts from the fact that AI’s developments are largely backed by Big Tech and capital interests. This paper’s discussion has clearly articulated the nature of these hegemonic interests, recognizing that the “endgame is always to ‘fix’ AI systems, never to use a different system or no system at all” (Powles & Nissenbaum, 2018, para. 12). The idea that using an AI system might constitute the core of the problem is rarely discussed. Against this background, the ideal of harnessing “the positive and mitigat[ing] the negative impact of AI on the environment” (Cowls et al., 2023, p. 303) appears preposterous. The conception of sustainable AI embodies a dominant socio-economic way of life that functions to stabilize the status quo. To genuinely pursue actual climate solutions and true sustainability, we must seek alternative paths.
References
Altenried, M. (2020). The platform as factory: Crowdwork and the hidden labour behind artificial intelligence. Capital & Class, 44(2). https://doi.org/10.1177/0309816819899410
Althusser, L. (2010). Ideologie und ideologische Staatsapparate [Ideology and ideological state apparatuses] (F. O. Wolf, Ed.). VSA. (Original work published 1970).
Apple. (2023, September 12). 2030 Status | Mother Nature | Apple [Video]. YouTube. https://www.youtube.com/watch?v=QNv9PRDIhes
Berman, B. J. (1992). Artificial intelligence and the ideology of capitalist reconstruction. AI & SOCIETY, 6(2), 103 – 114. https://doi.org/10.1007/BF02472776
Brevini, B. (2021a). Creating the Technological Saviour: Discourses on AI in Europe and the Legitimation of Super Capitalism. In P. Verdegem (Ed.), AI for Everyone? Critical Perspectives. University of Westminster Press. https://doi.org/10.16997/book55.i
Brevini, B. (2021b). Is AI Good for the Planet? (1st ed.). Polity.
Brevini, B. (2023). Artificial Intelligence, Artificial Solutions: Placing the Climate Emergency at the Center of AI Developments. In Technology and Sustainable Development. Routledge.
Callon, M. (1984). Some elements of a sociology of translation: Domestication of the scallops and the fishermen of St Brieuc Bay. The Sociological Review, 32(S1), 196 – 233. https://doi.org/10.1111/j.1467-954X.1984.tb00113.x
Campolo, A., & Crawford, K. (2020). Enchanted Determinism: Power without Responsibility in Artificial Intelligence. Engaging Science, Technology, and Society, 6, 1 – 19. https://doi.org/10.17351/ests2020.277
Cowls, J., Tsamados, A., Taddeo, M., & Floridi, L. (2021). A definition, benchmark and database of AI for social good initiatives. Nature Machine Intelligence, 3(2), 111 – 115. https://doi.org/10.1038/s42256-021-00296-0
Cowls, J., Tsamados, A., Taddeo, M., & Floridi, L. (2023). The AI gambit: Leveraging artificial intelligence to combat climate change – Opportunities, challenges, and recommendations. AI & SOCIETY, 38(1), 283 – 307. https://doi.org/10.1007/s00146-021-01294-x
Crawford, K. (2021). Atlas of AI: The Real Worlds of Artificial Intelligence. Yale University Press.
Cukier, K., & Mayer-Schoenberger, V. (2013). The Rise of Big Data: How It’s Changing the Way We Think About the World. Foreign Affairs, 92(3), 28 – 40.
Dhar, P. (2020). The carbon impact of artificial intelligence. Nature Machine Intelligence, 2(8), Article 8. https://doi.org/10.1038/s42256-020-0219-9
D’Ignazio, C., & Klein, L. F. (2020). Data Feminism. The MIT Press. https://doi.org/10.7551/mitpress/11805.001.0001
Dotan, R., & Milli, S. (2019). Value-laden Disciplinary Shifts in Machine Learning (arXiv:1912.01172). arXiv. https://doi.org/10.48550/arXiv.1912.01172
Dubber, M. D., Pasquale, F., & Das, S. (Eds.). (2020). The Oxford Handbook of Ethics of AI. Oxford University Press.
Elish, M. C., & boyd, danah. (2018). Situating methods in the magic of Big Data and AI. Communication Monographs, 85(1), 57 – 80. https://doi.org/10.1080/03637751.2017.1375130
Elmer, C., & Metz, S. (2022, April 21). „Künstliche Intelligenz wird noch immer stark als Schlagwort benutzt“ [“Artificial intelligence is still heavily used as a buzzword”]. Wissenschaftskommunikation.de. https://www.wissenschaftskommunikation.de/kuenstliche-intelligenz-wird-noch-immer-stark-als-schlagwort-benutzt-57163/
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
Falk, S., & van Wynsberghe, A. (2023). Challenging AI for Sustainability: What ought it mean? AI and Ethics. https://doi.org/10.1007/s43681-023-00323-3
Federici, S. (2004). Caliban and the Witch: Women, the Body and Primitive Accumulation (Illustrated Edition). Autonomedia.
Feenberg, A. (1999). Questioning technology. Routledge.
Feenberg, A. (2005). Critical Theory of Technology: An Overview. Tailoring Biotechnologies, 1, 47 – 64.
Feenberg, A. (2010). Between Reason and Experience: Essays in Technology and Modernity. The MIT Press. https://doi.org/10.7551/mitpress/8221.001.0001
Feenberg, A. (2022). Critical Constructivism: An Exposition and Defense. In D. Cressman (Ed.), The Necessity of Critique: Andrew Feenberg and the Philosophy of Technology (pp. 15 – 38). Springer International Publishing. https://doi.org/10.1007/978-3-031-07877-4_2
Feng, P., & Feenberg, A. (2008). Thinking about Design: Critical Theory of Technology and the Design Process. In P. Kroes, P. E. Vermaas, A. Light, & S. A. Moore (Eds.), Philosophy and Design: From Engineering to Architecture (pp. 105 – 118). Springer Netherlands. https://doi.org/10.1007/978-1-4020-6591-0_8
Fischer, S., & Puschmann, C. (2021). Wie Deutschland über Algorithmen schreibt [How Germany writes about algorithms]. Bertelsmann Stiftung. https://doi.org/10.11586/2021003
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689 – 707. https://doi.org/10.1007/s11023-018-9482-5
Floridi, L., & Nobre, K. (2020, November 30). The Green and the Blue: How AI may be a force for good. The OECD Forum Network. http://www.oecd-forum.org/posts/the-green-and-the-blue-how-ai-may-be-a-force-for-good
Fulterer, R. (2023, November 25). Ex-OpenAI-Manager Zack Kass im Interview: KI, ChatGPT und das ewige Leben [Ex-OpenAI manager Zack Kass in interview: AI, ChatGPT, and eternal life]. Neue Zürcher Zeitung. https://www.nzz.ch/technologie/kuenstliche-intelligenz-und-open-ai-zack-kass-zu-chatgpt-klimawandel-ki-tech-ld.1765858
Gatti, L. V., Basso, L. S., Miller, J. B., Gloor, M., Gatti Domingues, L., Cassol, H. L. G., Tejada, G., Aragão, L. E. O. C., Nobre, C., Peters, W., Marani, L., Arai, E., Sanches, A. H., Corrêa, S. M., Anderson, L., Von Randow, C., Correia, C. S. C., Crispim, S. P., & Neves, R. A. L. (2021). Amazonia as a carbon source linked to deforestation and climate change. Nature, 595(7867), 388 – 393. https://doi.org/10.1038/s41586-021-03629-6
Goeminne, G. (2013). Science, Technology, and the Political: The (Im)possibility of Democratic Rationalization. Techné: Research in Philosophy and Technology, 17(1), 93 – 123. https://doi.org/10.5840/techne20131716
Google Sustainability. (n.d.). Sustainable Innovation & Technology – Google Sustainability. Retrieved August 14, 2023, from https://sustainability.google/
Hacking, I. (1982). Biopower and the avalanche of printed numbers. Humanities in Society, 5, 279 – 295.
Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), 99 – 120. https://doi.org/10.1007/s11023-020-09517-8
Hanauska, J. C., Kalwa, N., & Leßmöllmann, A. (2022, December 8). Von hilfreichen Robotern und unkontrollierbaren Maschinen. Diskurse und Narrative zu Künstlicher Intelligenz [Of helpful robots and uncontrollable machines: Discourses and narratives on artificial intelligence]. KIT-Zentrum Mensch und Technik – Forschungsblog. https://www.mensch-und-technik.kit.edu/DIskurseKI.php
Hao, K. (2019, June 6). Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review. https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
Jarrett, K. (2022). Digital Labor (1st ed.). Polity Press.
Jasanoff, S. (2015). Future Imperfect: Science, Technology, and the Imaginations of Modernity. In S. Jasanoff & S.-H. Kim (Eds.), Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power (pp. 1 – 33). University of Chicago Press. https://doi.org/10.7208/9780226276663-001
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), Article 9. https://doi.org/10.1038/s42256-019-0088-2
Jones, M. L. (2018). How We Became Instrumentalists (Again): Data Positivism since World War II. Historical Studies in the Natural Sciences, 48(5), 673 – 684. https://doi.org/10.1525/hsns.2018.48.5.673
Jones, P. (2021). Work without the worker: Labour in the age of platform capitalism. Verso.
Jonsson, I., & Mósesdóttir, L. (2023). Techno-solutionism Facing Post-liberal Oligarchy. In H. S. Sætra (Ed.), Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism. Routledge.
Joque, J. (2022). Revolutionary Mathematics: Artificial Intelligence, Statistics and the Logic of Capitalism. Verso.
Joyce, K., Smith-Doerr, L., Alegria, S., Bell, S., Cruz, T., Hoffman, S. G., Noble, S. U., & Shestakofsky, B. (2021). Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change. Socius, 7, 2378023121999581. https://doi.org/10.1177/2378023121999581
Kalwa, N., & Metz, S. (2022, April 26). Humanoide Roboter und Dystopien: Bilder von KI [Humanoid robots and dystopias: Images of AI]. Wissenschaftskommunikation.de. https://www.wissenschaftskommunikation.de/humanoide-roboter-und-dystopien-bilder-von-ki-57167/
Kolbert, E. (2021). Under a White Sky: The Nature of the Future. Crown.
Kröger, J. L., Miceli, M., & Müller, F. (2021). How Data Can Be Used Against People: A Classification of Personal Data Misuses. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3887097
Latour, B. (2007). Reassembling the Social: An Introduction to Actor-Network-Theory (1st ed.). Oxford University Press, USA.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), Article 4. https://doi.org/10.1609/aimag.v27i4.1904
McQuillan, D. (2022). Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol University Press. https://bristoluniversitypressdigital.com/view/book/9781529213522/9781529213522.xml
Mensah, J. (2019). Sustainable development: Meaning, history, principles, pillars, and implications for human action: Literature review. Cogent Social Sciences, 5(1), 1653531. https://doi.org/10.1080/23311886.2019.1653531
MEW 23. (1962). Marx, K., Das Kapital. Kritik der politischen Ökonomie. Erster Band, Buch I: Der Produktionsprozeß des Kapitals [Capital: A critique of political economy. Volume One, Book I: The process of production of capital]. Dietz Verlag.
Miceli, M., Posada, J., & Yang, T. (2022). Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power? Proceedings of the ACM on Human-Computer Interaction, 6(GROUP), 34:1 – 34:14. https://doi.org/10.1145/3492853
Morozov, E. (2013). To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems that Don’t Exist. Penguin.
Mühlhoff, R. (2019). Big Data is Watching You. Digitale Entmündigung am Beispiel von Facebook und Google [Big Data is watching you: Digital disenfranchisement through the example of Facebook and Google]. In Affekt Macht Netz: Auf dem Weg zu einer Sozialtheorie der digitalen Gesellschaft [Affect power net: Towards a social theory of the digital society] (pp. 81 – 107). Transcript. https://doi.org/10.14361/9783837644395-004
Mühlhoff, R. (2020a). Automatisierte Ungleichheit [Automated inequality]. Deutsche Zeitschrift für Philosophie, 68(6), 867 – 890. https://doi.org/10.1515/dzph-2020-0059
Mühlhoff, R. (2020b). Human-aided artificial intelligence: Or, how to run large computations in human brains? Toward a media sociology of machine learning. New Media & Society, 22(10), 1868 – 1884. https://doi.org/10.1177/1461444819885334
Nishant, R., Kennedy, M., & Corbett, J. (2020). Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management, 53, 102104. https://doi.org/10.1016/j.ijinfomgt.2020.102104
Noble, S. U. (2018). Algorithms of oppression: Data discrimination in the age of Google. New York University Press.
O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
Pinch, T. J., & Bijker, W. E. (1984). The Social Construction of Facts and Artefacts: Or How the Sociology of Science and the Sociology of Technology might Benefit Each Other. Social Studies of Science, 14(3), 399 – 441. https://doi.org/10.1177/030631284014003004
Plasek, A. (2016). On the Cruelty of Really Writing a History of Machine Learning. IEEE Annals of the History of Computing, 38(4), 6 – 8. https://doi.org/10.1109/MAHC.2016.43
Porter, T. M. (2020). The Rise of Statistical Thinking, 1820 – 1900. Princeton University Press. https://press.princeton.edu/books/paperback/9780691208428/the-rise-of-statistical-thinking-1820-1900
Powles, J., & Nissenbaum, H. (2018, December 7). The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence. OneZero. https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53
Purvis, B., Mao, Y., & Robinson, D. (2019). Three pillars of sustainability: In search of conceptual origins. Sustainability Science, 14(3), 681 – 695. https://doi.org/10.1007/s11625-018-0627-5
Redclift, M. (2005). Sustainable development (1987 – 2005): An oxymoron comes of age. Sustainable Development, 13(4), 212 – 227. https://doi.org/10.1002/sd.281
Richardson, K., Steffen, W., Lucht, W., Bendtsen, J., Cornell, S. E., Donges, J. F., Drüke, M., Fetzer, I., Bala, G., von Bloh, W., Feulner, G., Fiedler, S., Gerten, D., Gleeson, T., Hofmann, M., Huiskamp, W., Kummu, M., Mohan, C., Nogués-Bravo, D., … Rockström, J. (2023). Earth beyond six of nine planetary boundaries. Science Advances, 9(37), eadh2458. https://doi.org/10.1126/sciadv.adh2458
Robertson, C. (2021). Documents, empire, and capitalism in the nineteenth century. In A. Blair, P. Duguid, A.-S. Goeing, & A. Grafton (Eds.), Information: A Historical Companion (pp. 152 – 173). Princeton University Press.
Rolnick, D., Donti, P. L., Kaack, L. H., Kochanski, K., Lacoste, A., Sankaran, K., Ross, A. S., Milojevic-Dupont, N., Jaques, N., Waldman-Brown, A., Luccioni, A., Maharaj, T., Sherwin, E. D., Mukkavilli, S. K., Kording, K. P., Gomes, C., Ng, A. Y., Hassabis, D., Platt, J. C., … Bengio, Y. (2019). Tackling Climate Change with Machine Learning (arXiv:1906.05433). arXiv. http://arxiv.org/abs/1906.05433
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3), 211 – 252. https://doi.org/10.1007/s11263-015-0816-y
Sadowski, J. (2019). When data is capital: Datafication, accumulation, and extraction. Big Data & Society, 6(1), 2053951718820549. https://doi.org/10.1177/2053951718820549
Sætra, H. S. (2023). Introduction: The Promise and Pitfalls of Techno-solutionism. In H. S. Sætra (Ed.), Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism. Routledge.
Sartori, L., & Bocca, G. (2023). Minding the gap(s): Public perceptions of AI and socio-technical imaginaries. AI & SOCIETY, 38(2), 443 – 458. https://doi.org/10.1007/s00146-022-01422-1
Sattiraju, N. (2020, February 4). The Secret Cost of Google’s Data Centers: Billions of Gallons of Water. Time. https://time.com/5814276/google-data-centers-water/
Schuetze, P., & von Maur, I. (2022). Uncovering today’s rationalistic attunement. Phenomenology and the Cognitive Sciences, 21(3), 707 – 728. https://doi.org/10.1007/s11097-021-09728-z
Srnicek, N. (2016). Platform Capitalism (1st ed.). Polity.
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP (arXiv:1906.02243). arXiv. http://arxiv.org/abs/1906.02243
Syed, N. (2023, April 15). The Secret Water Footprint of AI Technology – The Markup. https://themarkup.org/hello-world/2023/04/15/the-secret-water-footprint-of-ai-technology
The White House. (2023, October 30). Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The White House. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
Thomson, A., & Bodoni, S. (2020, January 22). Google CEO Thinks AI Will Be More Profound Change Than Fire. Bloomberg.com. https://www.bloomberg.com/news/articles/2020-01-22/google-ceo-thinks-ai-is-more-profound-than-fire
Törnberg, P., & Uitermark, J. (2021). Tweeting ourselves to death: The cultural logic of digital capitalism. Media, Culture & Society. Advance online publication. https://doi.org/10.1177/01634437211053766
UK Department for Science, Innovation & Technology. (2023). Capabilities and risks from frontier AI. AI Safety Summit. https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper
van Rooij, I., Guest, O., Adolfi, F. G., de Haan, R., Kolokolova, A., & Rich, P. (2023, August 1). Reclaiming AI as a theoretical tool for cognitive science. PsyArXiv. https://doi.org/10.31234/osf.io/4cbuv
van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213 – 218. https://doi.org/10.1007/s43681-021-00043-6
Verdegem, P. (2021). Introduction: Why We Need Critical Perspectives on AI. In P. Verdegem (Ed.), AI for Everyone? (pp. 1 – 18). University of Westminster Press. https://www.jstor.org/stable/j.ctv26qjjhj.3
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A., Langhans, S. D., Tegmark, M., & Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), Article 1. https://doi.org/10.1038/s41467-019-14108-y
Waelen, R. (2022). Why AI Ethics Is a Critical Theory. Philosophy & Technology, 35(1), 9. https://doi.org/10.1007/s13347-022-00507-5
Wang, H., & Raj, B. (2017). On the Origin of Deep Learning (arXiv:1702.07800). arXiv. https://doi.org/10.48550/arXiv.1702.07800
Whittaker, M. (2021). The steep cost of capture. Interactions, 28(6), 51 – 55. https://doi.org/10.1145/3488666
World Commission on Environment and Development. (1987). Our Common Future. Oxford University Press. http://www.un-documents.net/our-common-future.pdf
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (1st edition). PublicAffairs.
Date received: January 2023
Date accepted: February 2024
1 With the terms “narratives,” “visions,” “ideas,” or “beliefs,” I describe the small-scale components that make up the hegemonic discourse surrounding AI, such as the corporate narratives of beneficial AI and the political visions of ethically responsible AI development. Benedetta Brevini also uses the term “myths” in this regard (Brevini, 2021a). Crucial here is that narratives, visions, and myths represent concrete instances of the construction of common sense or hegemonic conceptions around AI (as the example of Google captures). These small-scale components correspond to the “socio-technical imaginary” of AI that shapes a society’s collective vision of the relationship between AI technologies and social organization. This, in turn, is embedded in the broader ideology that I call “AI-as-ideology,” which places the imaginary within historically cultivated socio-economic conditions (see also footnotes 8 and 9).
2 For the general corporate narratives regarding technology and sustainability, see also the latest video by Apple called “Mother Nature,” a prime example of capitalist myth creation (Apple, 2023).
3 The concepts of “sustainability” and of “sustainable development” have often been criticized, for instance for being “rooted in Western colonial capitalist narratives” (Redclift as quoted in Purvis et al., 2019, p. 691; see also Redclift, 2005). While I mostly agree with this criticism, the current paper has a different focus.
4 See van Rooij et al. (2023) for similar distinctions, drawn within an understanding of AI in a different context and with a different focus.
5 For a detailed description of this observation see, for instance, the table on p. 9 in Feenberg (1999).
6 For descriptions of AI in the public discourse, see, for example, Elmer and Metz (2022), Fischer and Puschmann (2021), Hanauska et al. (2022), and Kalwa and Metz (2022). For examples of how AI is used in the academic discourse, see Floridi et al. (2018), Hagendorff (2020), Jobin et al. (2019), and Waelen (2022).
7 For examples of this, see the recent policy approaches and their focus on responsible innovation. The very framing of “frontier AI,” as used in the report for the 2023 UK AI Safety Summit, underscores the drive towards the future of AI technologies, albeit responsibly. The direction is already set (the policies are geared towards frontier AI); the possibilities need only be harnessed and the harms avoided (The White House, 2023; UK Department for Science, Innovation & Technology, 2023).
8 I adhere to a Marxist perspective in adopting the concept of “ideology,” particularly influenced by Louis Althusser. According to Althusser, ideology describes the system of ideas and conceptions that governs the mindset of a person or a social group in relation to certain real conditions. In other words, ideology serves as a mediator that shapes their understanding of these conditions (Althusser, 2010, p. 71). Different ideologies offer different lenses through which the world is made sense of. Simultaneously, a certain ideology is always connected to certain practices and institutions, and it thus has a material existence by virtue of these characteristics (Althusser, 2010, p. 80). For the concept of AI-as-ideology, this means that it describes the hegemonic and (materially) institutionalized set of ideas and conceptions that place AI technologies at the center of current societal organization.
9 The concept of “socio-technical imaginaries,” as adopted from Sheila Jasanoff’s work (Jasanoff, 2015), functions as a connecting point between the technological and the ideological. On the one hand, it encompasses the narratives, visions, and beliefs surrounding how a concrete technology should be developed. This means that the “socio-technical imaginary” specifically focuses on collective and common-sense assumptions related to a certain technology and its integration into society. On the other hand, the imaginary is tied to the broader concept of ideology that encompasses a wide set of social, political, and cultural ideas and conceptions that mark the historical and socio-economic frame of the imaginary. In this way, the socio-technical imaginary of AI connects the broader ideological frame (AI-as-ideology) to the material conditions of AI technologies (AI-beyond-technology).
10 See Berman (1992) for a detailed perspective on how AI has historically evolved into an ideology.
11 A recent comprehensive study shows that “Earth is now well outside of the safe operating space for humanity” (Richardson et al., 2023). Nonetheless, there is no “actual” change in sight.
12 The belief is not that humans cannot produce solutions to such problems. Instead, there is an excessive focus on technological solutions, a notion perfectly exemplified by a recent interview with former OpenAI manager Zack Kass (Fulterer, 2023): “If you ask Chat-GPT how we can take CO2 out of the atmosphere, the answers are already quite good. Extrapolate that and add the fact that robotics is getting cheaper, and the future is very promising!” This demonstrates how the question is not what society can do but what technological solutions exist and what AI can provide.
13 Although this is also connected to developments in science and bureaucracy, an exploration is beyond the scope of this paper. For such discussion, see, for example, Jones (2018), Porter (2020), and Schuetze and von Maur (2022).
14 This describes the early development of capitalist modes of organization with regard to the distribution of property as well as the evolution of labor (MEW 23, 1962, Chapter 23).
Copyright (c) 2024 Paul Schütze (Author)
This work is licensed under a Creative Commons Attribution 4.0 International License.