Making Choices Rational

The Elective Affinity of Artificial Intelligence and Organizational Decision-Making

Authors

1 Introduction

Decision-making has been the subject of discussion and research across various scientific disciplines and in society at large. In this article, we argue that the decision-making models that appear in the fields of both organizational theory and artificial intelligence are performative (Callon, 2006), shaping the understanding, modeling, and enactment of decision-making across organizations and artificial intelligence (AI) systems. We also trace how decision-making models in organizational theory and artificial intelligence originate from societal ideas and concepts that portray decisions as rational, information-processing calculations designed to achieve specific ends.

This particular understanding of decisions not only shapes these two disciplines but also converges again in the specific phenomenon of AI-based decision-making systems in organizations, a phenomenon that business, organizational, and AI researchers discuss in academic contexts and that manifests in commercial AI tools for organizational decision-making (e.g., Bader & Kaiser, 2019; Trunk et al., 2020; Pomerol & Adam, 2008; Newell & Marabelli, 2015; Jarrahi, 2018; Benbya et al., 2020; Vincent, 2021; Von Krogh, 2018; Prasanth et al., 2023). We conceptualize this interaction between these two fields as an elective affinity. In the tradition of Max Weber, we use the term “elective affinity” to describe the interaction between the phenomena of AI and organizations. Weber employs the concept of elective affinities in various writings to describe interrelationships between social phenomena. Examples include the affinity of Protestantism with modern capitalism and of lifestyle with political class. The specific usage varies significantly (Howe, 1978). We use the term to emphasize that, unlike biological kinship, elective affinity describes a relationship that is neither natural nor inevitable (McKinnon, 2010). The proximity and interaction between the phenomena of AI and organization result from decisions that could have been made differently. Nonetheless, a close bond has developed over time, a co-constitutive interrelationship. We use this term to understand how societal ideas have shaped decision-making models in organization studies and artificial intelligence at least since the mid-20th century, something we consider highly relevant given that these domains now intersect in the prevalent phenomenon that places AI at the core of organizational decision-making.

We use Herbert Simon’s work as an example of this relationship between decision-making, organizations, and AI, where decision-making is viewed as a rational choice among alternatives based on information-processing. Although Simon established the concept of bounded rationality using empirical research to study the limitations and boundaries of rational decision-making, he based his concepts on the unquestioned premise of rationality. This holds true not only for his work on organizations but also for his work on decision-making support systems for organizations (Simon, 1977).

We shed light on the specific characteristics of societal notions around decision-making by contrasting Simon’s studies of organizations and AI with other decision-making models. Comparing Simon’s normative ideas of rationality with alternative theoretical and empirical notions of decision-making from organizational theory reveals how specific and normative his understanding of rationality is, an understanding that underpins his investigations in both fields. His work leveraging the concept of bounded rationality in the context of decision support systems for organizations, meanwhile, aims to enhance that rationality.

We argue that AI represents a contingent technology – that is, it could look different – and we demonstrate why it has developed as it has and not differently. We mainly aim to contribute to two bodies of literature.

First, we refer to research on the social embeddedness of technologies (MacKenzie & Wajcman, 1999), especially AI. Our analysis provides insights that explain how notions of decision-making have been influenced by specific societal ideas in the two fields of interest and how these notions have been amplified in the convergence embodied by AI-based decision-making in organizations. We consider this to be of relevance because AI-based knowledge and outputs have consequences for the social practices they target (Esposito, 2021; Beer, 2017). This type of knowledge production inherently makes certain aspects of social life invisible (Krasmann, 2020). Additionally, the principle of machine learning that generally relies on vast amounts of aggregated data could even lead to “algorithmic isomorphism” of organizations and their practices (Endacott & Leonardi, 2024, p. 343). Furthermore, we contribute to research on how technology in general, and software as a digital technology in particular, is comprehensively shaped by social processes, decisions, and norms (Ametowobla, 2022; Amoore, 2019).

Second, we contribute to research on organizational decision-making.1 By analyzing the specific properties of Herbert Simon’s rational notion of decision-making, we illuminate a particular understanding of what decisions are and how they are made, an understanding that remains dominant in the management literature and present in organizational theory to some extent (Csaszar & Steinberger, 2021). Furthermore, our argument regarding the elective affinity between organizational theory and AI research enables an understanding of the origin of such ideas and of the interrelation between organizational research and AI research, especially AI technologies that refer to and possibly amplify such a rational notion of decision-making in organizations.

The rest of the paper is structured as follows. First, we introduce the argument regarding the performativity of societal ideas around decision-making (Section 2). Second, we trace the impact of these societal notions by analyzing the specific societal notions of decision-making captured by organizational theory and investigating how decision-making has been similarly conceptualized within AI research (Section 3). We then contrast the findings with alternative concepts of decisions in organizations (Section 4), before demonstrating how the decision-making models of both disciplines converge (academically and practically) in the phenomenon of AI-based decision-making in organizations (Section 5). Finally, we discuss the implications and limitations of our claims and propose future directions for research on this topic (Section 6).

2 The Unquestioned Premises of Rational Decision-Making

We argue that specific societal ideas about what constitutes a decision and how decisions are made influence both organizational theory and AI research. In turn, these disciplines shape conceptualizations of decision-making. Hence, these societal ideas play a performative role in shaping the actual decision-making process (Callon, 2006, 2009), a performativity that promotes an elective affinity between organizational theory and AI research around prevalent rational notions of decision-making and a performativity that ultimately converges in the phenomenon of algorithmic decision-making within organizations.

In this context, performativity captures how scientific disciplines not only describe their subject but also actively shape the reality they study (Callon, 2006, 2009). That is, “both natural and life sciences, along with social sciences, contribute toward enacting the realities that they describe” (Callon, 2006, p. 7). Scientific disciplines accomplish this by influencing the formation of fields through their ideas, models, and technologies based on their insights and assumptions (Callon, 2006, p. 25). In referencing the concept of performativity, we argue that societal ideas and concepts about decision-making flow into both organizational theory and AI research. These societal ideas conceptualize decision-making as an information-processing operation that sees choices made based on rational criteria such as efficiency and optimality. Subsequently, organizational theory and AI research performatively shape decision-making in organizations and in AI tools. Despite the parallel development of both disciplines, the underlying societal ideas about decision-making converge again in the contemporary phenomenon of AI-based decision-making systems in organizations.

We have chosen Herbert Simon’s work on organizational decision-making and AI as a prime example of the elective affinity analyzed in this article. Simon captures this relationship almost ideally. His work in the domains of both organizational decision-making and AI embodies the deep interconnection between these fields. In his seminal contributions to organizational theory, particularly “Administrative Behavior” (1997[1947]) and “Organizations” (2010), Simon explores the factors influencing organizational decision-making and develops the concept of bounded rationality.

Simon’s relevance to AI is equally significant. As a key participant in the 1956 Dartmouth Conference, where the term “artificial intelligence” was coined, Simon, alongside figures such as John McCarthy and Allen Newell, helped establish the foundations of the AI field. His pursuit of symbolic AI, particularly through developing the General Problem Solver (Newell & Simon, 1972; Dick, 2015), sought to codify the implicit rules of human thinking and decision-making. Thus, Simon’s work not only bridges the fields of organizational decision-making and AI but also illustrates how the concepts from one domain have profoundly influenced the other, making him integral to understanding the elective affinity between these disciplines.

3 Decision-Making in Organizations and Artificial Intelligence

This section examines the specific traits of rational decision-making models, using Simon’s ideas about decision-making in organizations and AI as examples. We identify six key properties of societal ideas about decision-making and compare them to alternative views to highlight the specific characteristics of Simon’s rational perspective.

For Simon, decisions are crucial to everyday life and organizations, enabling action in uncertain situations (Simon et al., 1987, p. 11). He emphasizes organizational decisions as they align various actors, activities, and interests within organizations (March & Simon, 2010). Decisions, according to Simon, are fundamental to organizations as social systems of actions: “What is a scientifically relevant description of an organization? It is a description that, so far as possible, designates for each person in the organization what decisions that person makes, and the influences to which he is subject in making each of these decisions” (Simon, 1997[1947], p. 43). Following Simon, putting decisions at the center of studying and analyzing organizations became constitutive for a large part of organization studies (Meier & Meyer, 2020). Understanding individual actions within an organization is closely linked to understanding the decision processes: “Organization behavior is a complex network of decisional processes, all pointed toward their influence upon the behaviors of the operatives” (Simon, 1997[1947], p. 305).

Simon defines a decision as choosing between alternatives (Simon, 1997[1947], p. 3). This process is intentional, rational, and deliberate (Simon, 1972, pp. 162-164). While organizational actors strive for rational decision-making, Simon acknowledges that decisions often involve compromises due to competing goals within organizations (Simon, 1997[1947], p. 5ff). He broadly envisions the decision-making process to comprise three steps: “(1) the listing of all the alternative strategies; (2) the determination of all the consequences that follow upon each of these strategies; (3) the comparative evaluation of these sets of consequences” (Simon, 1997[1947], p. 77). This perspective aligns with classical rational choice models:

Decision makers have rules by which they select a single alternative of action on the basis of its consequences for the preferences. In the most elaborated form of the model, it is assumed that all alternatives, the probability distribution of consequences conditional on each alternative, and the subjective value of each possible consequence are known. (March, 1997, p. 11)
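Expressed in our own notation (neither March’s nor Simon’s), this classical model amounts to maximizing expected subjective value over the set of alternatives A, where p(c | a) is the probability of consequence c conditional on alternative a and u(c) is the subjective value of that consequence:

    a^{*} \;=\; \underset{a \in A}{\arg\max} \; \sum_{c \in C} p(c \mid a)\, u(c)

Bounded rationality, discussed next, questions precisely the assumption that the full set of alternatives, the conditional probabilities, and the subjective values are known and computable.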

However, Simon challenged a central tenet of the traditional idea of rational decision-making by arguing that neither organizations nor humans can gather and process all the information needed for perfectly rational decisions. This insight precipitated his concept of bounded rationality, which acknowledges the limitations of decision-makers (Simon, 1972, pp. 163-164). Simon’s work focused on understanding these limitations and finding ways to navigate them, particularly through AI-based decision support systems.

Bounded rationality suggests that decisions are not optimized but are instead based on a satisficing criterion – choosing an option that achieves a satisfactory outcome rather than the best possible outcome in an ideal world. This approach explains why organizations often settle for satisfactory solutions rather than conducting exhaustive searches for the optimal choice, which can be too costly or time-consuming.
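To make the contrast concrete, the following is a minimal sketch with hypothetical names and values of our own; it is not Simon’s formalism. An optimizing rule must score every alternative, whereas a satisficing rule accepts the first alternative that meets an aspiration level.

    # Illustrative sketch of optimizing vs. satisficing selection
    # (hypothetical names and values; not Simon's own formalism).
    def optimize(alternatives, value):
        """Classical rational choice: evaluate every alternative, pick the best."""
        return max(alternatives, key=value)

    def satisfice(alternatives, value, aspiration_level):
        """Bounded rationality: accept the first alternative that is 'good enough'."""
        for option in alternatives:  # search order matters; the search is not exhaustive
            if value(option) >= aspiration_level:
                return option
        return None  # no acceptable option found

    suppliers = ["A", "B", "C", "D"]
    score = {"A": 0.62, "B": 0.71, "C": 0.93, "D": 0.85}.get

    print(optimize(suppliers, score))        # "C" - requires scoring all options
    print(satisfice(suppliers, score, 0.7))  # "B" - first option above the aspiration level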

Despite critiquing idealized rationality, Simon maintained that rationality is essential for understanding decision-making in organizations: “A theory of administration or of organization cannot exist without a theory of rational choice” (Simon, 1997[1947], p. 196). A fundamental assumption of Simon regarding organizations is that they shape members’ decisions through various mechanisms, including training, norms, and hierarchical structures. These factors are intended to enhance rationality, even though no decision can be perfectly rational (Simon, 1973b, p. 276; Besio, 2019, p. 6).

Simon’s work on AI is considered foundational, positioning him among the ‘founding fathers of AI’ (Schwarz et al., 2022; Pomerol & Adam, 2006). Importantly for this paper, that work is also closely linked to his exploration of decision-making. Within the realm of AI research, Simon contributed significantly to the subdiscipline of symbolic AI. He believed that AI could improve organizational decision-making by formalizing human behavior into rules that computers could execute. This work was driven by a desire to elucidate the implicit rules governing human decision-making and render them explicit (Simon, 1991, pp. 133-134).

Simon considered decision-making to be equivalent to problem solving. His fundamental premise is that navigating from an initial state to a desired goal state through a sequence of operations can be delineated within the so-called problem space (Newell & Simon, 1972, p. 71ff) that “specifies the kinds of objects and phenomena in the problem states, and the kinds of operators that are available for changing one problem state into another” (Simon, 1995, p. 124).

The General Problem Solver (GPS) represents an exemplary manifestation of this concept. One of the first AI programs developed by Simon and Newell, the GPS sought to translate implicit human thinking rules into explicit systems. The GPS aimed to address any problem by converting ill-structured problems into well-structured counterparts. The latter are characterized by clearly defined initial and goal states alongside transparent operations (Simon, 1973a, pp. 183-187). Simon envisioned AI as a tool for processing vast amounts of information and enhancing the rationality of decision-making in organizations. The role of AI is envisioned as furnishing organizational members with not only more information but also pertinent information. The foundational premise of the GPS finds application in Simon’s designs for organizational decision-making processes, wherein ill-structured problems faced by organizations are converted into a series of manageable well-structured problems via AI-based information systems (Simon, 1973a, p. 194f). Consequently, such systems were envisioned to help individual organizational members make more rational decisions.
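The underlying idea of a problem space can be sketched generically as a search from an initial state to a goal state via named operators. The following minimal sketch uses a generic breadth-first search over a toy problem of our own invention; it is not the GPS itself, which relied on means-ends analysis, and all names are assumptions.

    # Minimal sketch of the "problem space" idea: states, operators, and a search
    # from an initial state to a goal state. Generic breadth-first search for
    # illustration only; the actual GPS relied on means-ends analysis.
    from collections import deque

    def solve(initial_state, is_goal, operators):
        """Return a sequence of operator names leading from initial_state to a goal state."""
        frontier = deque([(initial_state, [])])
        visited = {initial_state}
        while frontier:
            state, path = frontier.popleft()
            if is_goal(state):
                return path
            for name, apply_op in operators:
                successor = apply_op(state)
                if successor not in visited:
                    visited.add(successor)
                    frontier.append((successor, path + [name]))
        return None  # no path to the goal was found in the explored problem space

    # Toy "well-structured" problem: reach 10 from 1 using two operators.
    ops = [("add_3", lambda s: s + 3), ("double", lambda s: s * 2)]
    print(solve(1, lambda s: s == 10, ops))  # ['add_3', 'add_3', 'add_3'] (1 -> 4 -> 7 -> 10)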

Simon proposed that human and AI-based information systems could function as hybrid systems, each with specialized capabilities. He believed that decision-making could be formalized into rules that could be executed by either humans or machines. This perspective recognizes both human minds and computer programs as complex information-processing systems and fashions them into the same kind of entity to make them a single object of scientific study (Dick, 2015, pp. 625-629).

In summary, Simon’s work on AI is deeply connected to his studies of decision-making. He sought to understand and improve human decision-making through AI, and his ideas remain relevant today, influencing various forms of AI, including neural networks and deep learning models.

4 Characteristics of Rational Decision-Making

Having outlined Simon’s conception of decision-making in the contexts of organizations and AI, we can now highlight the specific characteristics inherent to this rational notion of decision-making and contrast these characteristics with alternative understandings. Specifically, we consider the following six characteristics:

1) Decisions are made by individuals, including decisions in organizational contexts (IND).

2) Decisions are made in accordance with a logic of consequences through means-end calculations (CON).

3) Decisions deal with and solve organizational problems (PRB).

4) Criteria like efficiency and optimization are paramount for the decision-making process (EFF).

5) Decisions in organizations lead to organizational action (ACT).

6) Making a rational decision implies benefiting the individual decision-maker (IBE).

In Section 5, we will demonstrate how these decision-making assumptions persist in, and perhaps dominate, AI software advertising, the management literature, organization studies, and commercial AI tools for organizational decision-making.

4.1 “Decisions Are Made By Individuals” vs. “Decisions Are The Emergent Result Of Organizational Processes” (Burgelman)

Simon posits that decisions are made by individuals. It is the individual who selects the best option among available choices, even within organizations (Simon, 1997[1947], p. 43). Organizational decision-making, according to Simon, remains fundamentally individual decision-making, albeit with the organization providing additional information to individual decision-makers and linking their decision-making processes to enhance overall rationality (Simon, 1973b, pp. 271-272). This means that organizational decision-making remains rooted in individual decision-making processes within the organization, rather than decisions stemming from processes beyond the individual.

In contrast, the work of Burgelman (1991) describes organizational processes evolving from organizational rules and structures. Formal organizational decisions may merely make concrete certain internal changes driven by emergent organizational processes. Burgelman illustrates this with the transformation of Intel, where changes in strategy occurred long before formal decisions were made by management, driven by internal processes and rules (Burgelman, 1991, pp. 240-242, 251-252). Simon’s framework does not account for such emergent organizational processes; instead, he assumes that all organizational changes stem from explicit decision-making by individuals within the organization. This perspective reflects the Western societal orientation toward esteeming individual actors (Meyer & Jepperson, 2000).

4.2 “Decisions Are Based On A Logic Of Consequences” vs. “Decisions Are Based On A Logic Of Appropriateness” (March)

For Simon, decisions are guided by a “logic of consequences” (March, 1997, p. 10), wherein choices are made through means-end calculations of anticipated outcomes. The option with the most positive or least negative consequences is typically selected (Burgelman, 1991, pp. 240-242, 251-252). The relative importance of criteria may vary based on individual preferences or organizational norms, as already discussed.

Contrary to this perspective, decision-making in organizations can involve various types of rationalities. While means-end calculations represent one approach, an alternative is the “logic of appropriateness” (March, 1997, p. 10), which sees decisions made based on identifying appropriate courses of action. This identification relies on the decision-maker’s role, interpretation of the situation, and assessment of what constitutes an appropriate decision in that context:

Actual decisions in organizations, as in individuals, seem often to involve finding appropriate rules to follow. The logic of appropriateness differs from the logic of consequence. Rather than evaluating alternatives in terms of the values of their consequences, it matches situations and identities. (March, 1997, p. 17)

Considering Max Weber’s categorization of different types of rationalities – ends-based, value-based, traditional, and affective (Kalberg, 1980, pp. 1148-1150) – we observe that Herbert Simon implicitly refers primarily to ends-based rationality, which focuses on selecting choices according to anticipated consequences.

4.3 “Decisions Solve Problems” vs. “Decisions Depend On Choice Opportunities” (Cohen/March/Olsen)

For Simon, decisions serve as a means of addressing organizational problems (Newell & Simon, 1972, pp. 71-75). Problem-solving is the “work of choosing issues that require attention, setting goals, finding or designing suitable courses of action, and evaluating and choosing among alternative actions” (Simon et al., 1987, p. 11). A conceptualization that challenges this perspective on decision-making as problem-oriented is the Garbage Can Model of decision-making developed by Michael Cohen, James March, and Johan Olsen (1972). This model disrupts conventional notions of decision-making by suggesting that decisions do not necessarily follow a linear or orderly trajectory. Instead, decisions often emerge chaotically, driven by various factors (Cohen et al., 1972, pp. 16-17).

The Garbage Can Model considers organizations to be “organized anarchies” (Cohen et al., 1972, p. 1) characterized by problems, solutions, and participants being “thrown” into choice opportunities, much as into a garbage can. Decisions arise from this mixture of problems, solutions, and participants. Instead of resulting from deliberate processes, decisions emerge unpredictably from organizational dynamics. Decision opportunities may arise and dissipate unpredictably, influenced by changing contexts and the shifting attention of organizational participants (Cohen et al., 1972, p. 3).

The three streams of problems, solutions, and participants do not always and systematically meet but instead frequently converge when a choice opportunity arises. These opportunities are not processed systematically. Instead, they are processed chaotically by the various perceived problems, solutions, and competing decision-makers in the choice arena. In this sense, decisions in organizations may be independent of existing problems. In contrast, Simon posits that organizational decision-making represents an intentional and conscious problem-solving process.

4.4 “Efficiency And Optimization As Criteria For The Decision-Making Process” vs. “Criteria As Legitimization Of Decisions Afterwards” (Luhmann)

Efficient information processing and optimized decision outcomes are integral to Simon’s notion of decision-making. These criteria largely define a rational decision-making process in accordance with the logic of consequences. However, viewing rationality from the perspective of Luhmann’s system theory suggests that rationality can be understood as a retrospective construction of the organization rather than a decision-making mode. For Luhmann, organizations are a specific type of social system constituted by the basic operation of communication, with decisions serving as a particular form of communication used to differentiate the organizational system from its environment (Luhmann, 2018, pp. 34-36).

From this perspective, decisions become paradoxical because “decisions can be communicated only if the possibilities that have been rejected are communicated along with them, for otherwise they would not be understood as decisions at all” (Luhmann, 2018, p. 42). This inherent challenge of a decision is addressed by providing reasons for why a particular choice was made, contrasting those reasons with those associated with all abandoned choices. In Luhmann’s system theory, rationality enters the picture as legitimization for decisions after they are made. Efficiency or rational logic does not necessarily inform the decision-making process but rather serves as justification for decisions. This legitimization becomes necessary to defend a specific choice in the face of numerous alternatives. In contrast, Simon sees efficiency and optimization as means of enhancing rationality during the decision-making process.

4.5 “Decisions Lead To Action” vs. “Decisions Can Hinder Actions” (Brunsson)

Simon posits that decisions occur in situations where individuals or organizations need to take action. Although he distinguishes between decision-making and actual action, he does not doubt the direct connection between decisions and actions (Simon, 1997[1947], p. 1). Nils Brunsson’s (1989) research on organizational decision-making challenges this notion by demonstrating that actions are not always linked to prior decisions. In fact, existing organizational ideologies often lead directly to actions.

Brunsson introduces the distinction between decision rationality and action rationality (Brunsson, 1989, p. 27). Decision rationality aligns with traditional notions of rational decision-making, but Brunsson argues that this approach may hinder organizational action. Simply making a decision does not guarantee action; members must also be motivated to act, which is influenced by their expectations and commitment (Brunsson, 1989, p. 19).

In contrast, action rationality focuses on achieving action rather than perfect decision-making. Brunsson suggests that irrationality can be functional if it leads to action, and decisions can be simplified to facilitate action. For example, limiting decisions to two choices – one preferred by decision-makers and one clearly inferior – can increase commitment to the preferred option (Brunsson, 1989, p. 17). Additionally, Brunsson proposes using organizational ideology – a set of ideas and values shared between members – to guide actions, especially in unfamiliar situations. Organizations with a strong ideology may not require extensive decision-making processes; instead, actions are dictated by the ideology (Brunsson, 1989, p. 16). As Brunsson’s work explains,

A powerful organizational ideology cuts down the need for making decisions. It is often obvious what action should be taken. The ideology chooses the action, and no other choice process is needed. If a decision is nonetheless taken, its purpose is to reinforce the willingness to act rather than to reach a choice between possible alternatives. (Brunsson, 1989, p. 17)

In contrast to this perspective, Simon does not challenge the assumption of decisions and action being tightly coupled.

4.6 “Rationality Is About Individual Benefits” vs. “Relational Ideas Of Rationality” (Qin/Nordin, Deleuze/Guattari)

According to Simon’s rational concept of organizational decision-making, rationality primarily centers on the individual benefits derived from calculated means-end analysis. However, alternative perspectives challenge this view, particularly within post-modernist and post-colonial critiques of Western rationality. Yaqing Qin and Astrid Nordin (2019) compare the basic assumptions of the academic literature on international relations, highlighting the assumption of rationality prevalent in Western societies, an assumption that treats individuals as the basic unit with clear interests and goals. These individuals utilize means-end calculations to make decisions based on the logic of consequences, prioritizing their individual interests (Qin & Nordin, 2019, p. 602).

This analysis prompts Qin and Nordin to present an alternative notion of decision-making rationality that emphasizes relational dynamics:

An actor makes judgments and decisions according to his or her relationships to specific – and often significant – others and the relational context in which these relationships are embedded. In any social setting, what action an actor is to take depends very much on their relationships with significant others and their relations with the relational context in which they are embedded. In short, relations select. (Qin & Nordin, 2019, p. 607)

While Qin and Nordin’s depiction of relational rationality does not reject means-end calculations, it underscores the central role of relationships in decision-making, shifting the focus away from individual benefits.

Another departure from means-ends analysis appears in the ideas of Gilles Deleuze and Félix Guattari. Deleuze and Guattari critique Western thought for presuming linearity and dualities such as cause-effect or means-ends. Instead, they propose the concept of a rhizome to eschew these dualisms (Deleuze & Guattari, 1987). Furthermore, for Deleuze and Guattari, problem-solving involves an affective dimension, challenging the notion that the mind is the sole pathway to action and problem-solving (Deleuze & Guattari, 1987, p. 399). In contrast to these perspectives, Simon assumes that individuals make decisions in organizations based on their individual preferences.

4.7 Summary

Simon’s notion of decision-making in organizations relies on specific yet common (normative) assumptions, characterized by (1) a focus on individuals as decision-makers, (2) a logic of consequences to evaluate alternative choices, (3) a connection between decisions and problem-solving, (4) the adoption of criteria such as efficiency and optimization in the decision-making process, (5) the coupling of decisions with actions that follow, and (6) a focus on the individual benefit of decision-makers. Although prevalent in Western societies, these notions are not necessarily empirically grounded in the ways that decisions are made in organizations, even if they do still shape perceptions of organizational functioning. This leads to the performativity argument, which applies similarly to AI models, as demonstrated in the next section using various empirical findings.

5 Automated Decision-Making in Organizations

We hypothesize that the specific rational notions of decision-making in Simon’s work, as identified in both the fields of organizational studies and AI, converge in the phenomenon of AI-based automated decision-making in organizations. We suggest that these particular ideas about rationality remain remarkably stable in society, continuing to influence discussions around the use of AI in organizations, despite the significant shift in the technical foundations of AI from symbolic to sub-symbolic approaches. We employ a three-step approach to investigate this phenomenon. First, we illustrate the narrative properties of advertisements for contemporary AI systems for organizational decision-making. Second, we provide an overview of the current state of research on AI in organizational decision-making in the context of management literature and organizational studies. Finally, we detail the empirical findings derived from our own research.2 We discuss these different sources in terms of the presented properties of a rational notion of decision-making and highlight the consequences this convergence might imply. Interestingly, the specific ideas about rational decision-making can be traced in all instances. Even more importantly, they do not vary between different strands of AI systems. Simon himself was committed to symbolic AI, meaning that he intended to describe the explicit rules of decision-making. Today’s AI systems are mainly sub-symbolic, meaning that the decision-making rules are not expressed explicitly. Such systems are based on machine learning, neural network designs, or both. Nonetheless, this divergent architecture does not come with a change in intended purpose or (ascribed) rationality.

5.1 AI Tools in Advertisements

The notion that AI systems can enhance the rationality of organizational decision-making processes is widely embraced within the realm of AI software products (IND).3 A prime example of this is Peak AI, a company founded in 2015 that specializes in AI-based solutions, catering to various domains, including customer intelligence, inventory intelligence, and pricing intelligence. In their marketing materials, Peak AI underscores the inherent irrationality of human organizational members and the limitations posed by cognitive biases:

Instead of relying on machines, humans analyzed data to decide everything from what customers to target, to which marketing campaigns were too risky, to how much a new product launch would cost. The issue with leaving every decision to a human is that we are… well, humans! Our emotions creep in, we get stressed, and our cognitive biases (there are over 180 of them) guide our decisions as much as the datasets do. Dr. Jim Taylor, a psychology expert at the University of San Francisco, says that these cognitive biases are simply bad for business. (Peak AI, 2021)

In contrast to the inevitable biases inherent in human decision-makers, AI systems are described as offering a solution by enabling organizations to effectively manage and analyze vast amounts of data to their benefit (IBE). This sentiment, which echoes an assessment Simon made over 50 years ago, runs through Peak AI’s marketing materials:

Data – when it’s collected and analyzed correctly – can give decision-makers deep, unparalleled insight into every part of a business. The problem modern companies face is that they’re drowning in data. Humans can’t keep up. And with so much data being collected, it’s not surprising that processes like spreadsheets and databases just don’t cut it anymore. (Peak AI, 2021)

The specific notion that individual decision-makers in organizations confront various types of information beyond immediate human comprehension is also highlighted by Quantexa in promotional materials for their “intelligence platform.” The British company serves diverse industries (e.g., banking, insurance, telecommunications, and government services) and offers a platform for managing and synthesizing data, identifying patterns, visualizing insights, and providing actionable recommendations. Quantexa underscores the challenges faced by leaders when making decisions in scenarios where complete information is unavailable:

Sometimes a decision needs to be made when all the information isn’t immediately to hand. The gray areas which leaders have to deal with can be stressful, and cause them to make a judgment call that is ill-informed. With AI, these gaps in the data can be finitely assessed and simulated, providing clarity on what the best course of action might be. (Quantexa n.d.)

This example illustrates how decision-making in organizations is narrowly linked to the capacity to navigate uncertainty and take decisive action (ACT). By leveraging AI to assess and simulate gaps in data, Quantexa promises to provide clarity and facilitate informed decision-making, particularly in situations where uncertainties abound. This promise encompasses the idea that decisions in organizations happen in the context of incomplete information but still need to lead to action.

Simon’s perspective on individual decision-making within organizations also manifests in the AI systems promoted for decision-making, as exemplified by Mäd AI, a company founded in 2016 with a focus on website and software design and user interfaces, offering a chatbot based on a large language model designed to automate processes for clients. Mäd AI emphasizes the role of individual decision-makers in organizations:

In the business world, decision-making is often a high-stakes task reserved chiefly for authorized decision-makers. While this varies for each organization, these are typically key stakeholders like CEOs, managers, or project and team leaders. Still, it is impossible to guarantee the success of a business decision, regardless of who was responsible for it. (Mäd AI n.d.)

Such narratives describing AI systems for decision-making promise to enhance the rationality of decisions made by individual decision-makers, claiming that AI tools “identify trends, analyze patterns, and make predictions, leading to better decision-making and ultimately improving business outcomes” (Mäd AI n.d.). This emphasis on individual decision-makers underscores the importance of individual actors in organizational decision-making processes (IBE), positioning AI systems as tools to augment the decision-making capabilities of these individuals rather than supplanting their role in the decision-making process (IND).

5.2 AI Tools in Management Literature

When considering the literature from management disciplines concerning organizational decision-making that incorporates AI, the prevalent notion of rational choices as the foundation for the decision-making process becomes apparent. For instance, Trunk et al. (2020) clearly articulate this rational-choice-based approach, distinguishing between decisions under risk, where consequences are known and calculable, and decisions under uncertainty, where they are not (Trunk et al., 2020, p. 876). This decision-making framework is grounded in the logic of consequences (CON) and guided by criteria such as efficiency and optimization (EFF), which we have discussed as one of the properties of Simon’s understanding, namely, that decisions are based on a computational logic of consequences:

[S]trategic decision-making belongs to the category of decisions under uncertainty. To make the best decision, each alternative is assigned a probability and utility level, and the alternative with the highest weighted value is chosen […] Probability levels are estimates, characterized by coherence, conditionalization, and convergence. (Trunk et al., 2020, p. 881)

According to the authors and their literature review, AI holds the potential to optimize organizational decision-making processes by enhancing the gathering and processing of relevant information, thereby rendering these processes more rational and addressing the concept of bounded rationality as conceived by Simon. This is perceived as a means of amplifying what is implicitly understood as a hallmark of a successful organization, an organization capable of making sound decisions. This idea is deeply embedded in the work of Prasanth et al. (2023, p. 965):

The integration of artificial intelligence in business decision-making has the potential to revolutionize how organizations operate and strategize. By enhancing efficiency, accuracy, and innovation, AI empowers businesses to harness the power of data and make informed decisions in a dynamic and competitive landscape.

Organizations are depicted as systems necessitating decision-making capabilities, with efficient data handling for decision-making deemed essential, if not constitutive, for their success; this data-handling process, in turn, is expected to be effective and optimized (EFF). The work of Sayyadi (2024) exemplifies this thinking, describing a study based on the unquestioned premise that more data and data-processing capabilities are advantageous for decision-making processes (Sayyadi, 2024, p. 5). This clearly expresses the idea that decision-making is based on a calculation of consequences (CON), which implies that AI can be useful by allowing organizations to push beyond the limitations of bounded rationality. The notion that there are aspects of organizational decision-making processes where AI capabilities surpass those of humans can be traced back to Simon’s work. Jarrahi (2018) reinforces this idea, suggesting that AI will replace humans in certain organizational processes due to its ability to handle vast amounts of data, thus simplifying complex problem domains (Jarrahi, 2018, p. 5). This assertion is intertwined with the promise that such an approach enables organizations to make decisions that are more rational than comparable approaches devoid of AI. Furthermore, organizations are perceived as systems in which information processing occurs, regardless of the hardware facilitating this processing. This perspective underscores the role of AI as a tool for enhancing decision-making within organizational systems.

5.3 AI Tools in Organization Studies

Turning to the current organization studies literature on AI, organizations, and organizational decision-making, we can identify three themes: research focused on expert systems based on symbolic AI; studies that presume the rationality of decision-making and therefore see AI as advantageous in that regard;4 and studies that highlight the actual organizational practices of implementing AI tools and the ingrained notions of rationality in these projects.

The first identified literature theme concerns AI-based expert systems. These studies convey the insight that expert systems based on symbolic AI – the paradigm underlying Simon’s own work – follow ideas and notions of decision-making similar to those that apply to the sub-symbolic AI models that dominate contemporary AI applications and AI studies. Studies from this first theme conclude that expert systems model a specific social field or domain (Malsch et al., 1996; Rammert, 2000), but such attempts face the challenge of trying to formalize the implicit rules and knowledge of these fields. This is considered the major limitation of decision-making processes as modeled in expert systems (Rammert, 2000) and very much aligns with Simon’s own understanding of AI models for organizational decision-making. Expert systems were also considered information-processing systems that promised, at least in principle, to enable the efficient (EFF) solving of any kind of information-based problem (PRB) (Dostal, 1993, p. 66f). However, practical expert systems in organizations also face problems and challenges in terms of actually influencing organizational practices in the manner intended (Rammert et al., 1998). For example, using such systems in medical contexts did not see AI-based expert decisions leading to immediate organizational action (ACT) (Rammert et al., 1998, p. 100).

The second theme concerns AI based on the sub-symbolic AI paradigm. Here, we find very similar ideas on how AI is coupled with rationality and decision-making. The literature encapsulating this theme revolves around the idea that AI is useful for organizations because this technology can amplify rationality. A special case is the work of Csaszar and Steinberger (2021), who argue that organizational theory has imported concepts from the field of AI research. Their observation resembles our claim in this article, but they draw different conclusions, conceptually arguing that organizations ought to be understood as intelligent systems, an iteration of organizational theory’s long history of importing concepts from AI research (Csaszar & Steinberger, 2021, p. 3).5 An example of such imports is the idea of the heuristic problem space that organizations need to navigate to find solutions to organizational problems (PRB), an explicit interest of Simon (Csaszar & Steinberger, 2021, p. 11). They also include ideas from sub-symbolic AI, such as modeling decision processes in a manner analogous to reinforcement learning (Csaszar & Steinberger, 2021, p. 17) and Bayesian networks (Csaszar & Steinberger, 2021, p. 28).

Another branch of research identified under this theme focuses specifically on how AI can be used to increase the efficiency of bureaucratic processes. Newman et al. (2022) specifically link AI to public administrations and posit that this technology has the potential to make them more efficient and formalized (EFF) (Newman et al., 2022, p. 5). Bullock (2019) and Bullock et al. (2020) come to the same conclusion in the context of case studies about public health insurance and policing, respectively. Elsewhere, Van Rijmenam and Logue (2020) claim that “in practice, AI agents change the nature of organi[z]ational design, decision-making, strategy, knowledge production and learning, power[,] and governance” (Van Rijmenam & Logue, 2020, p. 137). They make this claim on the basis of an imagined future in which AI is truly sentient (CON, PRB, EFF, ACT) (p. 131).6 We presume that these inherent notions of decision-making have consequences, as already discussed. In their literature review, Bankins et al. (2024) conclude that AI – and algorithms in general – are perceived to be more objective in decision-making and not subject to the possibly harmful intentionality that apparently compromises human decision-makers (Bankins et al., 2024, pp. 165-166). This closely resembles the findings of the literature review conducted by Lee et al. (2023). As such, these studies support our claim that a certain body of literature within organization studies understands decision-making as an inherently rational organizational process that is hindered by human limitations. We connect these studies to the principle that a decision is something that is made by processing data and information according to a logic of consequences (CON).

The third theme encompasses articles that highlight the presumed notions of rationality in projects aiming to implement AI-based decision-making in organizations. These studies attribute the effects of AI implementation to the actual organizational practices within which AI is embedded. However, they also emphasize that rational notions of decision-making shape empirical projects of AI implementation, at least in terms of expectations during the process. Hermann and Pfeiffer (2023) explore the expectations surrounding AI implementation for predictive maintenance in car manufacturing. Actors in this project assume that individual workers decide how to proceed with AI-based notifications (IND), which typically denote a problem or maintenance suggestion that must be addressed (PRB) (Hermann & Pfeiffer, 2023, p. 1531). Furthermore, project leaders assume that individual responsibility will lead to swift action following a decision. However, the practices empirically observed by the authors reveal that a need for coordination often delays actual action (ACT) (Hermann & Pfeiffer, 2023, p. 1532). Bader and Kaiser (2019) report similar framings of AI in their case study on call centers. The expectation is that call center agents rely on algorithmic predictions that process information such as customer habits and previous purchases, information not available to the agents themselves (Bader & Kaiser, 2019, p. 661). This demonstrates an assumed logic of consequences, calculation, and information processing (CON). Additionally, there exists an expectation that call center agents will act upon the algorithm’s suggestions (ACT) (Bader & Kaiser, 2019, pp. 662-665). In a study of the public employment service in Sweden, Berman et al. (2024) highlight the prevalent belief that AI usage increases efficiency in practice (EFF) and that caseworkers make decisions individually (IND) based on calculations of individual needs (CON) (Berman et al., 2024, pp. 4–8). Elsewhere, Jørgensen and Nissen (2022) evaluate the introduction of AI decision tools in Denmark to calculate the risk of child abuse, an implementation based on the assumption that AI tools will make assessments more efficiently (EFF) and that they will process more information (CON) (Jørgensen & Nissen, 2022, pp. 5-6). Parmiggiani et al. (2024) also observe increased efficiency to be a foundational assumption shaping the implementation of a chatbot for welfare services (Parmiggiani et al., 2024, p. 98).

5.4 AI Tools in Practice

The final stage of the current research is an empirical observation of the implementation of an AI system within an organization.7 As in the marketing of AI systems and the prevailing research in the fields of management and organizational theory, specific notions of rationality emerged in this case, notions strongly aligned with the ideas discussed in Simon’s work.

The empirical case study considers a health insurance company in the process of implementing an AI system to automate reimbursement claims from insured persons and doctors’ offices. Clerks decide whether such a claim falls within the insurance policy and is to be reimbursed. The use of AI here is two-fold: optical character recognition (OCR) identifies relevant information on the claims and invoices, including the names of the insured and treated persons, the medical practitioner, and the treatment. Then, this information is automatically checked for consistency based on the contractual agreement between the health insurance company and a doctors’ association. Currently, the automated checking of information by AI is only implemented sporadically, meaning clerks still check the identified information themselves and make decisions based on their knowledge and expertise.
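Schematically, the second stage can be read as a rule-based consistency check over the fields extracted by the OCR stage. The following sketch is purely illustrative: field names, treatment codes, and rules are hypothetical and do not reflect the company’s actual system.

    # Purely illustrative sketch of the two-stage set-up described above
    # (hypothetical field names, codes, and rules; not the company's actual system).
    def check_claim(extracted):
        """Stage 2: check OCR-extracted claim fields against simplified contractual rules."""
        issues = []
        required = ["insured_person", "practitioner", "treatment_code", "amount"]
        for field in required:
            if not extracted.get(field):
                issues.append(f"missing field: {field}")
        # Example rule: the treatment code must appear in the agreed catalogue.
        covered_treatments = {"T100", "T205", "T311"}
        if extracted.get("treatment_code") not in covered_treatments:
            issues.append("treatment not covered by the contractual agreement")
        return issues  # an empty list would allow automated processing; otherwise a clerk reviews

    # Stage 1 (OCR) would populate this dictionary from the scanned claim or invoice.
    claim = {"insured_person": "Jane Doe", "practitioner": "Dr. Example",
             "treatment_code": "T999", "amount": "84.50"}
    print(check_claim(claim))  # ['treatment not covered by the contractual agreement']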

We find that different actors adhere to and promote notions of organizational decision-making processes that are thought to be characterized by efficiency (EFF). AI is considered a technology that can increase the efficiency of organizational processes. One aspect of this increase in efficiency concerns the time that it takes to onboard new employees. Instead of having clerks learn the contractual agreement between the health insurance company and medical associations over months, the new employee interacts with the AI system and needs much less time to reach decisions about granting or rejecting reimbursement claims:

Essentially, I can have new employees fully operational within a week, particularly in processing reimbursement claims. Because the AI system handles information verification together with human supervision, employees no longer require the same level of expertise to make accurate decisions as they did before. (Interview, Team Leader of clerks at Insurance Agency)

This quote from the team leader of a group of clerks highlights the relationship between the AI system and a decision-making process that is considered more efficient.

Furthermore, a decision to accept or reject a reimbursement claim is treated as an outcome that should lead to organizational action (ACT). Clerks need to initiate the refunding process as quickly as possible because the health insurance company suffers from an overload of reimbursement claims, which delays the processing of such claims:

They used to have to verify a lot of information manually. With the introduction of our AI system, many of these checks are now automated. Clerks enter the information, and they only need to review it if the software flags a problem. In other words, the process is simplified. They don’t need to know as much or check as much information; they can simply initiate the refunds. (Interview, Team Leader of clerks at Insurance Agency)

A third notion of rationality that this situation captures is the idea that it is the individual clerk who makes a decision in the organizational context of reimbursement claims in the health insurance company (IND). Furthermore, although there is a contractual agreement representing the foundation of the whole process, the work of clerks is seen as a decision based on a judgment about whether a specific case concerns medical treatment that patients are entitled to or not. Such cases are also treated as problem-solving. This is because, in some instances, the clerks are confronted with inconsistent or missing information:

Sometimes, handling reimbursement claims feels like solving a complex problem. When insured individuals fail to submit all necessary information and doctors’ offices use alternative descriptions and abbreviations in their documentation, it can be challenging to know where to begin. (Interview, Clerk at Insurance Agency)

This excerpt highlights how clerks must sometimes resolve issues, which can entail requesting additional information, checking and updating databases, and making judgment calls.

5.5 Summary and Discussion

The societal ideas about decision-making that formed the basis of Simon’s work remain present today. This is particularly true for the discourse on and the practical use of AI tools in organizations. We have shown how these ideas remain ingrained in many contemporary descriptions and advertisements of AI tools, as well as in the management literature on AI-based decision-making in organizations and in organization studies research. We have also demonstrated the impact of these ideas through a case study considering the implementation of AI in a health insurance organization. These ideas include individual decision-making based on information processing and the logic of consequences based on evaluation criteria such as efficiency and optimization. The convergence of AI and organizational decisions amplifies these aspects, reinforcing the view of decisions as rational processes aimed at addressing perceived problems and achieving desired outcomes.

This convergence matters because the idea of a rationalist approach to organizational decision-making is performative in the sense adopted from Callon (2006, 2009). Automated decision-making tools for organizations based on a logic of consequences promote a specific understanding of how decisions are to be understood and can shape organizational members and their decision-making practices accordingly. We have already highlighted in the previous sections how organizational decisions can be understood in very different lights and how a purely rationalist approach strips dimensions of sociality and politicality from organizations. Additionally, such an unquestioned premise does not facilitate critical questioning of what kind of knowledge such probabilistic and positivistic models can actually generate. This aligns with research focused on the epistemological limitations of algorithmic outputs, such as the “logic of the surface” (Krasmann, 2020, p. 2098), or research studying the turn from statistical predictions to algorithmic predictions as individualized, opaque, and performative “divinatory procedures” (Esposito et al., 2024, p. 14).

Our argument is not, and cannot be, that the prevailing emphases (e.g., individual decisions over organizational decisions, logic of consequence over logic of appropriateness) are inherently wrong and that the opposite should always be chosen. Rather, the argument is that the alternative perspectives are often overlooked or not even considered. In such cases, decisions are no longer made actively; instead, specific approaches are taken for granted, and alternatives are not recognized. As such, the practical solution is not to automatically choose the opposite side of these alternatives (e.g., organizational decisions instead of individual decisions, logic of appropriateness instead of logic of consequence). Instead, it is necessary to consider the situation, goals, and organizational context to determine which forms of decision-making are most appropriate and should be implemented.

This presents a particular challenge, especially for tools and software based on sub-symbolic AI. Whereas the kind of symbolic AI developed by Simon formulates rules explicitly, sub-symbolic AI does not. Consequently, making specific interventions or ensuring traceability is significantly more challenging.

The findings presented also suggest that there is no difference between symbolic AI and sub-symbolic AI in terms of the framing of rationality that the specific models are expected to guarantee and increase, even though they are based on very different technological foundations and concepts. Regardless of whether we consider expert systems or neural networks, the notions of rationality that we have highlighted in Simon’s work are observable in the narratives analyzed in marketing materials, the literature from management studies and organization studies, and our own empirical findings concerning the implementation of AI at a health insurance company.

6 Conclusion

In this article, we have argued that decision-making models appearing in organizational theory and computer science significantly influence the actual decision-making of AI models and organizations. In turn, these models are shaped by a set of societal ideas about decision-making. We understand this connection as an elective affinity – that is, a social rather than natural interaction and proximity – between organizational theory and AI research. Drawing on Callon’s notion of performativity (2006, 2009), we posit that these disciplines not only shape their respective fields but converge again in the context of AI-based decision-making within organizations.

We analyze these societal notions and identify a rational-choice-based idea of decision-making, characterized by information processing and the logic of consequences. Herbert Simon’s work serves as an example of this nexus, illustrating six key assumptions: (1) Decisions are made by individuals, including in organizational contexts; (2) Decisions follow a logic of consequences through means-end calculations; (3) Decisions address and solve organizational problems; (4) Efficiency and optimization are crucial criteria; (5) Decisions prompt organizational action; and (6) Rational decisions benefit the individual decision-maker. These assumptions are often taken for granted and rarely reflected upon in the discourse on AI-based decision-making tools.

Some organizations rely on algorithmic decision-making, embodying these assumptions and serving as near-ideal-type examples of the described convergence. Examples include ride-hailing or food delivery services, where algorithmic systems determine routes and assign tasks in line with the characteristics we identified in Simon’s work. Our argument intersects with studies analyzing domination and emancipation tendencies in workplace digitalization projects (Meyer et al., 2019), positioning AI as a focal technology.

By examining the narratives appearing in advertisements for AI decision-making tools, surveying the state of research in the management and organization studies literature, and considering an empirical example of AI decision-making at a health insurance company, we propose that AI-based decision-making amplifies this rational notion of decision-making as information processing in organizations. Building on studies showing how digital technologies are shaped by social processes (Ametowobla, 2022; Amoore, 2019; Beer, 2017), we have shown how ideas of rationality are constitutive of the development and interpretation of this technology.

Our analysis offers an explanation that can help us understand why AI-based decision-making models look the way they do. Interestingly, our findings indicate that these ideas about rational decision-making have not changed substantially over time, despite significant changes to the underlying technology. The idea that AI serves to amplify a specific notion of rationality is ingrained both in the paradigm of symbolic AI associated with Simon’s work and in the sub-symbolic AI prevalent in today’s AI software. Furthermore, AI as a specific technology is heavily influenced by additional feedback loops arising from learning processes that use training data. Further empirical research is needed to understand the impact of AI-based technologies on organizations, their members, and their practices. In particular, examining organizations not initially designed for AI-based decision-making may reveal dynamics that challenge existing rationales and taken-for-granted assumptions. We advocate for further empirical research building on our analysis of Simon’s work and our illustrative examples from advertising materials for AI tools and the extant management literature. Such research should inquire into the consequences of introducing AI-based decision-making systems, with their specifically rational decision-making models, into organizations.

References

Ametowobla, D. (2022). Zur Soziologie der Software. Springer. https://doi.org/10.1007/978-3-658-37256-9

Amoore, L. (2019). Doubt and the algorithm: On the partial accounts of machine learning. Theory, Culture & Society, 36 (6), 147–169. https://doi.org/10.1177/0263276419851846

Andersen, E. S., Dysvik, A., & Vaagaasar, A. (2009). Organizational rationality and project management. International Journal of Managing Projects in Business, 2 (4), 479–498. https://doi.org/10.1108/17538370910991106

Anthony, C. (2021). When knowledge work and analytical technologies collide: The practices and consequences of black boxing algorithmic technologies. Administrative Science Quarterly, 66 (4), 1173–1212. https://doi.org/10.1177/00018392211016755

Avgerou, C., & McGrath, K. (2007). Power, rationality, and the art of living through socio-technical change. MIS Quarterly, 31 (2), 295–315. https://doi.org/10.2307/25148792

Bader, V., & Kaiser, S. (2019). Algorithmic decision-making? The user interface and its role for human involvement in decisions supported by artificial intelligence. Organization, 26 (5), 655–672. https://doi.org/10.1177/1350508419855714

Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., & Woo, S. E. (2024). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 45 (2), 159–182. https://doi.org/10.1002/job.2735

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20 (1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147

Benbya, H., Davenport, T., & Pachidi, S. (2020). Artificial intelligence in organizations: Current state and future opportunities. MIS Quarterly Executive, 19 (4).

Berman, A., de Fine Licht, K. I., & Carlsson, V. (2024). Trustworthy AI in the public sector: An empirical analysis of a Swedish labor market decision-support system. Technology in Society, 76, 1–15. https://doi.org/10.1016/j.techsoc.2024.102471

Besio, C. (2019). Entscheidungstheorien. In M. Apelt et al. (Eds.), Handbuch Organisationssoziologie (pp. 1–19). Springer. https://doi.org/10.1007/978-3-658-15953-5_7-1

Bullock, J. B. (2019). Artificial intelligence, discretion, and bureaucracy. The American Review of Public Administration, 49 (7), 751–761. https://doi.org/10.1177/0275074019856123

Bullock, J. B., Young, M. M., & Wang, Y. (2020). Artificial intelligence, bureaucratic form, and discretion in public service. Information Polity, 25, 491–506.

Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3 (1), 1–12. https://doi.org/10.1177/2053951715622512

Burgelman, R. A. (1991). Intraorganizational ecology of strategy making and organizational adaptation: Theory and field research. Organization Science, 2 (3), 239–262.

Brunsson, N. (1989). The Organization of Hypocrisy: Talk, Decisions, and Actions in Organizations. John Wiley & Sons Inc.

Callon, M. (2006). What does it mean to say that economics is performative? CSI Working Paper Series 005. Centre de Sociologie de l’Innovation (CSI), Mines ParisTech. https://shs.hal.science/halshs-00091596

Callon, M. (2009). Elaborating the notion of performativity. Le Libellio d’AEGIS, 5 (1), 18–29.

Cabantous, L., Gond, J.-P., & Johnson-Cramer, M. (2010). Decision theory as practice: Crafting rationality in organizations. Organization Studies, 31 (11), 1531–1566. https://doi.org/10.1177/0170840610380804

Cabantous, L., & Gond, J.-P. (2011). Rational decision-making as performative praxis: Explaining rationality’s éternel retour. Organization Science, 22 (3), 573–586.

Cohen, M., March, J., & Olsen, J. (1972). A garbage can model of organizational choice. Administrative Science Quarterly, 17 (1), 1–25.

Christin, A. (2017). Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society, 4 (2), 1–14. https://doi.org/10.1177/2053951717718855

Csaszar, F., & Steinberger, T. (2021). Organizations as artificial intelligences: The use of artificial intelligence analogies in organization theory. Academy of Management Annals. https://ssrn.com/abstract=3834459

Dabbous, A., Aoun Barakat, K., & Merhej Sayegh, M. (2022). Enabling organizational use of artificial intelligence: An employee perspective. Journal of Asia Business Studies, 16(2), 245–266. https://doi.org/10.1108/JABS-09-2020-0372

Deleuze, G., & Guattari, F. (1987). A Thousand Plateaus: Capitalism and Schizophrenia. Bloomsbury Academic.

Dick, S. (2015). Of models and machines: Implementing bounded rationality. Isis, 106(3), 623–634. https://doi.org/10.1086/683527

Dostal, W. (1993). Expertensysteme und Beschäftigung. Mitteilungen aus der Arbeitsmarkt- und Berufsforschung, 26 (1), 63–77.

Endacott, C. G., & Leonardi, P. M. (2024). Chapter 19: Artificial intelligence as a mechanism of algorithmic isomorphism. In I. Constantiou et al. (Eds.), Research Handbook on Artificial Intelligence and Decision Making in Organizations (pp. 342–358). Edward Elgar Publishing. https://doi.org/10.4337/9781803926216.00029

Esposito, E. (2021). Transparency versus explanation: The role of ambiguity in legal AI. Journal of Cross-Disciplinary Research in Computational Law, 1 (2). Retrieved from https://journalcrcl.org/crcl/article/view/10

Esposito, E., Hofmann, D., & Coloni, C. (2024). Can a predicted future still be an open future? Algorithmic forecasts and actionability in precision medicine. History and Theory, 63, 4–24. https://doi.org/10.1111/hith.12327

Faulconbridge, J. R., Sarwar, A., & Spring, M. (2024). Accommodating machine learning algorithms in professional service firms. Organization Studies, 45 (7). https://doi.org/10.1177/01708406241252930

Gualdi, F., & Cordella, A. (2024). Chapter 15: Artificial intelligence to support public sector decision-making: the emergence of entangled accountability. In I. Constantiou et al. (Eds.), Research Handbook on Artificial Intelligence and Decision Making in Organizations (pp. 266–281). Edward Elgar Publishing. https://doi.org/10.4337/9781803926216.00024

Haesevoets, T., De Cremer, D., Dierckx, K., & Van Hiel, A. (2021). Human-machine collaboration in managerial decision making. Computers in Human Behavior, 19, 3–11. https://doi.org/10.1016/j.chb.2021.106730

Herrmann, T., & Pfeiffer, S. (2023). Keeping the organization in the loop: A socio-technical extension of human-centered artificial intelligence. AI & Society, 38, 1523–1542. https://doi.org/10.1007/s00146-022-01391-5

Howe, R. H. (1978). Max Weber’s elective affinities: Sociology within the bounds of pure reason. American Journal of Sociology, 84 (2), 366–385.

Jarrahi, M. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61 (4), 577–586. https://doi.org/10.1016/j.bushor.2018.03.007

Jørgensen, A. M., & Nissen, M. A. (2022). Making sense of decision support systems: Rationales, translations and potentials for critical reflections on the reality of child protection. Big Data & Society, 9 (2). https://doi.org/10.1177/20539517221125163

Kalberg, S. (1980). Max Weber’s types of rationality: Cornerstones for the analysis of rationalization processes in history. American Journal of Sociology, 85, 1145–1179. https://doi.org/10.1086/227128

Krasmann, S. (2020). The logic of the surface: On the epistemology of algorithms in times of big data. Information, Communication & Society, 23 (14), 2096–2109. https://doi.org/10.1080/1369118X.2020.1726986

Lee, M., Scheepers, H., Lui, A., & Ngai, E. (2023). The implementation of artificial intelligence in organizations: A systematic literature review. Information & Management, 60 (5), 1–19. https://doi.org/10.1016/j.im.2023.103816

Luhmann, N. (2018). Organization and Decision. Cambridge University Press. https://doi.org/10.1017/9781108560672

MacKenzie D. A., & Wajcman J. (1999). The social shaping of technology (2nd ed.). Open University Press.

Malsch, T., Florian, M., Jonas, M., & Schulz-Schaeffer, I. (1996). Expeditionen ins Grenzgebiet zwischen Soziologie und Künstlicher Intelligenz. Künstliche Intelligenz, 2 (96), 1–12.

March, J. (1997). Understanding how decisions happen in organizations. In Z. Shapira (ed.), Organizational decision making (pp. 9–32). Cambridge University Press.

March, J., & Simon, H. (2010). Organizations (2nd ed.). Blackwell.

Mäd AI (n.d.). Using AI for intelligent decision-making. Retrieved September 15, 2023, from https://www.mad.co/en/insights/using-ai-for-intelligent-decision-making

McKinnon, A. M. (2010). Elective affinities of the Protestant ethic: Weber and the chemistry of capitalism. Sociological Theory, 28 (1), 108–126.

Meier, F., & Meyer, U. (2020). Organisationen und heterogene Umwelten: Zum Umgang mit Fragen institutioneller Pluralität. In R. Hasse & A. Krüger (Eds.), Neo-Institutionalismus: Kritik und Weiterentwicklung eines sozialwissenschaftlichen Forschungsprogramms (pp. 75–100). transcript. https://doi.org/10.14361/9783839443026-005

Meyer, J., & Jepperson, R. (2000). The “actors” of modern society: The cultural construction of social agency. Sociological Theory, 18 (1), 100–120. https://doi.org/10.1111/0735-2751.00090

Meyer, U., Schaupp, S., & Seibt, D. (Eds.). (2019). Digitalization in industry: Between domination and emancipation. Palgrave Macmillan.

Newell, A., & Simon, H. (1972). Human problem solving. Prentice Hall.

Newell, S., & Marabelli, M. (2015). Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of “datification.” The Journal of Strategic Information Systems, 24 (1), 3–14. https://doi.org/10.1016/j.jsis.2015.02.001

Newman, J., Mintrom, M., & O’Neill, D. (2022). Digital technologies, artificial intelligence, and bureaucratic transformation. Futures, 136, 1–11. https://doi.org/10.1016/j.futures.2021.102886

Parmiggiani, E., Vassilakopoulou, P., & Pappas, I. (2024). Chapter 5: Reconfiguring human-AI collaboration: integrating chatbots in welfare services. In I. Constantiou et al. (Eds.), Research Handbook on Artificial Intelligence and Decision Making in Organizations. Edward Elgar Publishing. https://doi.org/10.4337/9781803926216.00013

Peak AI (2021, August 23). AI decision-making. The future of business intelligence. PeakAI. Retrieved September 15, 2023, from https://peak.ai/hub/blog/ai-decision-making-the-future-of-business-intelligence/

Pomerol, J., & Adam, F. (2006). On the legacy of Herbert Simon and his contribution to decision-making support systems and artificial intelligence. In J. Gupta, G. Forgionne, & M. Mora (Eds.), Intelligent decision-making support systems. Decision Engineering (pp. 25–43). Springer. https://doi.org/10.1007/1-84628-231-4_2

Pomerol, J., & Adam, F. (2008). Understanding human decision making: A fundamental step towards effective intelligent decision support. In G. Phillips-Wren et al. (Eds.), Intelligent decision making: An AI-based approach (Vol. 97, pp. 3–40). Springer. https://doi.org/10.1007/978-3-540-76829-6_1

Prasanth, A., Vadakkan, D. J., Surendran, P., & Thomas, B. (2023). Role of artificial intelligence and business decision making. International Journal of Advanced Computer Science and Applications (IJACSA), 14 (6), 965–969. http://dx.doi.org/10.14569/IJACSA.2023.01406103

Qin, Y., & Nordin, A. (2019). Relationality and rationality in Confucian and Western traditions of thought. Cambridge Review of International Affairs, 32 (5), 601–614. https://doi.org/10.1080/09557571

Quantexa (n.d.). The role of AI in decision making: A business leader’s guide. Retrieved September 15, 2023, from https://www.quantexa.com/education/the-role-of-ai-in-decison-making/

Rammert, W., Schlese, M., Wagner, G., Wehner, J., & Weingarten, R. (1998). Wissensmaschinen: Soziale Konstruktion eines technischen Mediums. Das Beispiel Expertensysteme. Campus.

Rammert, W. (2000). Nicht-explizites Wissen in Soziologie und Sozionik: ein kursorischer Überblick. (TUTS - Working Papers, 8-2000). Berlin: Technische Universität Berlin, Fak. VI Planen, Bauen, Umwelt, Institut für Soziologie Fachgebiet Techniksoziologie. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-10555

Rudko, I., Bonab, A., & Bellini, F. (2021). Organizational structure and artificial intelligence. Modeling the intraorganizational response to the AI contingency. Journal of Theoretical and Applied Electronic Commerce Research, 16 (6), 2341–2364. https://doi.org/10.3390/jtaer16060129

Sayyadi, M. (2024). How to improve data quality to empower business decision-making process and business strategy agility in the AI age. Business Information Review, 41 (3). https://doi.org/10.1177/02663821241264705

Schwarz, G., Christensen, T., & Zhu, X. (2022). Bounded rationality, satisficing, artificial intelligence, and decision-making in public organizations: The contributions of Herbert Simon. Public Administration Review, 82, 902–904. https://doi.org/10.1111/puar.13540

Simon, H. (1972). Theories of bounded rationality. In C. B. McGuire & R. Radner (Eds.), Decision and Organization (pp. 161–176). Elsevier.

Simon, H. (1973a). The structure of ill structured problems. Artificial Intelligence, 4 (3–4), 181–201.

Simon, H. (1973b). Applying information technology to organization design. Public Administration Review, 33 (3), 268–278. https://doi.org/10.2307/974804

Simon, H. (1977). Artificial intelligence systems that understand. In Proceedings of the 5th International Joint Conference on Artificial Intelligence – Volume 2 (IJCAI’77) (pp. 1059–1073). Morgan Kaufmann.

Simon, H. (1991). Artificial intelligence: Where has it been, and where is it going? IEEE Transactions on Knowledge and Data Engineering, 3 (2), 128–136.

Simon, H. (1997 [1947]). Administrative behavior: A study of decision-making processes in administrative organizations (4th ed.). Free Press.

Simon, H., Dantzig, G., Hogarth, R., Plott, C., Raiffa, H., Schelling, T., Shepsle, K., Thaler, R., Tversky, A., & Winter, S. (1987). Decision making and problem solving. Interfaces, 17 (5), 11–31. http://dx.doi.org/10.1287/inte.17.5.11

Trunk, A., Birkel, H., & Hartmann, E. (2020). On the current state of combining human and artificial intelligence for strategic organizational decision making. Business Research, 13, 875–919. https://doi.org/10.1007/s40685-020-00133-x

van Rijmenam, M., & Logue, D. (2020). Revising the “science of the organization”: Theorising AI agency and actorhood. Innovation, 23 (1), 127–144. https://doi.org/10.1080/14479338.2020.1816833

Vincent, V. (2021). Integrating intuition and artificial intelligence in organizational decision-making. Business Horizons, 64 (4), 425–438. https://doi.org/10.1016/j.bushor.2021.02.008

Von Krogh, G. (2018). Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Academy of Management Discoveries, 4 (4), 404–409. https://doi.org/10.5465/amd.2018.0084

Date received: February 2024

Date accepted: August 2024


  1. Multiple studies stress that the effects of the implementation of AI cannot be explained by considering only the AI system itself; instead, these effects result from how AI is embedded within organizational practices by organization members (Rudko et al., 2021; Dabbous et al., 2022; Christin, 2017; Haesevoets et al., 2021; Faulconbridge et al., 2024; Anthony, 2021; Gualdi & Cordella, 2024). Expectations that depict AI systems as a specific kind of rational machine remain prevalent in such empirical projects, as we demonstrate later in this article.

  2. These findings derive from the ongoing Ph.D. project of René Werner, in which he uses a qualitative case study approach to understand how health insurance organizations use machine-learning systems to automate the processing of reimbursement claims from insured persons and how this implementation changes the organizational practices around such claims. The idea of the AI application is to process all the relevant information from a claim (i.e., insured person, attending physician, medical procedure, and cost) in order to decide automatically whether the claim is an event covered by the insurance. Using qualitative interviews and short ethnographic observations, he studies how the developing AI system and the organizational practices change each other.

  3. In this section, we indicate the previously emphasized six properties of the rationalist notion of decision-making by indexing the corresponding abbreviations in parentheses. This approach should make our argument easier to follow.

  4. Rational choice perspectives have not been the dominant paradigm in organization studies. However, when it comes to software design and change management in general, rational choice perspectives remain widely (and often implicitly) employed by both academic researchers (Avgerou & McGrath, 2007; Cabantous et al., 2010) and practitioners (Andersen et al., 2009; Cabantous & Gond, 2011).

  5. Although we share their observation about the parallelism of ideas in organizational theory and AI research, we do not share their normative conclusion that organizations need to be understood as rational, intelligent systems (akin to common conceptualizations of AI systems).

  6. We agree with these authors that a specific idea of rationality ingrained in AI models has effects and impacts organizations. However, we do not think that their imagined future for AI is either plausible or necessary for this claim. Our main argument is that AI systems performatively and currently impact organizations via their specific ideas about what constitutes a decision.

  7. These findings are based on René Werner’s Ph.D. project.

Date published: 18-10-2024
