Though the integration of AI into all aspects of human life is cast as inexorable, critical questions remain about what, if any, choices exist for individuals and collective humanity. The public dispute between the US Department of War and Anthropic over restrictions on use of the company’s technology serves as a stark reminder of the consequential stakes involved in delineating human–AI relationships. Business leaders and learners facing (usually less lethal but still important) decisions about AI use may be unaware of the available options or their varying impacts.
To explore these issues, Western University faculty, researchers, students, and alumni gathered at an April 2 conference on “Agency and AI.” The Human-AI Relationships Working Group at Western’s Rotman Institute of Philosophy hosted the event, setting the tone for a deep exploration of the question “How do you think about ‘agency’ in the context of human–AI relationships?”
Through seven “lightning” talks, scholars in disciplines ranging from philosophy to software engineering presented diverse perspectives on the question. Paul Arnold, research specialist at the Rotman Institute of Philosophy, organized the talks into two broad topics: “Conceptions of Agency” and “Negotiating Agency.” The various expert perspectives converged around certain inquiries and insights that hold promise for reorganizing our relationships with AI.
Meanings of “Agency”
A fundamental challenge for discussing agency in the context of AI is to understand exactly what we mean by “agency.” This complex issue raises numerous related questions, including:
- What does “agency” mean for humans?
- What does “agency” mean for objects such as an AI?
- Does human–AI interaction grant some humans more agency than others or, worse, treat some as more fully human than others?
Humans
In discussing human agency, Dan Lizotte, an Associate Professor with joint appointments in Computer Science and Epidemiology & Biostatistics at Western, recalled the “agentic state” identified by Stanley Milgram as one where people simply, and often cruelly, “execute the wishes of another person.” He explained that, in contrast to Milgram’s definition, we often think of “agency” as autonomy. This discussion was about more than semantics. Lizotte used the distinction to question the discourse surrounding agentic AI and highlight conflicting understandings of the ultimate hope for the technology.
From the opening land acknowledgement to a remark by Carolyn McLeod (Distinguished Professor of Philosophy and Associate Dean of Research and Graduate Studies for the Faculty of Arts & Humanities) grounded in feminist ethics, voices were clear that not all humans yet enjoy full autonomy. With big tech companies racing to develop autonomous AI, Lizotte invited the audience to reflect on the implications for human autonomy. Leaders and employees alike will benefit from thoughtful engagement with how much autonomy they may already be ceding through AI use.
Machines
Isam Faik, an Associate Professor of Digital Innovation and Sustainability at Western’s Ivey Business School, described agency as having, among other qualities, a present focus that defines “how we evaluate various possibilities of action, how we judge what’s right and what’s wrong, what’s practically better, what’s normatively better.” With this working definition, Faik questioned why we attribute agency to machines. He challenged those who interact with AI to be mindful of the tendency to see it as exercising the type of judgment that characterizes agency. Faik argued that perceiving AI as having such agency creates the risk of transforming humans from central beneficiaries of business activity into simple means of production. He also explained that this act of attribution matters because, in doing so, we cede power and redistribute responsibility to machines, with a resultant diffusion of accountability.
Participants emphasized the extraordinary asymmetries of wealth and power that exist between individual users and tech leaders. Faik invited the audience to remember that in shifting power to machines we may effectively be giving power to “the people behind those machines… and the people who control the data.” This possibility calls for careful consideration of how AI use may ultimately shift power and decision-making outside our organizations, complicating and disrupting understandings of responsibility, culpability, and accountability in ways the law has not yet foreseen.
Sustaining Human Agency
A central concern with the expansion of AI use has been the potential undermining of human capacities as we become more dependent on machines. In seeking intervention points for organizational leaders and learners striving to sustain their own and their teams’ agency, key challenges are:
- whether humans can preserve the cognitive skills and emotional resilience required to maintain autonomous action;
- whether AI, paradoxically, provides opportunities for individuals to preserve or recapture agency from tech companies; and
- whether these capacities to resist or harness the power of AI can be enjoyed equally or risk creating a new vector of inequality.
Zoe Kinias, an Associate Professor of Organizational Behaviour and Sustainability and the John F. Wood Chair in Innovation in Business Learning at Ivey, highlighted that seemingly benign versions of AI tools can subtly infiltrate our habits in ways that weaken autonomy. She pointed to growing evidence that the persistence, cognitive capacity, and genuine human connection that are foundational to autonomous action can fade with AI use, even before agentic AI explicitly takes over human decision-making. Kinias asserted that this atrophying impact of AI makes the classically important human skills of personal reflection and discernment increasingly essential.
Kinias’s observation connected closely with the warning from Luke Stark, Assistant Professor in the Faculty of Information and Media Studies, about generative AI as a sophistical technology—one that provides “bits of [apparent] wisdom” but demands a good amount of projective inference from humans for sense-making. The ability to “unmask” the program, according to Kinias, requires people to have the “skills… to be able to be critical in their evaluation of their relationships with the AI.” She also proposed attention to human-human relationships and how people influence each other on AI use.
Like other experts, Kinias insisted greater intentionality is essential in both AI-human and interpersonal relationships, emphasizing: “We want to be very thoughtful about what we let atrophy and what we build in our brains.” She further observed that, especially in the frothy AI space, leaders and learners are “so influenced by each other, even when we don’t necessarily feel it,” requiring intentional design of learning and connection opportunities. The ability of leaders and learners to enjoy the luxury of intentionality may become a differentiating factor in individual and organizational success.
As an antidote to big tech domination, Atrisha Sarkar, an Assistant Professor of Electrical and Computer Engineering at Western, contemplated an AI-driven approach to democratizing technology. Although this apparent embrace of AI’s potential seemed at odds with the general skepticism of other speakers, Sarkar’s view of AI remained measured. After an enlightening review of the evolution of computing developments, Sarkar declared humanity to be in a “post-software era.” She explained that, traditionally, software vendors have decided “what features they offer us… how they implement those features, [and] what algorithms to implement.” Software companies have stripped end users of their agency (meaning their power to choose) and forced increasingly degraded products on them. Sarkar imagines a not-so-distant future where end users claim power over AI agents to define their own interactions with software by choosing “their own design and algorithms.”
While AI agents empower the end user to command the software, Sarkar indicated that their use raises new questions and challenges. Echoing other speakers’ concerns, Sarkar highlighted the complexity of determining what “we” delegate to AI agents. Participants alluded to the multiple interpretations of “we” for understanding where decision-making occurs, from an imperial “we” seeking to dominate spheres beyond the tech ecosystem to the collective “we” of humanity. Sarkar situated important decisions about what to delegate at the level of the organization and the individual employee. More equitable access to this understanding can help extend the power to delegate to AI in truly autonomous ways beyond a select few.
Reinventing Human–AI Relationships
Although the express goal of the conference was “to continue the important work of asking good questions,” numerous possibilities for new, more adaptive approaches to human–AI interactions emerged:
- Understand agency and autonomy: Developing a clear understanding of what these concepts mean for humans and how they are being used in the AI context allows for more informed decisions about what is, can, or should be exchanged in human–AI relationships.
- Accept responsibility: Resisting attributions of agency to AI preserves human deliberation and moral agency.
- Approach AI tools as objects: According to postdoctoral fellow Andrew Richmond, conceiving of AI as a (potentially manipulative) tool rather than a friend may disrupt our implicit expectations of developing a relationship with it, especially in the context of LLMs and chatbots, allowing us to be more deliberate about whether and when to engage.
- Think about how you’re thinking: Engaging in metacognition can act as a disruptor of the vicious cycle of use and increased dependence created by AI’s atrophying effect. This process requires slowing down and observing ourselves and supports greater intentionality.
- Socialize learning about agency and AI: Creating interactive experiences where learners can give and receive social support and learn from one another about these topics may counteract the social pressure and false urgency that push people toward mindless AI adoption and use.
- Expand access to coding training combined with critical-thinking skill-building: Empowering more people to create individualized AI agents may promote widespread recovery of technological autonomy, with informed decisions about where and when to delegate to these agents.
These potential interventions are not intended as “tidy answers to these questions.” Instead, these provocative suggestions invite business leaders and educators to infuse their AI strategies with more opportunities for dynamic dialogue that explores educational innovations and sustainable business uses in which AI is deemed appropriate, not inevitable.
Cross-disciplinary conversations, such as the Rotman Institute’s April 2 “Agency and AI” conference, provide the opportunity for metacognitive social learning so that we—here meaning humanity—can continue learning and promoting human agency in an increasingly AI-driven world.
The authors would like to thank the JF Wood Centre for Innovation in Business Learning for its support.