Leadership Imperatives in an AI World

As a consultant and academic focused on large-scale change and digital disruption (as I write this, I am in the process of moving to Ivey Business School after 25 years at London Business School), I have keenly observed the scramble, sparked by the arrival of ChatGPT in late 2022, to figure out how generative artificial intelligence (AI) will affect our working lives.

In the press, much of the focus has been on the impact on employment, with conflicting headlines ranging from the “AI could replace equivalent of 300 million jobs” that was attached to a BBC report to the “AI Will Not Eliminate Jobs” that topped a column by Forbes contributor Shep Hyken. Estimates of potential gains on the productivity front have also varied significantly, ranging from a whopping 3–4 per cent increase in annual GDP growth to a paltry 0.066 per cent annual increase in productivity.

Roy Amara, cofounder of Palo Alto’s Institute for the Future, argued “we overestimate the effect of a technology in the short run and we underestimate the effect in the long run.” That could be what’s happening with AI. As things stand, I think the near-term impact on jobs and productivity will be small, at the lower end of the ranges noted above. But in the long run, who knows?

The truth is that we have no clear sense of how the AI revolution will play out. As The New York Times DealBook newsletter recently pointed out, much of the economic impact will depend on the so-called productivity paradox—that is, on whether AI will snap the relatively lacklustre streak of technology-generated productivity gains that has puzzled economists for years. Either way, given the current speed of change, we cannot rule out the scary predictions of futurist Ray Kurzweil—author of the 2005 bestseller The Singularity Is Near and its 2024 sequel, The Singularity Is Nearer—who sees a merger of artificial and human intelligence in our lifetimes.

So, we face an unknown future. But while we can’t know what to expect, we can plan for an uncertain future however fast it plays out.

In this piece, I share reflections on what business leaders should be doing as AI embeds itself deeper in our day-to-day working lives. In some ways, this article offers timeless advice because it is about how businesses should adapt in the face of technological change—and technologies are always changing. That said, the arrival of generative AI has arguably thrown up some new challenges, especially in the world of professional work, which I will also address while bringing the qualities of effective leadership into sharp relief.

This paper separates the external and internal aspects of organizational leadership. Externally, leaders have a strategic imperative to create and maintain a distinctive value proposition in the face of strong AI-enabled forces for convergence. Internally, leaders have a moral imperative to ensure that workers continue to have worthwhile and meaningful jobs in an increasingly algorithm-controlled working environment. It goes without saying that these are both difficult and important—that’s why I call them imperatives. Following through on them will require conscious effort, along with courage and tenacity. That’s what leadership is all about.

THE STRATEGIC IMPERATIVE: A Distinctive Value Proposition

The hallmark of any successful company is distinctiveness—a brand or offering that stands out from the crowd. Distinctiveness gets you talked about, drives demand, and usually commands a premium price. In a fiercely competitive world, companies that dare to be different capture our admiration (think Apple, Red Bull, Tesla, or Virgin).

Distinctiveness rarely lasts. Successful innovators attract imitators. As they grow, innovators also typically take on traits of the companies that they sought to define themselves against. Tesla is a case in point. After growing from maverick upstart to mainstream player in less than a decade, its once-unique offerings are converging with those of traditional majors such as GM and VW.

While strategic convergence occurs through executive decision-making, it is enabled by digital technology. Easy access to customer trend data and competitor information allows rapid benchmarking. Outsourcing of underlying technologies to a limited number of top-tier suppliers leads to the commoditization of components and features.

But generative AI pushes competitive convergence further in ways that are not necessarily strategic. Imagine you are asking ChatGPT to suggest ways of innovating in your industry—perhaps with a new product design or an untapped market segment. I have done this in my area of expertise many times, and many of you will have done likewise. It typically gives you some pretty interesting suggestions. But it also offers reasons to pause before getting too excited.

As an OpenAI executive noted at a London Business School conference last year, generative AI “excels at mediocrity.” Keep in mind that the answers you get from ChatGPT are generated from the body of data on which it was trained, which makes truly out-of-the-box ideas unlikely. Its attempt to give you the best-possible answer to your question also steers it towards a modal response, not one in the tail end of a distribution curve. So, if you want a truly exceptional strategy, you need to look elsewhere.

Simply put, if the digital revolution has already pushed competitors to converge in how they act (how they make and distribute their products), the GenAI revolution is now pushing competitors to converge in how they think (the ideas they come up with and develop). The net result is likely to be further reversion to the mean—towards lowest-common-denominator solutions that no one is excited about.

So, what’s the solution?

Leadership, as always, is about taking on difficult challenges and charting a way forward that the less courageous decline to take. The point here is that this is a fundamentally human endeavour that AI can’t help you with. To ensure your organization remains competitive in the future, here are the qualities you need:

Imagination: Human imagination is a wonderful thing, impossible to manage or replicate. Researchers have sought to reverse-engineer the way entrepreneurs like Steve Jobs and Richard Branson came up with their business ideas, but without much success. Ultimately, we have to accept that their insights were intuitive and somewhat random. Which is just as well really, because if it were possible to codify the thought process of Steve Jobs, it would be folded into the next generation of AI algorithms, and its uniqueness would be lost.

As Giles Hedger, former head of strategy at Leo Burnett, put it years ago in a column about the impact of technological innovation on his industry:

Marketing… will never be about reductive computation. It will never owe as much to the computer that cracked the Enigma code and began the digital age as it does to the expansive power of ideas. Because marketing is not a riddle of elimination but a test of the imagination, and there is no algorithm or app for that.

Unreasonableness: A close bedfellow of imagination is what George Bernard Shaw called unreasonableness—the capacity to believe that you are right and everyone else is wrong. It takes an unusual mix of personal qualities to pursue an idea that everyone around you says is crazy. You need to be smart, creative, stubborn, resilient, and impervious to criticism. Elon Musk is today’s prime example of an executive with this trait, with SpaceX and Tesla as testaments to the power of unreason. But Jeff Bezos is a close second thanks to the “willingness to be misunderstood” that he has sought to instil in Amazon leaders. As he noted during a 2011 shareholder meeting:

Any time you do something big, that’s disruptive… there will be critics. And there will be at least two kinds of critics. There will be well-meaning critics who genuinely misunderstand what you are doing or genuinely have a different opinion. And there will be the self-interested critics that have a vested interest in not liking what you are doing and they will have reason to misunderstand. And you have to be willing to ignore both types of critics. You listen to them, because you want to see, always testing, is it possible they are right?

But if you hold back and you say, “No, we believe in this vision,” then you just stay heads down, stay focused and you build out your vision.

Generative AI doesn’t come up with unreasonable propositions, since it is trained on a huge body of data that is—almost by definition—reasonable. Perhaps the best way to exemplify this point is through the world of financial investing. If you want a low-cost, low-risk investment solution that allows you to manage your pension savings with as little hassle as possible, get yourself a robo-advisor. It will use sophisticated AI to ensure you track the market index of your choice. But if you want to beat the market, you should get an activist fund manager—someone who is prepared to take a contrarian position, hold their nerve, and reap the rewards.

Needless to say, contrarian investors can get it spectacularly wrong (just like Elon Musk with Twitter), but as long as you are prepared to tolerate losses alongside wins, contrarian positions based on human instincts are still the best bet for market-beating gains. As Financial Times reporter Miles Johnson argued in a 2017 article entitled “When it comes to investing, human stupidity beats AI,” the natural instincts of investors like Warren Buffett can’t be replicated by AI.
Imperfection: In most walks of life, we prefer authenticity to perfection. People will pay a premium to go on safari in the wild, rather than drive around a game park, even though they might not see as many animals. We pay more for mined diamonds than lab-created synthetic ones because their small imperfections give them character. (According to The Hollywood Reporter, big-name actors even value the appearance of authenticity more than perfect veneers, which is why they are willing to pay a premium to have imperfections implanted in their smiles when they visit the dentist.)

Generative AI doesn’t generate authenticity. I read a lot of student essays, and most are now written with help from ChatGPT. The grammar and spelling are better than ever, but the sentence construction is formulaic, and the tone is insipid. I long for a quirky argument, and I don’t mind when small errors creep in, because those flaws reassure me that it was a real human who wrote the text.

When it comes to business strategy, authenticity adds value. Consumers don’t want flaws that detract from the usability of products and services. But if they are like me, they will pay a premium for a craft beer (rather than a Corona or Heineken) to experience a new taste. And when it comes to customer service, I would prefer to talk to a call centre worker with a personality rather than one who has memorized a script.

In sum, GenAI makes one of your key jobs as a leader harder because it channels our thinking towards lowest-common-denominator choices when the objective is a distinctive business strategy. The antidote is to rekindle our human qualities of imagination, unreasonableness, and imperfection, and to deploy them proactively in our strategy-making.

THE MORAL IMPERATIVE: Worthwhile and Meaningful Work

Earlier this year, a social media post by fiction author Joanna Maciejewska went viral after touching on the fundamental point that we have always grappled with when a new technology comes along: is it a complement to or substitute for human effort? Does it help us do the things we want to do, or does it render them obsolete? Maciejewska wrote:

You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.

In the business world, there are two schools of thought on this topic, one gloomy and the other optimistic. First, there is the vision of AI as a tool for control or coercion. Consider the call centre worker, whose every utterance is scripted by an algorithm. Call centre systems are now sufficiently sophisticated that they can adapt the script in real time depending on the intonation and language used by the caller. Students of management will recognize this as a form of “Neo-Taylorism,” named after American engineer Frederick Winslow Taylor.

As the father of scientific management, Taylor made brutal attempts to optimize human effort by scripting a worker’s every move, but these were not especially effective and were largely disavowed by the human relations movement that followed. Advances in AI, however, now make it possible to put his ideas into practice to an extent that he could not have imagined.

The optimistic view of the future of work is based on the notion that AI will take on the tedious tasks that we don’t want to perform, enabling workers to do the more interesting parts of their jobs better. Robotic process automation (RPA) is a basic form of AI that many companies are already deploying to take on tasks such as searching texts, correcting documents, and processing invoices. This has freed many clerical and professional workers to focus on the more value-added parts of their roles.

Neither the coercive nor the enabling view of AI technology is pre-determined. And nobody should let AI solutions (or the companies selling them) dictate what happens. Leaders are responsible for determining how their workers will be impacted. They must make choices about the type of workplace that they want to create as they strive to meet objectives, and then deploy the latest technologies in ways that support their vision for the future of work.

Personally, I think it is essential for leaders to actively ensure humanity remains in the workplace. If you agree, here is a list of workplace qualities that need to be emphasized, drawn from self-determination theory—a body of research that summarizes the factors contributing to an engaging working environment.

Autonomy: In our work and in our lives generally, we want freedom to do things our way. That’s why good leadership is, and has always been, about providing clarity of purpose and then enabling individuals to figure out for themselves how best to contribute to that purpose within a frame. Unfortunately, most organizations do not deliver on this ambition.

A relatively recent European study showed only 35 per cent of employees responding affirmatively to the question, “Can you influence decisions that affect you in the workplace?” And the worry is that AI is making this situation worse. So, as you ponder rolling out the latest piece of AI technology, pause and consider this simple question: what effect will it have on employee discretion?

As discussed above, AI can be used in a coercive way (it tells people what to do) or in an enabling way (it helps them figure out what to do). But while there may be occasions when you need to take a coercive approach (e.g., when operating in a zero-error environment with human safety at stake), you should where possible opt for the enabling approach if you want a productive and engaged workforce.

Belonging: It is human nature to seek a sense of attachment to those around us, and to identify with others in our community groups. Good leadership builds on this need for belonging by creating opportunities for social engagement and encouraging people to help one another. But many organizations also struggle with this. Employee engagement has been a perennial concern in large organizations, and the expansion of remote work that the pandemic spawned (and the digital revolution enabled) made the problem worse.

Post-COVID, it became common practice to go into the office two or three days per week. Many people who work freelance do their jobs from home with no social interaction. So once again, here is reason to pause and think about how you are deploying your digital and AI technologies. Are you atomizing the workplace, or are you creating a platform for conversation? Does the technology push your teams further apart or pull them closer?

Competence: The desire to learn new things and build expertise is hardwired into most of us, and again this is true both at work and in life more generally. The most effective leaders are the ones who set us challenging targets and support us in pursuit of those targets, coaching us, giving us the tools to do our job, and providing growth opportunities.

How does AI affect this process? It is getting better and better at doing many things that professionals used to take pride in, from software programming to copywriting to diagnosing cancer. But for the most part, it is still complementary to human expertise (rather than a complete substitute), so the standard way forward is to encourage people to harness the technology to help them become even more expert. Doctors routinely use AI alongside their own judgment to make diagnostic decisions. Programmers use AI to do the routine parts of their work so they can focus on the more difficult tasks. At business schools, we embrace ChatGPT, and we give students advice on how to use it to take their learning to greater heights.
But there will be some areas where human competence is completely substituted (transcription, copyediting, and language translation are heading this way). Some jobs will inevitably disappear as others emerge. As this occurs, it is the responsibility of leaders to retrain and reposition impacted workers to use their talents as effectively as possible.

In sum, AI is changing the game in many unpredictable ways, and your role as a leader is to choose how best to deploy it to get the most out of your workforce. When making your management choices, keep in mind there is no contradiction between embracing AI technology and providing worthwhile and meaningful work for employees. One important lesson from an earlier wave of the digital revolution is that the productivity benefits from investing in new IT systems were only achieved by companies that adopted innovative working practices. And I suspect it will be the same this time round as well.

CONCLUSION

Many observers have argued that the AI revolution puts us at a unique point in human history. In Homo Deus: A Brief History of Tomorrow, bestselling Israeli author Yuval Noah Harari calls it the great decoupling—where for the first time in human evolution there is a possible schism between consciousness (subjective awareness) and intelligence (problem solving). This decoupling, he argues, creates huge risks for society in terms of inequality and a loss of social cohesion.

My point in this essay is to argue that leaders have the agency and the responsibility to prevent this happening within their own organizations. Their job is to actively recouple consciousness and intelligence, to ensure there is a human quality to their products and services, and to safeguard the features of work that make it worth doing.

I would like to pretend this is a new argument, but it’s not. In his 1982 book Megatrends, futurist John Naisbitt coined the term high tech/high touch, arguing that it is incumbent on us to “balance the material wonders of technology with the spiritual demands of our human race.” Writing in the early years of the computer revolution, he could not have foreseen just how advanced computing technology would become. But the fundamental leadership challenge he identified remains the same.

Back in 1982, getting the high tech/high touch balance right wasn’t easy. With AI, it is getting a lot harder. As a result, it is a good time to remember that leadership is a process of social influence, and it requires attention to both behaviour and character. The best leaders aren’t just good at inspiring others to achieve a challenging goal. They are also clear about what they stand for as individuals. In other words, human integrity around what is right and wrong has always mattered enormously, but it will matter even more as the world becomes increasingly influenced by AI.

REFERENCES

  • Chris Vallance, “AI could replace equivalent of 300 million jobs – report,” BBC, March 28, 2023.
  • Shep Hyken, “AI Will Not Eliminate Jobs,” Forbes, August 27, 2023.
  • M. Chui, E. Hazan, R. Roberts, A. Singla, K. Smaje, A. Sukharevsky, L. Yee, and R. Zemmel, “The Economic Potential of Generative AI: The Next Productivity Frontier,” McKinsey Global Institute, June 14, 2023.
  • D. Acemoglu, “The Simple Macroeconomics of AI,” Massachusetts Institute of Technology working paper, May 12, 2024.
  • Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (Viking Press, 2005).
  • Ray Kurzweil, The Singularity Is Nearer: When We Merge with AI (Viking Press, 2024).
  • Eric Wilson, “AI Spiral into Mediocrity,” LinkedIn, February 14, 2024.
  • Giles Hedger, “The Fallacy of Our Time,” Campaign, April 23, 2015.
  • John Cook, “Jeff Bezos on Innovation: Amazon ‘Willing to Be Misunderstood for Long Periods of Time,’” GeekWire, June 7, 2011.
  • Miles Johnson, “When It Comes to Investing, Human Stupidity Beats AI,” Financial Times, April 10, 2017.
  • Edward Deci and Richard Ryan, Handbook of Self-Determination Research (University of Rochester Press, 2004).
  • E. Arnal, W. Ok, and R. Torres, “Knowledge, Work Organisation and Economic Growth.” In Internet, Economic Growth and Globalization: Perspectives on the New Economy in Europe, Japan and the USA, 327–376 (Berlin: Springer, 2003).
  • Yuval N. Harari, Homo Deus: A Brief History of Tomorrow (Harvill Secker, 2015).
  • John Naisbitt, Megatrends: Ten New Directions Transforming Our Lives (Warner Books, 1982).