Why it’s Time to Retire the Term “Artificial Intelligence”

The discourse about artificial intelligence, and generative AI specifically, has long been dominated by breathless hype. Last year, Katie Britt, a U.S. Senator from Alabama, and Jacob Helberg, an adviser to Palantir Technologies, insisted that Hollywood science fiction was about to become reality. “Artificial-intelligence-enabled humanoids capable of blending seamlessly into our world will soon be a part of everyday life,” they wrote in a Wall Street Journal opinion piece that implied we could all soon be rubbing shoulders with R2-D2 and C-3PO from Star Wars.

No disrespect intended, but this statement is patently absurd. To be fair, Britt and Helberg have plenty of illustrious company when it comes to making bold but questionable claims about AI achieving human-level capabilities. Last year, Tesla CEO Elon Musk told a conference in Riyadh that this could happen by the end of this year. Anthropic CEO Dario Amodei says AI could even surpass human capabilities by 2027.

When it comes to these sorts of predictions, of course, the word “could” typically bears a tremendous amount of weight. And therein lies the problem. While big, exaggerated statements about AI are great for drawing eyeballs, the hype they generate has real consequences. It distracts business and political leaders from more pressing issues, and it leads to costly misallocations of resources and investment.

A 2024 KPMG survey found that 43 per cent of U.S. companies with at least US$1 billion in annual revenue intended to invest at least US$100 million in generative AI over the next 12 months. That’s despite uncertainty surrounding the related ROI. According to an IBM study published late last year, 47 per cent of companies already reported a positive ROI from their AI investments. As the study optimistically noted, that’s “almost half.” But the point here is that more than half are not reporting positive results. Meanwhile, according to Sequoia Capital data reported by The Wall Street Journal last year, companies spent US$50 billion on chips from Nvidia to train AI in 2023 but realized only US$3 billion in revenue. Yet the hype persists.


Cui bono? The hype around AI invariably benefits the people generating it: the journalists grabbing eyeballs with clickbait headlines, the academics raising their profiles with revolutionary predictions, the investors pumping up the value of their investments, and the consultants convincing clients to pay big fees to avoid the existential threat of being left behind.

Simply put, there’s less money in being skeptical or even reserved, so the hype persists and fuels greater hype. Until counteracted by the lagging but relentless pressure of reality, hype is self-perpetuating by nature. As things stand, for those who expect R2-D2 and C-3PO to cross over from fiction to reality, the fear of missing out on AI’s touted revolutionary potential still vastly outweighs the fear of overinvesting in it or of focusing on it to the detriment of other priorities.

Spouting hyperbole about AI can also flatter the ego. Many seem to take delirious pleasure in issuing pronouncements not about its positive potential, but about the existential threat it poses to humanity. While relatively novel in the business world, this has clear analogues in other realms of human activity. Witness the prevalence of apocalyptic prophecies in many religions, and the peculiar delight many of their adherents take in ruminating on doomsday scenarios. Peculiar, but understandable. While others live in The Matrix, oblivious to the coming End Times, being part of the elect is rather sexy.

AI also combines mystery and accessibility. Most people do not understand how a transformer-based large language model (LLM) works, nor how the latest “reasoning” models operate, any more than most of us really understand how a quantum computer works. Quantum computing may prove to be far more transformational than current LLMs and AI “reasoning” models, but the hype differential between quantum computing and AI is striking. Surely a major reason for this is that many of us have ChatGPT or Claude or Gemini on our phones and spend a lot of our time experiencing their uncanny ability to emulate human conversation. Many of us are also beginning to interact with AI agents. Meanwhile, none of us has a personal quantum computer to play with.

AI hype is not tulip mania. While most outlandish pronouncements about AI technology warrant skepticism, the field has made major strides in the last few years, and current AI tools are undeniably powerful. Nonetheless, all the positive hype hampers wise decisions about how much, and where, to invest in AI. Likewise, Skynet-inspired AI doom porn impedes the important and practical work of assessing the real risks attendant on different AI applications, and of deciding how much and how best to invest in various forms of mitigation.

So how can we combat the deleterious consequences of delusional AI hype? Hopefully, by diagnosing the drivers behind it, we can diminish both its intensity and its current distortion of clear thinking. Meta’s chief AI scientist Yann LeCun says it would be more accurate to refer to artificial general intelligence (AGI) as “advanced machine intelligence.” So perhaps it’s time to replace the misleading term “artificial intelligence” with a more accurate label, such as “simulated intelligence.”

Intelligence is a notoriously slippery concept to define. Definitions are highly contested, and this alone should make us wary of wanton applications of the word. That said, constitutive elements of intelligence include the ability to perceive information, retain it, and subsequently apply it in future situations in ways that are context-adaptive. Let’s further define “context-adaptive” actions as those that achieve a goal under variable conditions. The foregoing nets out to a simple definition of intelligence: the ability to perceive and solve problems. 

Current generative “AI” tools do not actually “perceive” input data, because perception entails awareness. Moreover, for an entity to be able to engage in problem-solving, it first needs to translate stimuli (input data) either about its environment/context or its own internal state (or a combination of both) into recognition of a problem (be that hunger, or how the company can increase EBITDA) and then weigh different courses of action. Call that agency. (As an aside, perhaps something unique about human intelligence is its seemingly limitless drive and ability to define new problems, like finding ways to represent three-dimensional space in two dimensions—from occlusion, to shading, to vanishing points, to the cubism of Braque and Picasso.)

There’s a long history of admonition against anthropomorphizing in the study of animal intelligence. (Set aside for the moment fascinating questions arising more recently in the study of whether, and if so in what sense, plants and fungi are intelligent.) Current thinking balances this admonition against the opposite error—that of minimizing commonalities in human and other animal cognition, including the experience of emotions. Regardless of how this debate plays out, the use of anthropomorphized language to describe generative AI tools is highly misleading, overstating both risks and benefits, often wildly misrepresenting AI’s functional capabilities, and thereby compromising decision-making about how much to invest in and where to focus development and deployment of such tools.

Like many other technologies that preceded them, AI tools are indeed powerful and useful human aids with a stochastic nature that lends them an air of mystery. But they lack agency and are not intelligent problem-solvers. I state this not to discount the current value of simulated intelligence tools in a variety of applications, nor what these tools might be able to do in the future as the technology evolves. But in the meantime, better decisions will be made if we use more accurate (de-anthropomorphized) language to describe what AI tools, or simulated intelligence tools, can do and how they do it.

About the Author


Jonathan Hughes has consulted to Global 2000 clients for more than 25 years. He has worked with leading companies across a range of industries in the Americas, Europe, Asia, and Africa, with a focus on growth and competitive strategies, innovation, and supply chain transformation. He recently founded Pareto Frontier Strategies, a start-up focused on using multiple AI technologies to transform complex B2B negotiations. In addition to Ivey Business Journal, Hughes’ articles have appeared in Harvard Business Review, California Management Review, MIT Sloan Management Review, and other leading journals. He graduated with honors from Harvard University with a degree in philosophy, and has been a guest lecturer at the Fuqua School of Business at Duke University, the Darden School of Business at the University of Virginia, the US Military Academy at West Point, the Wharton School of Business at the University of Pennsylvania, and the Advanced Program of Instruction for Lawyers at Harvard Law School. Hughes can be reached at jhughes11489@gmail.com.
