Intelligent Human Thinking About AI


It isn’t easy to believe everyone who has something to say about artificial intelligence (AI). After all, the emerging technology (or group of technologies) in question has generated jaw-dropping economic-impact guesstimates along with more than a few doomsday scenarios. But believing in the AI revolution becomes easier after listening to Steve Irvine speak on the topic, since he left what most people would consider a dream job in Silicon Valley to start his own AI company in Toronto.

A few years ago, the former Facebook executive returned home to launch Integrate.AI, which uses machine learning and cross-industry intelligence to help clients detect propensity to buy and offer individualized customer experiences. As he noted in a Globe and Mail commentary at the time, “A lot of people immediately assumed I had made the move strictly for personal reasons, as my wife and I are both Canadian. But I’m not here to take a step back in my career. While I’m certainly happy to be closer to family and living again in a great country, I’m moving back because I believe that AI is the biggest technology advancement that we will see in our lifetimes and that Canada is the best place in the world to build a global leading AI company.”

I had the pleasure of meeting Irvine in mid-2019, when I moderated a Q&A session following his keynote presentation at the Ivey Business School’s annual Scotiabank Digital Lab Speakers Series on Digital Transformation in Banking. Irvine’s talk was entitled “Separating Hype from Reality in AI,” and it clearly put AI’s massive potential to reshape business and life into an easy-to-understand narrative. But the Q&A session that followed Irvine’s presentation also drove home the fact that many people still don’t fathom the level of disruption coming our way.

Over the next decade, AI is projected to boost global GDP by between US$13 trillion and US$15.7 trillion. With so many factors in play, these numbers could prove way off when we do the math in 10 years. But before you let optimistic-sounding guesstimates lull your company into a game of wait and see, please note that one of the factors in play is a potentially huge gap in ROI achieved by early and late adopters. Indeed, as researchers at McKinsey pointed out in a 2018 discussion paper, late adopters “might find it difficult to generate impact from AI, because front-runners have already captured AI opportunities and late adopters lag in developing capabilities and attracting talent.”

Simply put, waiting to see how AI works out for competitors only improves your organization’s chances of becoming one of the disrupted. So, to help raise awareness of the big picture, here are six things everyone needs to know (or worry about) when planning for the future.

Failure Drives Progress

A fatal accident involving one of Uber’s autonomous test vehicles in March 2018 was recently attributed to an “inadequate safety culture” by the U.S. National Transportation Safety Board. As Statista’s Felix Richter recently pointed out, the incident fuelled doubts about the safety of fully autonomous vehicles, with 60 per cent of the respondents in a 2019 YouGov poll reporting they would feel unsafe around self-driving cars.

[Chart: “Vehicles are ready for autonomy – but are we?”]

As a high-profile application of AI, autonomous vehicles naturally garner a lot of attention when they fail—which is a good thing, since progress shouldn’t require putting anybody at risk. But as Richter also noted, industry analysts still forecast that about 750,000 “autonomous-ready” vehicles will be on roads worldwide by 2023. And this isn’t just wishful auto-sector thinking.

Like all emerging technology, AI has had its share of early-development failures. But don’t let the setbacks convince you that all the talk about revolutionary change is just another round of tech-sector hype. After all, as Ivey Dean Sharon Hodgson points out, the failures indicate that the AI revolution is real.

Hodgson was recently selected to lead Ivey in the Age of Disruption because her expertise ranges from leading global businesses and running technology-enabled process transformations to artificial intelligence and advanced analytics. And as she explained in the school’s Intouch alumni magazine: “When we talk about AI, there is a lot of focus on high-profile failures. That’s not surprising because it’s emerging tech and there are more failures than successes in early deployments. But the focus on failure is misleading because in my experience pretty much every project I’ve seen or been involved with, whether it’s been successful or not, has taught us something. These pilot programs are not typically black-or-white experiments. They might be designed to solve a problem, but in the process of trying to achieve your goal, there are learning gateways you go through, and with each gate passed, intellectual value gets created. So, every project out there, whether it is deemed a success or not, moves us forward. I think that process of learning through pilots is massively misunderstood.”

AI Isn’t the Future—It’s Now

Remember all the desperate infighting by sales staff plotting to get their hands on the so-called “Glengarry leads” in the movie Glengarry Glen Ross? The leads in question were considered worth fighting for because the information they contained had the power to instantly transform underperformers on the verge of being fired into top closers worthy of generous rewards (and free coffee). Now imagine you run a sales organization that generates only magical Glengarry leads—while the competition doesn’t.

Adopting AI is all about gaining this kind of super-sized competitive advantage, which is why a massive gap in ROI could appear between early and late adopters.

In the IBJ Insight “AI-Driven Competitive Advantage Isn’t the Future—It’s Now,” A.T. Kearney’s Arjun Sethi and Piyush Dubey point out that too many people see AI as something that’s coming soon to a business near you when the technology has already been widely deployed. Reporting on a comprehensive scan of how more than 90 leading companies in 15 different industries are using AI, the authors note that about 37 per cent of the use cases documented are already implemented at scale and delivering meaningful benefits. On average, these companies reported achieving an ROI of 15 to 30 per cent within 24 months, while another 49 per cent of the use cases examined are in the prototype and production phases.

With AI applications appearing poised for substantial near-term growth, Sethi and Dubey urge reluctant business leaders “to shift from fearing AI as a disruptive force to more fully embracing AI’s demonstrated capacity to significantly improve performance, customer experience, value generation, and cost efficiency. The winners will be those who have been more proactive rather than reactive in embracing AI.”

Humans Need to Reprogram

Nobel laureate economist Christopher Pissarides doesn’t think technological innovation will lead to long-term changes in unemployment rates. As he noted in a Globe and Mail commentary, history suggests that today’s big economic challenge is training people to do the new jobs created by emerging technologies, not dealing with jobs that they eliminate.

As Pissarides writes, “Just as some jobs benefit from the new technologies, while others become obsolete, so, too, some skills become more valuable, while others are substitutable.”

The key to healthy future employment numbers appears to be embracing the fusion of human and machine talent. Accenture’s James Wilson and Paul Daugherty call this the “missing middle,” where humans effectively collaborate with artificial intelligence rather than see it as a rival. In the IBJ Insight “Tools for Human Success in an AI-Driven Workplace,” they outline eight skills that human workers need to develop in order to thrive in an AI-driven marketplace. And the skills in question are not all about technology—they’re about fostering creativity, sound judgement, working well with others (including bots), and anticipating the unexpected.

The good news is that humans appear open to reskilling, at least according to a 2019 survey of about 5,000 U.S. knowledge workers—conducted by Britain’s Blue Prism, a leading supplier of workplace robots—in which 87 per cent of respondents reported being willing to learn how to best work with intelligent machines.


No Clear Robotic Glass Ceiling

Upper management be warned. Office workers are not just willing to work with digital colleagues—more than a few would rather report to one than their current boss. In a U.S. survey of 1,080 managers (aged 25 to 55), 20 per cent of respondents indicated that they would gladly work for a robot. And that number increased to 30 per cent when the upgraded supervisor in question was described as “friendly like the C-3PO robot from Star Wars.”

Is this possible? In the IBJ Insight “Law and AI,” McMillan lawyers Rish Handa and Sophie Papineau-Wolff envision a future with digital legal assistants. But what about a roboss?

Well, never say never, says Blue Prism CEO Alastair Bathgate. In an executive Q&A on the future of work and automation, he argues that the workplace robots being developed today will mostly be remembered as freedom fighters who liberated humans from mind-numbing and time-consuming tasks. But he freely admits wondering whether the future will have a totally digital organization, maybe with one human for regulatory purposes.

“I can remember a funny guy I was working with years ago speculating about the future of data-centre staffing,” Bathgate explains. “The ideal model, he suggested, was a totally automated operation overseen by a human and a dog. The human was there to feed the dog. The dog’s job was to keep the human away from the computer equipment. But that joke isn’t just funny anymore. I don’t think we are ready to have dogs keep humans away from decision making. But I can envision executive-level robots. Why not? Think of all the time that typically goes into gathering, analyzing, and discussing the data required for some decision making, which often ends up as just a gut call anyway. Obviously, there are limitations. But in some areas of decision making, I can envision a digital executive authorized to analyze all the available accurate information that exists inside and outside an organization and then issue execution commands to the appropriate workers—human or robotic. As opposed to winging it, or wasting time on briefings or discussion meetings, the digital executive would make a purely data-based decision. That would be a great application of AI, wouldn’t it?”

AI Isn’t All Rosy

With all the potential to do good things and make a ton of money along the way, it is only natural for the future-is-bright camp to downplay the dark side of AI. But you don’t have to be a risk-assessment specialist to worry about how unprepared the world is for what is coming our way, even if job-market disruption proves to be just a short-term economic hiccup.

The future seems destined to have autonomous weapons circling the earth while AI is deployed on the ground to control populations, manipulate elections, and promote civil unrest for strategic reasons. As Russian President Vladimir Putin reportedly put it: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Putin, of course, is just talking about a world in which AI remains under human control, which could change. And that’s why more than a few big brains have raised an alarm so dire that it is essentially being ignored. But you don’t have to believe in a Skynet scenario to worry about the impact of AI.

As things stand on the privacy front, data-hungry companies have little incentive to really care about privacy standards. In fact, according to Privacy Commissioner Daniel Therrien, even government agencies like Statistics Canada collect and use consumer data in ethically challenged ways. And with the growing ability to analyze online footprints and images, very little about our future lives will be protected from algorithms—which can already predict what makes us tick politically and click commercially.

Indeed, everything from personal spending habits and exercise routines to sexual fantasies, attention spans, and what we eat, drink, and smoke is theoretically up for grabs. How much of this information is used to offer—or deny—people jobs, products, and services like health insurance is a huge issue even for ethical organizations, since it can be impossible to reverse engineer the logic behind some complex AI-based decisions.

As Pegasystems VP Rob Walker notes in “Governing Algorithms,” the answer for organizations with good intentions “is to balance effectiveness with responsibility by investing in methodologies and solutions that let users create an automated AI policy that flags algorithms that put them at risk in certain areas. This will keep everyone honest and ensure no one—and nothing—goes off the rails.”

But since AI is a double-edged sword, we also need to worry about protecting consumers from dishonest folks with bad intentions. For example, if AI can be used to identify linguistic cues in corporate disclosure documents in order to help investors avoid companies downplaying risk, then it can also power the next generation of investment scams.

As my latest Big Picture column in Financial Post Magazine noted, algorithms are already reportedly kicking human butt when it comes to pushing financial products at JPMorgan Chase & Co., moving the bank’s chief marketing officer, Kristin Lemkau, to declare in July: “Machine learning is the path to more humanity in marketing.” And that statement could prove naïve since the power of persuasion is often abused.

With AI driven by data, there is also the question of environmental impact. In a recent Globe and Mail commentary on the environmental repercussions of streaming, Jane Kearns, vice-president of growth services at MaRS Discovery District, noted: “Hulu doesn’t come out of a tailpipe. Cows don’t belch Spotify. Birds don’t drown in Twitch ponds. But while we can’t see the environmental repercussions of streaming, that doesn’t mean they don’t exist.”

According to Kearns, the world already has more than eight million data farms using more than 200 terawatt hours a year worldwide, the equivalent of Australia’s annual electricity consumption. And data consumption is going to jump dramatically as AI advances. “Maybe the efficiencies and abatements will eventually grow to match our consumption,” she writes. “But for the moment, we can’t fix this problem until we recognize it.”

Ironically, the collective human impulse to essentially ignore serious issues created by excessive consumption is one of the reasons AI is seen as a potential threat to our dominance of the planet.

Authentic Leadership Required

The Honourable Perrin Beatty, head of the Canadian Chamber of Commerce, was recently in Ottawa to give Ivey’s annual Thomas d’Aquino Lecture on Leadership at the National Gallery. The address was entitled “Canada Adrift in a World without Leaders.” But that was somewhat misleading because there is no shortage of leaders on the planet. “The issue,” Beatty noted, “is whether the quality of leadership we see is up to the existential challenges that confront humanity.”

As Ivey Professor Gerard Seijts and I pointed out in “Rounding Out the U.S. Call for Social Corporate Purpose,” capitalism needs more than a new definition of executive responsibilities to survive the significant social and economic challenges ahead. Today’s marketplace is chock full of talented executives with good intentions. But if the financial crisis taught us anything, it is that sustainability requires an increase in the number of corporate managers and directors with the character required to recognize that leadership is a privilege, not a right, and to put integrity before job security as a result.

Indeed, as we noted in “Recognizing Jane Philpott’s Ongoing Contribution to Canada,” when Philpott resigned from Prime Minister Justin Trudeau’s cabinet over the SNC-Lavalin affair, the former Canadian cabinet minister set an indisputable example of the kind of quality leadership needed—in politics and business—to meet the daunting list of challenges identified in Beatty’s speech.