Governing Algorithms

As any fan of comic-book superheroes can tell you, Peter Parker’s Uncle Ben wasn’t as technically gifted as his web-slinging nephew. But he still offered some words of wisdom that anyone deploying artificial intelligence (AI) technology today should heed.

After all, one of Uncle Ben’s famous homilies was “with great power comes great responsibility.”

Like the radioactive arachnid that turned Parker into Spider-Man, AI provides humankind with seemingly magical superpowers. In the workplace, it enables organizations to personalize experiences, streamline workflows, automate repetitive tasks, and build prediction algorithms at a scale no human could ever hope to achieve.

Using these AI superpowers, businesses can infer a lot about individuals, probably more than most people realize. AI can infer individual preferences, spending patterns, browsing behaviour, and propensity to buy. But that’s not all. Following digital breadcrumbs (e.g., likes), AI can also infer our gender, sexual orientation, income, ethnicity, marital status, religious beliefs, and even political views, and fold them into its equations. And that’s why deploying the power of AI comes with great responsibility.

As I note in a white paper entitled “Artificial Intelligence in Business: Balancing Risk and Reward,” judicious control is critical because deploying uncontrolled AI for certain business functions may cause regulatory issues and unethical behaviour that put your organization at risk.

Keep in mind that not all AI algorithms are the same. Not unlike a human brain, opaque algorithms (which include deep neural networks, the output of some genetic algorithms, and other types of mathematical models) can be highly accurate, powerful, and useful. But this is the kind of AI that beat humankind at Go while no one fully understood how it played the game.

Opaque algorithms have so many layers that at present there’s really no way for a human to understand the logic behind predictions being made. As a result, an explanation of an opaque model can at best be an approximation. Decisions made by transparent AI, on the other hand, can be reverse engineered, if necessary, because the models and patterns it uses can be expressed in if–then–else or decision tree formats.
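
To make the contrast concrete, here is a minimal sketch, using scikit-learn and a purely illustrative set of lending features, of how a transparent model's logic can be dumped as human-readable if–then–else rules; there is no comparable readout for the millions of weights inside a deep neural network.

```python
# Minimal sketch: a transparent credit model whose logic can be read back as rules.
# The feature names and data are illustrative placeholders, not a real lending dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((500, 3))                                    # income, debt_ratio, years_employed
y = (X[:, 0] - X[:, 1] + 0.1 * X[:, 2] > 0.4).astype(int)   # 1 = approve, 0 = decline

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# export_text prints the learned tree as nested, human-readable conditions,
# which is what makes the model auditable and reverse engineerable.
print(export_text(model, feature_names=["income", "debt_ratio", "years_employed"]))
```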

Why is this an issue? Think risk management. When an opaque lending algorithm is used to determine consumer credit limits, for example, the bank feeding it data might believe it’s providing neutral information (which may not even include gender as an attribute). But if the data contains hidden gender bias, the AI model will likely make biased decisions. And with no way to inspect opaque AI’s inner workings, substantial damage can be inflicted before it becomes clear that the algorithm’s logic was sexist or racist or biased in some other manner—leaving the bank in legal and ethical hot water.
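
To illustrate how this happens, here is a synthetic sketch (the data, features, and model are invented, not any bank's actual system): gender is withheld from the training set entirely, yet decisions still skew by gender because a proxy feature correlated with it carries the historical bias.

```python
# Synthetic sketch: gender is never given to the model, yet predicted approval rates
# diverge because a proxy feature (a made-up "job_code") correlates with gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
gender = rng.integers(0, 2, n)                    # 0 / 1, held out of the model
job_code = gender * 0.8 + rng.normal(0, 0.3, n)   # proxy correlated with gender
income = rng.normal(50, 10, n)

# Historical labels carry the bias: approvals were lower for one group.
approved = ((income + 10 * (1 - gender) + rng.normal(0, 5, n)) > 55).astype(int)

X = np.column_stack([income, job_code])           # note: no gender column
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"predicted approval rate, group {g}: {pred[gender == g].mean():.2f}")
```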

Simply put, AI models are only as fair and unbiased as the data on which they’re built. Facebook was sued for ad discrimination after minorities, women, and the elderly were blocked from seeing certain ads. Apple was accused of being sexist over drastically higher credit card limits for men versus women. None of this behaviour was intentional, but both companies learned the hard way that they need more governance over their algorithms.

Transparency about how an AI algorithm makes predictions and decisions is becoming more and more important, especially in highly regulated industries. The EU General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and Brazil’s General Data Protection Law (LGPD) all mandate that companies be able to explain how they reach certain algorithm-based decisions about their customers, or else risk massive fines for non-compliance. These regulations are just the tip of the iceberg, as many more jurisdictions are likely to follow suit with their own guidelines.

The value that consumers place on the brands they buy from shouldn’t be underestimated either. They’ll often seek out organizations that treat their customers with great care. Companies that discriminate, whether intentionally or not, due to rogue algorithms are digging themselves into a hole they may not be able to get out of.

Given the risks, companies need a way to identify the transparency or opacity of all their predictive models so they can flag and govern those that could put them at risk. With hundreds of algorithms in production, doing this manually is impossible. Human supervision can further reduce the risk of AI models becoming discriminatory or unethical by setting guidelines for what is acceptable and what isn’t. In other words, companies need a robust quality assurance approval process for model development and execution, one augmented with AI-specific best practices to ensure a model’s results are in line with corporate policies, ethical standards, brand considerations, and regulatory requirements. Bias testing can also help the AI monitoring function identify systems that unintentionally discriminate according to gender, age, ethnicity, and so on. After all, a model’s output is a product of the data it learns from as well as the algorithms its designers equip it with.
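
One simple form such a bias test can take, sketched below with toy data and an assumed 0.8 threshold borrowed from the common “four-fifths rule,” is a disparate-impact check that compares a model’s favourable-outcome rate across protected groups and flags large gaps for review.

```python
# Simple bias check: disparate-impact ratio between protected groups.
# The 0.8 threshold follows the common four-fifths rule; adapt it to your own policy.
from collections import defaultdict

def disparate_impact(decisions, groups, favourable=1, threshold=0.8):
    """decisions: model outputs; groups: protected attribute per decision."""
    counts, favoured = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        counts[g] += 1
        favoured[g] += int(d == favourable)
    rates = {g: favoured[g] / counts[g] for g in counts}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio >= threshold

# Hypothetical usage with toy monitoring data
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates, ratio, passes = disparate_impact(decisions, groups)
print(rates, round(ratio, 2), "PASS" if passes else "FLAG FOR REVIEW")
```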

Companies that can safely deploy AI algorithms by identifying their level of transparency and constantly monitoring their use will not only mitigate potential risks and maintain regulatory compliance; they will also have peace of mind. But while an effective AI policy will enable companies to more easily comply with impending legislation, the question of when to use opaque or transparent AI isn’t just about satisfying regulators. It’s a discussion that quickly veers into ethics and morality. And the solution isn’t simply banning opaque algorithms, because, freed from the requirement to be “simple enough” to explain, they can be more powerful than their transparent brethren. While an opaque algorithm deciding who does and doesn’t get a loan would violate GDPR and equivalent legislation, it may also keep more borrowers out of financial trouble by predicting the probability of default more accurately.

Despite the concerns around ethics, AI is an important, probably critical tool in any B2C organization’s arsenal, and as AI becomes more pervasive and powerful, the onus is on company leaders to ensure it’s used responsibly. But using simple algorithms or avoiding AI altogether is not the answer because opaque AI algorithms provide too much value to organizations and customers.

The answer is to balance effectiveness with responsibility by investing in methodologies and solutions that let users create an automated AI policy that flags algorithms that put them at risk in certain areas. This will keep everyone honest and ensure no one—and nothing—goes off the rails.
