Friday, November 22, 2024
Technology

Making AI trustworthy: Can we overcome black-box hallucinations?

Like most engineers, as a kid I could answer elementary school math problems by just filling in the answers.

But when I didn’t “show my work,” my teachers would dock points; the right answer wasn’t worth much without an explanation. Yet, those lofty standards for explainability in long division somehow don’t seem to apply to AI systems, even those making crucial, life-impacting decisions.

The major AI players that fill today’s headlines and feed stock market frenzies — OpenAI, Google, Microsoft — operate their platforms on black-box models. A query goes in one side and an answer spits out the other side, but we have no idea what data or reasoning the AI used to provide that answer.

Most of these black-box AI platforms are built on a decades-old technology framework called a “neural network.” These AI models are abstract representations of the vast amounts of data on which they are trained; they are not directly connected to their training data. Thus, black-box AIs infer and extrapolate based on what they believe to be the most likely answer, not on actual data.

Sometimes this complex predictive process spirals out of control and the AI “hallucinates.” Black-box AI is inherently untrustworthy because it cannot be held accountable for its actions. If you can’t see why or how the AI makes a prediction, you have no way of knowing whether it used false, compromised, or biased information or algorithms to reach that conclusion.

While neural networks are incredibly powerful and here to stay, there is another under-the-radar AI framework gaining prominence: instance-based learning (IBL). And it’s everything neural networks are not. IBL is AI that users can trust, audit, and explain. IBL traces every single decision back to the training data used to reach it.


IBL can explain every decision because the AI does not generate an abstract model of the data, but instead makes decisions from the data itself. And users can audit AI built on IBL, interrogating it to find out why and how it made decisions, and then intervening to correct mistakes or bias.

This all works because IBL stores training data (“instances”) in memory and, following the principles of “nearest neighbors,” makes predictions about new instances based on their proximity to existing instances. IBL is data-centric, so individual data points can be directly compared against each other to gain insight into the dataset and the predictions. In other words, IBL “shows its work,” as the sketch below illustrates.
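To make that concrete, here is a minimal nearest-neighbors sketch in Python. It is illustrative only: the class name, the toy screening dataset, and the choice of Euclidean distance with a simple majority vote are assumptions for demonstration, not the design of any particular IBL product. The point is that the prediction comes back alongside the exact stored instances that produced it.

```python
from collections import Counter
import math

class InstanceBasedClassifier:
    """A toy instance-based learner: memorize the data, predict by nearest neighbors."""

    def __init__(self, k=3):
        self.k = k
        self.instances = []  # stored training data as (features, label) pairs

    def fit(self, features, labels):
        # "Training" is just storing the instances verbatim.
        self.instances = list(zip(features, labels))

    def predict(self, query):
        # Rank every stored instance by its distance to the query point.
        ranked = sorted(self.instances, key=lambda inst: math.dist(query, inst[0]))
        neighbors = ranked[: self.k]
        # Majority vote among the k nearest instances decides the label.
        label = Counter(lbl for _, lbl in neighbors).most_common(1)[0][0]
        # Return the supporting instances too: every prediction is
        # traceable to the specific training rows that produced it.
        return label, neighbors

# Hypothetical screening dataset: (years_experience, skills_score) -> outcome.
clf = InstanceBasedClassifier(k=3)
clf.fit(
    [(1, 40), (2, 55), (6, 80), (8, 90), (7, 70)],
    ["reject", "reject", "advance", "advance", "advance"],
)
decision, evidence = clf.predict((5, 75))
print(decision)  # "advance"
print(evidence)  # the exact stored instances behind the decision
```

Auditing here is just inspecting the returned evidence: a reviewer can see exactly which past cases drove the outcome and intervene if those cases look mistaken or biased.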

The potential for such understandable AI is clear. Companies, governments, and any other regulated entities that want to deploy AI in a trustworthy, explainable, and auditable way could use IBL AI to meet regulatory and compliance standards. IBL AI will also be particularly useful for any applications where bias allegations are rampant — hiring, college admissions, legal cases, and so on.

