Friday, November 22, 2024
Business

The next AI winter could be caused by users’ trust issues—and ‘mindful friction’ can keep it from happening

The lightning-fast advancements in AI have necessitated some guardrails and a developing philosophy on how to ethically incorporate the technology into the workplace. AI should play the role of co-pilot alongside humans—not exist on autopilot, Paula Goldman, Salesforce's chief ethical and humane use officer, said during Fortune's Brainstorm AI conference in London on Monday.

“We need next-level controls. We need people to be able to understand what’s going on across the AI system,” she told Fortune’s Executive News Editor Nick Lichtenberg. “And most importantly, we need to be designing AI products that take into account what AI is good at and bad at—but also what people are good at and bad at in their own decision-making judgments.”

Chief among the growing body of user concerns, Goldman worries about whether AI can generate trustworthy content: output free from racial and gender bias, and free of harmful user-generated material such as deepfakes. She warns that unethical applications of AI could curtail the technology's funding and development.

“It’s possible that the next AI winter is caused by trust issues or people-adoption issues with AI,” Goldman said.

The future of AI productivity gains in the workplace will be driven by training and people's willingness to adopt new technologies, she said. To foster trust in AI products—particularly among the employees using them—Goldman suggests implementing "mindful friction," essentially a series of checks and balances to ensure AI tools in the workplace do more good than harm.

What Salesforce has done to implement ‘mindful friction’

Salesforce has started building checks on potential bias into its own use of AI. The software giant has developed a marketing segmentation product that generates candidate demographics for email campaigns: the AI program proposes a list of potential demographics, but it is the human's job to select the appropriate ones so that relevant recipients aren't excluded. Similarly, the company pops up a warning toggle on generative models on its Einstein platform that incorporate zip or postal codes, which are often correlated with race or socioeconomic status.

“Increasingly, we’re heading toward systems that can detect anomalies like that and encourage and prompt the humans to take a second look at it,” Goldman said.
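As a rough illustration of the idea—and not Salesforce's actual implementation—a "mindful friction" check might pause automation whenever an AI-suggested audience segment relies on proxy fields such as postal codes, and require an explicit human sign-off before the suggestion is used. The field names and workflow in the sketch below are hypothetical.

```python
# Illustrative sketch of a "mindful friction" check; field names and
# workflow are hypothetical, not Salesforce's actual product code.

# Fields that often act as proxies for race or socioeconomic status.
SENSITIVE_PROXY_FIELDS = {"zip_code", "postal_code"}


def review_segment(segment_fields, approve_callback):
    """Return the fields to use for a campaign segment.

    If the AI-suggested segment relies on proxy fields, pause and ask a
    human reviewer (via approve_callback) to confirm or drop each one,
    rather than applying the suggestion automatically.
    """
    flagged = {f for f in segment_fields if f in SENSITIVE_PROXY_FIELDS}
    if not flagged:
        return list(segment_fields)  # no friction needed

    approved = []
    for field in segment_fields:
        if field in flagged:
            # Mindful friction: require an explicit human decision.
            if approve_callback(field):
                approved.append(field)
        else:
            approved.append(field)
    return approved


if __name__ == "__main__":
    suggested = ["age_band", "zip_code", "purchase_history"]
    # A real system would route this to a reviewer UI; here we just prompt.
    keep = review_segment(
        suggested,
        approve_callback=lambda f: input(f"Keep proxy field '{f}'? [y/N] ").lower() == "y",
    )
    print("Segment will use:", keep)
```

The point of the sketch is the interruption itself: the system does not silently drop or silently keep the sensitive field, it stops and asks a person, which is the "second look" Goldman describes.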

In the past, biases and copyright infringements have rocked trust in AI. An MIT Media Lab study found that AI software programmed to identify the race and gender of different people had less than a 1% error rate in identifying light-skinned men but a 35% error rate in identifying dark-skinned women, including well-known figures such as Oprah Winfrey and Michelle Obama. High-stakes uses of facial recognition, such as equipping drones or body cameras with software to carry out lethal attacks, are compromised by those inaccuracies, said Joy Buolamwini, the study's author. Similarly, algorithmic biases in health care databases can lead AI software to suggest inappropriate treatment plans for certain patients, the Yale School of Medicine found.

Even in industries without lives on the line, AI applications have raised ethical concerns, including OpenAI's scraping of hours of user-generated YouTube content, potentially violating content creators' copyrights without their consent. Given its spread of misinformation and inability to complete basic tasks, AI has a long way to go before it can fulfill its potential as a helpful tool for humans, Goldman said.

But designing smarter AI features and human-led failsafes to bolster trust is what Goldman finds most exciting about the future of the industry. 

"How do you design products that you know what to trust and where you should take a second look and apply human judgment?" she said.

