Monday, November 25, 2024

Capturing AI benefits: How to balance risk and opportunity

It has been estimated that by 2030, up to 30% of the hours Americans currently work could be automated, a trend accelerated by the generative artificial intelligence tools that began to surface late last year.

But whose responsibility is it to determine how the latest generative AI innovations are deployed across the economy in the years to come? And how can that be handled responsibly? Experts say all of us are accountable.

“We need a myriad of skill sets to be able to do this work well,” said Lara Liss, chief privacy officer at Walgreens Boots Alliance, speaking at a Fortune Brainstorm AI virtual conversation on Thursday, which centered on how to balance the risks and opportunities of AI.

Liss said that when it comes to responsible AI, people should raise their hands and get more involved now. And this work isn't just for AI experts or computer scientists: input should come from across the organization, from programmers and finance to marketing and HR.

“They need to have a baseline understanding of how responsible AI works and what they need to be looking at within their organization,” said Liss.

Along those lines, the National Institute of Standards and Technology (NIST) in January released an AI risk management framework that lays out 72 outcomes organizations should aspire to meet in order to reduce discrimination, make systems more transparent, and, overall, ensure that the AI systems they build are trustworthy.

“Ultimately, what we’re trying to get organizations to do is change their culture to be more in line with considerations of risks for AI, just like we are right now for cyber risk or privacy risk,” said Reva Schwartz, research scientist and principal investigator for AI Bias at NIST.

Research by consulting giant Accenture shows that 95% of executives believe that risk management for AI should be one of their top priorities, but only 6% of organizations believe they are ready with a responsible AI foundation to help them navigate the fast-developing technology.

Accenture believes that the top generative AI risks include the impact on the workforce; privacy and intellectual property; bias; and hallucinations, instances in which large language models confidently generate inaccurate or fabricated information that can then feed flawed decisions.

Arnab Chakraborty, Accenture’s senior managing director of global responsible AI, said the ultimate responsibility of assessing where AI is used within an organization must start with the CEO. At Accenture, AI decisions come from a steering committee that includes the company’s CEO, chief technology officer, general counsel, and chief operating officer. 

“This has to happen at the board level, at the CEO and executive committee level, and this is more for them to understand, appreciate, and be the active sponsors walking the talk,” said Chakraborty.

“Without strong governance, it cannot stick and take hold,” said Schwartz.

Liss echoed the sentiment shared by Chakraborty and Schwartz. “People take this responsibility incredibly seriously,” said Liss. “And I think you’re hearing in boardrooms, in C-suites, and throughout companies at all levels that people understand that we need to get this right.” 

Amy Tong, California’s secretary of government operations, said generative AI has the potential to drive innovation that would benefit the biggest state economy in the U.S. “It has the potential to really push the bounds of human creativity and capacity,” said Tong. “But we have to do it in a very responsible and measured manner.” 

In September, California Gov. Gavin Newsom signed an executive order to study the development, use, and risks of AI and to create a process for evaluating and deploying the technology within state government.

“We recognize both the potential benefits and risks these tools enable,” said Newsom when announcing the executive order. “We’re neither frozen by the fears nor hypnotized by the upside.”

Alongside the advancements that generative AI will bring to business come fears about how it will affect workers. Nearly four out of 10 U.S. workers worry that AI may take over some, or all, of their job responsibilities.

“When it comes to the workforce, we should have the recognition that there is uncertainty,” said Tong.

Experts at Fortune’s event unanimously agreed that one key way to address concerns about the emerging technology and the workforce is to focus on training. 

“It’s important for companies to be looking at a few levels of training,” said Liss. “The first is, what globally does the entire workforce need to understand about responsible AI and how AI will be developed within the organization.” 

There’s also targeted training, which would include compliance frameworks and training for data scientists to ensure they have the testing and technical skills needed to work on AI models. That said, not every worker will need to become an AI expert.

“This idea that we all need to learn about the intricate goings-on inside a machine learning model isn’t really necessarily going to be useful,” said Schwartz. “A better use of time is to make sure that everybody in the enterprise is familiar with responsible AI practices.”
