Monday, December 23, 2024

Defining and understanding responsible AI at the company level 

For every company rushing to adopt AI, it seems there is a frightening prediction about the technology outsmarting humans or taking their jobs.

There's a good reason for all that speculation: Responsible AI hasn't been defined yet, said panelists at Fortune's Most Powerful Women Summit in Laguna Niguel, Calif.

Academics, governments, and firms simply don't know the answer yet, explained Susan Athey, former chief economist of the Antitrust Division at the U.S. Department of Justice and a professor at Stanford Graduate School of Business.

“This is going to be a research agenda that is joint between academics, government, and industry,” Athey said. “I think it’s a 10 year journey, at least, until we really get to have an answer to like, can we say that this system is deeply responsible in all of the important ways?” 

AI regulations don’t solve everything

Workday's vice president and chief privacy officer, Barbara Cosgrove, said that regulation alone won't produce responsible AI. Right now, responsible AI is defined at the company level, and much of that definition is values-driven and shaped by what can legally be done, Cosgrove said. Workday, she said, decided where it wanted to set the guardrails for responsible AI across the company and anchored its AI ethics principles at that level, so that it could then build governance programs on top of them.

“Our most important,” Cosgrove said, “is making sure that we are taking a human-centric approach, that we are not replacing humans, that humans are going to still be at the center of what we do. We’re improving experiences, we’re helping to amplify human potential, but we’re not replacing it.” 

Karin Klein, founding partner at Bloomberg Beta, explained that responsible AI means taking a step back and examining the process by which data flows through an organization and out to a customer.

“So starting with what data is being used, is there equity and transparency around the data set? How [are] the algorithms and models being created? What are the applications of the output? Then rigorously testing and taking a step back and continuing [to look] to see, are those values that are originally modeled being consistent,” Klein shared. 

Privacy risks

Establishing and understanding responsible AI also depends on the individual business and what that business is trying to accomplish, said Credit Karma's chief legal officer, Susannah Stroud Wright. We're at a point where you have to assume that something can, or will, go wrong when deciding how to develop and use AI responsibly, Klein added; it's something she looks for when backing founders.

“The world we’re living in now, with AI, you have to assume something’s gonna go awry,” Klein said. “There’s going to be a hallucination, there might be data that gets used in a way it shouldn’t. So as long as the people that you’re working with, whether it be a startup or a big company, [have] the right focus around transparency and communication, you’re going to be well positioned to navigate the challenges.”

As for the impact AI may have on privacy, the two can coexist despite what some might say, Cosgrove said; privacy laws and regulations that apply to the use of AI already exist.

“They don’t clash,” she said. “But I do often hear that ‘I’m just going to stop the use of AI because I’m worried about privacy,’ and there’s no stopping it. I mean, everybody’s organizations are already using it.”

Consumers and businesses both want privacy, and businesses that are themselves AI customers are demanding it, Athey said, so the market is responding. The companies promising privacy still have to deliver on those promises, she said, but the demand is moving the privacy discussion around AI forward.
