Monday, December 23, 2024
Business

SoftBank, Mastercard, and Anthropic cyber chiefs sound alarms on AI phishing and deepfakes—but those aren’t the only things keeping them up at night

When over 100 top cybersecurity leaders gathered in July at a retreat center in the California redwoods near San Jose, the serene sounds of rustling needles did not detract from discussions about how to deal with the latest AI-driven threats. 

Team8, the venture capital firm behind the event, surveyed the group—including Fortune 500 CISOs—and found that AI-powered phishing attacks and the rise of deepfakes had emerged as top concerns, just a year after many in the cohort had hoped generative AI would be nothing more than a passing fad. Three-quarters said that fighting AI phishing attacks (phishing campaigns that use AI to make email, text, or messaging scams more sophisticated, personalized, and difficult to detect) had become an uphill battle. Over half said deepfakes, or AI-generated video or audio impersonations, were becoming an increasingly common threat. 

However, Fortune spoke exclusively to several retreat attendees who said that while AI phishing and deepfakes certainly rank highly as current cybersecurity concerns, there are other issues keeping them up at night when it comes to the growing risks of AI-related cyber attacks on their companies. 

Company data exposed and even creepier deepfake scams

Gary Hayslip, chief security officer at investment holding company SoftBank, said one of his biggest concerns is how to protect private company data from supply chain attacks in the age of AI: that is, dealing with risks from third-party vendors that have added generative AI features to their tools but have not implemented the necessary governance around the use of SoftBank's data. 

“There are good solid vendors…coming up with their own generative AI piece that’s now available with this tool you’ve been using for the last three years,” he said. “That’s cool, but what is it doing with the data? What is the data interacting with?” Organizations need to ask these questions as though they were quizzing a teenager who wants to download apps onto their smartphone, he added. 

“You have to be a little paranoid,” he said, adding that a company can’t “just open up the gate and let thousands of apps come in and data just goes flying everywhere that’s totally unmanned.” 

Adam Zoller, CISO at Providence Health & Services, a not-for-profit healthcare system headquartered in Renton, Wash., agreed that protecting company data and systems while using third-party AI tools is his biggest security headache right now, particularly in a highly-regulated industry like healthcare. Suppliers may integrate LLMs into existing healthcare software platforms or biomedical devices and may not take security issues as seriously as they should, he explained. 

“Some of these capabilities are either deployed without our knowledge, like in the background as a software update,” he said, adding that he often has to have a conversation with business leaders, letting them know that using certain tools creates “an unacceptable risk.” 

Other security leaders are particularly worried about how quickly current attacks are evolving. For example, while deepfakes are already convincing, Alissa Abdullah, deputy CSO at Mastercard, said she was very concerned about new deepfake scams likely to emerge over the coming year. These would use AI video and audio to impersonate not someone the victim knows, but rather a stranger from a trusted brand, such as a help desk representative from a favorite company. 

“They will call you and say, ‘we need to authenticate you into our system,’ and ask for $20 to remove the ‘fraud alert’ that was on my account,” she said. “No longer is it wanting $20 billion in Bitcoin, but $20 from 1,000 people, small amounts that even people like my mother would be happy to say ‘let me just give it to you.’” 

The exponential upward curve of AI capabilities

For CISOs at companies developing the most advanced AI models, planning for future risks is essential. Jason Clinton, chief information security officer at Anthropic, spoke at the Team8 event, emphasizing to the group that it’s the consequences of “the scaling law hypothesis” that worry him the most. This hypothesis suggests that increasing the size of an AI model, the amount of data it is fed, and the computing power used to train the model necessarily leads to a consistent and, to some extent, predictable increase in the model’s capabilities.

“I don’t think that [the CISOs] fully internalized this,” he said of understanding the exponential upward curve of AI capabilities. “If you’re trying to plan for an enterprise strategy for cyber that just is based on what exists today, then you’re going to be behind. A year from now, it’s going to be 4x year over year increase in computing power.”

That said, Clinton said that he is “cautiously optimistic” that improvements in AI will help defenders respond to AI-powered attacks. “I do think we have a defender’s advantage, and so there’s not really a need to be pessimistic,” he said. “We are finding vulnerabilities faster than any attacker that I’m aware of.” In addition, the recent DARPA AI Cyber Challenge showed that developers could create new generative AI systems to safeguard the critical software that undergirds everything from financial systems and hospitals to public utilities. 

“The economics and the investment and the technologies seem to be favoring folks who are trying to do the right thing on the defender side,” he said. 

An AI ‘cold war’

SoftBank’s Hayslip agreed that defenders can stay ahead of AI-powered attacks on companies, calling it a kind of ‘cold war.’

“You’ve got the criminal entities moving very quickly, using AI to come up with new types of threats and methodologies to make money,” he said. “That, in turn, pushes back on us with the breaches and the incidents that we have, which pushes us to develop new technologies.” 

The good news, he said, is that while a year ago there were only a couple of startups focused on monitoring generative AI tools or providing security against AI attackers, this year there were dozens. “I can’t even imagine what I’ll see next year,” he said. 

But companies have their work cut out for them, as the threats are definitely escalating, he said, adding that security leaders cannot hide from what is coming.

“I know that there is a camp of CISOs that want to scream, and they’re trying to stop it or slow it down,” he said. “In a way it’s like a tidal wave and whether they like it or not, it’s coming hard, because [AI threats] are maturing and growing that fast.” 

