Friday, November 22, 2024
Technology

DARPA launches two-year competition to build AI-powered cyber defenses

As part of an ongoing White House initiative to make software more secure, the Defense Advanced Research Projects Agency (DARPA) plans to launch a two-year contest, the AI Cyber Challenge, that’ll task competitors with identifying and fixing software vulnerabilities using AI.

In collaboration with AI startups Anthropic and OpenAI, as well as Microsoft and Google, the AI Cyber Challenge will have U.S.-based teams compete to best secure “vital software” — specifically critical infrastructure code — using AI. With the Linux Foundation’s Open Source Security Foundation (OpenSSF) serving as a challenge advisor, $18.5 million in prizes will be awarded to the top contestants.

DARPA says that it’ll also make available $1 million each to up to seven small businesses that wish to participate.

“We want to create systems that can automatically defend any kind of software from attack,” DARPA program manager Perry Adams, who conceived of the AI Cyber Challenge, told reporters in a press briefing yesterday. “The recent gains in AI, when used responsibly, have remarkable potential for securing our code, I think.”

Adams noted that open source code is increasingly being used in critical software. A recent GitHub survey shows that a whopping 97% of apps leverage open source code, and that 90% of companies are applying or using open source code in some way. 

The proliferation of open source has led to an explosion of innovation. But it’s also opened the door to damaging new vulnerabilities and exploits. A 2023 analysis from Synopsys found that 84% of codebases contained at least one known open source vulnerability, and that 91% had outdated versions of open source components.
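Figures like these typically come from dependency audits that compare a project’s open source components against published advisories. The Python sketch below illustrates the idea only: the ADVISORIES table is a hypothetical, hand-written list, where a real scanner would query a vulnerability database such as OSV or the NVD, and the packaging library is assumed to be installed.

```python
from importlib.metadata import distributions
from packaging.version import Version

# Hypothetical advisory table for illustration: package name -> first patched version.
# A real audit would pull this data from a vulnerability database such as OSV or the NVD.
ADVISORIES = {
    "requests": "2.31.0",
    "urllib3": "1.26.18",
}

def audit_installed_packages() -> list[str]:
    """Return warnings for installed packages older than their patched release."""
    warnings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        patched = ADVISORIES.get(name)
        if patched and Version(dist.version) < Version(patched):
            warnings.append(f"{name} {dist.version} predates patched release {patched}")
    return warnings

if __name__ == "__main__":
    for warning in audit_installed_packages():
        print("WARNING:", warning)
```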

In 2022, the number of supply chain attacks — attacks on third-party, typically open source components of a larger codebase — increased 633% year-over-year, a Sonatype study found.

In the wake of high-profile incidents like the Colonial Pipeline ransomware attack, which shut down gas and oil deliveries throughout the Southeastern United States, and the SolarWinds supply chain attack, the Biden-Harris Administration last year issued an executive order to improve software supply chain security, creating a cybersecurity safety review board to analyze cyberattacks and make recommendations for future protections. And in May 2022, the White House joined the Open Source Security Foundation and the Linux Foundation in calling for $150 million in funding over two years to fix outstanding open source security problems.

But with the launch of the AI Cyber Challenge, the Biden Administration evidently believes that AI has a larger role to play in cyberdefense.

“The AI Cyber Challenge is a chance to explore what’s possible when experts in cybersecurity and AI have access to a suite of cross-company resources of combined, unprecedented caliber,” Adams said. “If we’re successful, I hope to see the AI Cyber Challenge not only produce the next generation of cybersecurity tools in this space, but show how AI can be used to better society by defending its critical underpinnings.”

While much has been written about AI’s potential to aid in cyberattacks — by generating malicious code, for example — some experts believe that AI advances could help to strengthen organizations’ cyber defenses by enabling security professionals to perform security tasks more efficiently. According to a Kroll poll of global business leaders, over half say that they’re now using AI in their latest cybersecurity efforts.
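As a rough illustration of what that kind of assistance could look like in practice, the following Python sketch asks a language model to review a deliberately vulnerable snippet. It is not an official challenge entry or any vendor’s recommended workflow; it assumes the OpenAI Python SDK is installed, that an OPENAI_API_KEY environment variable is set, and that the model name is an arbitrary placeholder.

```python
# A minimal sketch, not the challenge's tooling: ask a language model to flag likely
# vulnerabilities in a code snippet and suggest fixes.
from openai import OpenAI

VULNERABLE_SNIPPET = '''
import sqlite3

def find_user(conn, username):
    # User input is concatenated straight into the query (SQL injection).
    return conn.execute("SELECT * FROM users WHERE name = '" + username + "'").fetchall()
'''

def review_code(code: str) -> str:
    """Send a snippet to the model and return its security review as plain text."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a code security reviewer. List likely vulnerabilities "
                        "and suggest concrete fixes."},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_code(VULNERABLE_SNIPPET))
```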

Teams in the AI Cyber Challenge will participate in a qualifying event in Spring 2024, and the top scorers — up to 20 — will be invited to a semifinal competition at the annual DEF CON conference in 2024. Up to five teams will receive $2 million prizes and continue to the final phase of the competition, to be held at DEF CON 2025. And the top three in the last round will receive additional prizes, with the first-place winner receiving $4 million.

All of the winners will be asked — but not required — to open source their AI systems.

The AI Cyber Challenge builds on the White House’s previously announced model assessment at this year’s DEF CON, which aims to identify the ways in which large language models along the lines of OpenAI’s ChatGPT can be exploited for malicious intent — and, with any luck, arrive at fixes for those exploits. The assessment will also measure how the models align with the principles and practices recently outlined in the Biden-Harris administration’s blueprint for an “AI bill of rights” and the National Institute of Standards and Technology’s AI risk management framework.
