GitHub’s latest AI tool can automatically fix code vulnerabilities
It’s a bad day for bugs. Earlier today, Sentry announced its AI Autofix feature for debugging production code, and now, a few hours later, GitHub is launching the first beta of its code scanning autofix feature for finding and fixing security vulnerabilities during the coding process. This new feature combines the real-time capabilities of GitHub’s Copilot with CodeQL, the company’s semantic code analysis engine. The company first previewed this capability last November.
GitHub promises that this new system can remediate more than two-thirds of the vulnerabilities it finds — often without the developers having to edit any code themselves. The company also promises that code scanning autofix will cover more than 90% of alert types in the languages it supports, which are currently JavaScript, TypeScript, Java, and Python.
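To make the idea concrete, here is a hypothetical sketch (not actual autofix output) of the kind of change such a tool proposes for one common alert type, a SQL injection in Python. The vulnerable line builds a query via string interpolation; the fix swaps it for a parameterized query:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Vulnerable version: string interpolation lets attacker-controlled
    # input become part of the SQL statement.
    #   query = f"SELECT id FROM users WHERE name = '{username}'"
    # Fixed version: a parameterized query passes the input as data,
    # so it is never interpreted as SQL.
    query = "SELECT id FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

With the fix applied, a classic injection payload like `"x' OR '1'='1"` simply matches no rows instead of dumping the table.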
This new feature is now available for all GitHub Advanced Security (GHAS) customers.
“Just as GitHub Copilot relieves developers of tedious and repetitive tasks, code scanning autofix will help development teams reclaim time formerly spent on remediation,” GitHub writes in today’s announcement. “Security teams will also benefit from a reduced volume of everyday vulnerabilities, so they can focus on strategies to protect the business while keeping up with an accelerated pace of development.”
In the background, this new feature uses CodeQL, GitHub’s semantic analysis engine, to find vulnerabilities in code before it has even been executed. The company made the first generation of CodeQL publicly available in late 2019, after acquiring the code analysis startup Semmle, where CodeQL was incubated. Over the years, it made a number of improvements to CodeQL, but one thing never changed: CodeQL remained free only for researchers and open source developers.
Now CodeQL is at the center of this new tool, though GitHub also notes that it uses “a combination of heuristics and GitHub Copilot APIs” to suggest its fixes. To generate the fixes and their explanations, GitHub uses OpenAI’s GPT-4 model. And while GitHub is clearly confident enough to suggest that the vast majority of autofix suggestions will be correct, the company does note that “a small percentage of suggested fixes will reflect a significant misunderstanding of the codebase or the vulnerability.”