Anthropic Adds Auto Security Reviews to Claude Code

Anthropic's new automated security features for Claude Code aim to keep pace with vulnerabilities as AI tools exponentially increase code production.

Anthropic has launched automated security reviews for Claude Code, its command-line AI coding assistant, addressing growing concerns about maintaining code security as AI dramatically accelerates software development.

The new capabilities include a terminal-based security scanning command and automated GitHub pull request reviews, representing what Logan Graham, head of Anthropic’s frontier red team, calls “the starting point of helping developers make their code really, really secure without basically even trying.”

The centrepiece of the update is a new /security-review command that developers can run directly from their terminal before committing code. The feature scans for common vulnerabilities, including SQL injection, cross-site scripting (XSS), authentication flaws, insecure data handling and dependency vulnerabilities, Anthropic stated in a blog post.
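In practice the check is meant to slot into the normal edit-commit loop. A sketch of that flow is below; the command name comes from Anthropic's announcement, but the session output is omitted and the follow-up fix prompt (including the "login handler" it mentions) is purely illustrative:

```
# Launch Claude Code in the project directory
claude

# Inside the interactive session, before committing:
> /security-review

# If the scan reports findings, ask for fixes in the same session, e.g.:
> Fix the SQL injection issue you flagged in the login handler
```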

“It’s literally like 10 keystrokes and you get basically a senior security engineer over your shoulder,” said Graham, who led development of the security features.

“It should be basically effortless for somebody to make code that they are writing, or code that they’re writing with Claude extremely secure, or it should just automatically happen for them,” he added.

After identifying issues, developers can ask Claude Code to automatically implement fixes, keeping security reviews within what Graham calls the “inner development loop,” where problems are easiest and cheapest to address.

The second major feature is a GitHub Action that automatically reviews every pull request for security vulnerabilities. 

Once configured by security teams, the system automatically triggers on new pull requests, reviews code changes for security vulnerabilities, applies customizable rules to filter false positives, and posts comments inline on the pull requests with specific concerns and recommended fixes.

“This creates a consistent security review process across your entire team, ensuring no code reaches production without a baseline security review,” Anthropic stated in the blog post. “The action integrates with your existing CI/CD pipeline and can be customised to match your team’s security policies.”
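Like any GitHub Action, the reviewer would be wired up through a workflow file in the repository. The sketch below is illustrative only: Anthropic's documentation describes the action but the action reference, version tag, input names, and secret name shown here are assumptions, not confirmed values.

```yaml
# .github/workflows/security-review.yml
# Illustrative sketch only — the action reference and inputs below are
# placeholders; consult Anthropic's documentation for the real names.
name: Security Review

on:
  pull_request:            # trigger on every new or updated pull request

permissions:
  contents: read
  pull-requests: write     # required to post inline review comments

jobs:
  security-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical action reference and input names:
      - uses: anthropics/claude-code-security-review@main
        with:
          claude-api-key: ${{ secrets.CLAUDE_API_KEY }}
```

Customisable false-positive filtering rules, mentioned in the blog post, would be layered on top of a base configuration like this per the documentation.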

Anthropic has been testing these features internally. Graham said the company has caught several production vulnerabilities before they shipped, including a remote code execution vulnerability exploitable through DNS rebinding in a local HTTP server, and an SSRF attack vulnerability in an internal credential management system.

“Since setting up the GitHub Action, this has already caught security vulnerabilities in our own code and prevented them from being shipped and reaching our users,” the recent blog post stated.

The security features address what Graham sees as an emerging crisis in software security. As AI tools become more prevalent, the volume of code being produced is exploding.

“Models are now writing an extreme amount of code,” Graham said. “I think it’s really possible over the next, like, year or two years, you end up 10x-ing or 100x-ing or 1,000x-ing the amount of code that exists in the world. The only way to keep up with that is through models.”

“This dramatic increase in code volume makes traditional human-led security reviews impractical at scale,” said Graham. 

“Right now, it requires humans to review everything to make sure it’s secure, and if we really want models to be working on the highest value things in the world, we need to figure out a way to make all the code that comes out just as secure and ideally much more.”

Beyond addressing scale, the features aim to democratize access to security expertise. Graham explained that the tools could benefit smaller development teams that lack dedicated security engineers or budgets for expensive security software.

“We’re democratizing security review to, you know, the one-person shop that is building something exciting and doesn’t have a security engineer, can’t pay for the license for the software,” said Graham. “They will probably get bigger, faster and more reliably if they start using these tools.”

The security features originated as an internal hackathon project at Anthropic, where the security team was building tools to maintain what Graham describes as “frontier-class security” for the AI company. 

When the tool began finding issues in Anthropic’s own code before release, the team decided to make it available to all Claude Code users.

“This started as a hackathon project. It already started finding like issues or flaws in our code before we were releasing it to ourselves,” explained Graham. “And we thought, this is super, super useful. This really aligned with the mission. Why don’t we just make it available to everybody in Claude Code?”

The security announcement continues Anthropic’s recent push to make Claude Code more enterprise-ready. In the past month alone, the company has shipped subagents, analytics dashboards for administrators, native Windows support, Hooks and multidirectory support.

This pace of innovation indicates that Anthropic’s broader ambition is to position Claude Code as essential infrastructure for development teams, moving beyond simple code generation to comprehensive development workflow integration.

Graham attributed this velocity to the company’s concentration of talent: “Anthropic is insanely talent dense, and the stuff that we pull off for the size that we are is, like, honestly, very, very amazing today,” he said.

Graham said the security features are just the beginning of a transformation in how software development and security intersect with AI.

“The broad belief that we have is models, over time, will basically do everything in a very agentic way,” said Graham, suggesting that AI agents will increasingly handle complex, multistep development tasks autonomously.

Both the /security-review command and GitHub Action are available immediately to all Claude Code users. The terminal command requires updating to the latest version of Claude Code, while the GitHub Action requires manual setup following Anthropic’s documentation.

For developers and organisations grappling with the security implications of AI-accelerated development, these tools represent an early attempt to ensure that the benefits of AI coding assistance don’t come at the cost of application security.

Staff Writer
The AI & Data Insider team works with a staff of in-house writers and industry experts.
