Anthropic has launched Code Review in Claude Code, a new multi-agent system that tries to catch bugs before a human reviewer even sees the code. Available now to Claude Team and Enterprise users in the Claude Code web interface, Code Review is a feature that admins can enable per repository; once enabled, it runs in the cloud whenever a pull request is opened. Anthropic already offered code review through Claude Code's GitHub Actions integration.
Cat Wu, the head of product for Claude Code at Anthropic, notes how important some degree of automation for code review has become.
“As people adopt Claude Code, we’ve been noticing that people are writing a lot more PRs than they used to,” she says in an interview with The New Stack. “What that often means is the burden shifts onto the code reviewer, because it only takes one engineer, one prompt, to put out a plausible-looking PR. And then the code reviewer needs to spend a bunch of time verifying all the edge cases.”
Code Review will dispatch a team of agents who work in parallel, each looking for different types of errors. Once done, they’ll leave a comment with their conclusions and suggest a solution if they find any issues. The agents will not approve any pull requests, though. That’s still the human engineer’s call.
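Anthropic hasn't published Code Review's internals, but the pattern it describes is a familiar one: run several specialized checkers over the same diff in parallel, then merge their findings into a single comment while leaving the approval decision to a human. A minimal sketch of that pattern, with purely illustrative agent functions standing in for the real LLM-backed agents:

```python
# Hypothetical sketch of a parallel multi-agent review pipeline.
# Each "agent" scans the diff for one class of issue; findings are
# merged into one review comment. All names and checks here are
# illustrative placeholders, not Anthropic's actual implementation.
from concurrent.futures import ThreadPoolExecutor

def security_agent(diff: str) -> list[str]:
    # Placeholder check: flag lines that look like hardcoded secrets.
    return [f"possible secret: {line.strip()!r}"
            for line in diff.splitlines() if "API_KEY" in line]

def logic_agent(diff: str) -> list[str]:
    # Placeholder check: flag bare except clauses that swallow errors.
    return [f"bare except: {line.strip()!r}"
            for line in diff.splitlines() if line.strip() == "except:"]

def review(diff: str) -> str:
    agents = [security_agent, logic_agent]
    # Dispatch all agents over the same diff in parallel.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda agent: agent(diff), agents)
    findings = [f for result in results for f in result]
    if not findings:
        return "No issues found."
    # Agents only comment; merging remains the human engineer's call.
    return "Issues found:\n" + "\n".join(f"- {f}" for f in findings)

diff = 'API_KEY = "abc123"\ntry:\n    run()\nexcept:\n    pass'
print(review(diff))
```

The key design point the article highlights survives even in this toy version: the pipeline's output is a comment with findings and nothing more, so the merge button stays with the reviewer.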