Claude now reviews code.

Paul Krill

During its research preview, Code Review runs a team of automated agents that hunt for bugs in parallel, verify findings to filter out false positives, and rank them by severity.


Anthropic has announced Code Review for Claude Code, a new capability that performs deep, multi-agent code reviews to catch bugs human reviewers often miss.

Launched on March 9, Code Review is available as a research preview to users of Claude for Teams and Claude for Enterprise. When a pull request is opened, Code Review orchestrates a team of agents that hunt for bugs in parallel, validate findings to weed out false positives, and rank issues by severity, Anthropic said. Results appear in the pull request as a single high-signal summary comment, along with inline comments on specific issues. A typical review takes about 20 minutes, according to Anthropic.
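Anthropic has not published implementation details, but the workflow it describes can be sketched as a pipeline: parallel finder agents, a validation pass, severity ranking, and a single summary comment plus inline notes. The sketch below is purely illustrative; the names (`Finding`, `review`, the toy agents) are hypothetical and not Anthropic's API:

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    message: str
    severity: str  # "high", "medium", or "low"

SEVERITY_RANK = {"high": 0, "medium": 1, "low": 2}

def review(diff_lines, finder_agents, validate):
    """Run finder agents in parallel, validate their candidates,
    rank confirmed issues by severity, and build the review output."""
    # 1. Finder agents scan the change set concurrently.
    with ThreadPoolExecutor() as pool:
        per_agent = pool.map(lambda agent: agent(diff_lines), finder_agents)
    candidates = [f for findings in per_agent for f in findings]
    # 2. A validation pass weeds out false positives.
    confirmed = [f for f in candidates if validate(diff_lines, f)]
    # 3. Rank what survives by severity.
    confirmed.sort(key=lambda f: SEVERITY_RANK[f.severity])
    # 4. One summary comment plus inline notes per issue.
    summary = f"Code review found {len(confirmed)} issue(s)."
    inline = {(f.file, f.line): f.message for f in confirmed}
    return summary, inline

# Toy finder agents standing in for model-driven reviewers.
def null_agent(diff_lines):
    return [Finding("app.py", 12, "possible None dereference", "high")]

def style_agent(diff_lines):
    return [Finding("app.py", 3, "unused import", "low"),
            Finding("app.py", 99, "issue outside the diff", "low")]

# Toy validation: only confirm findings on lines the PR actually touched.
def validate(diff_lines, finding):
    return finding.line in diff_lines

summary, inline = review({3, 12}, [null_agent, style_agent], validate)
print(summary)  # -> Code review found 2 issue(s).
```

The validation step is what filters out the phantom finding on line 99, mirroring the false-positive screening Anthropic describes.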

Anthropic has used Code Review internally for several months. On large pull requests (more than 1,000 changed lines), 84% of reviews surface findings, with an average of 7.5 issues; on small pull requests under 50 lines, the rate drops to 31%, with an average of 0.5 issues. Anthropic says its engineers largely agree with Code Review's findings, marking less than 1% of flagged issues as false positives.
