Anthropic launches code review tool to check flood of AI-generated code

TechCrunch
Anthropic has released Code Review, an AI tool integrated with Claude Code, to efficiently review the growing volume of AI-generated code and flag potential bugs.

Summary

Anthropic has launched Code Review, a new AI-powered tool designed to address the challenge of reviewing the rapidly growing volume of code produced by AI tools such as Claude Code. The tool integrates with GitHub and automatically analyzes pull requests, giving developers detailed feedback on potential logical errors and security vulnerabilities along with suggested fixes. Code Review uses a multi-agent architecture to examine code from multiple perspectives and prioritizes issues by severity (red, yellow, purple). Although the tool is resource-intensive and priced per token (an estimated $15–$25 per review), Anthropic argues it is essential for enterprises that rely on AI for code generation, enabling faster development cycles and fewer bugs. The launch comes as Anthropic faces legal challenges over its designation as a supply chain risk and continues to see strong growth in its enterprise business, with Claude Code's revenue exceeding $2.5 billion.

(Source: TechCrunch)