Anthropic’s Claude Code gets ‘safer’ auto mode
Summary
Anthropic has released an "auto mode" for its Claude Code tool, allowing the AI to make permission decisions on the user's behalf. The feature is intended as a safer middle ground between constant user oversight and granting the model excessive autonomy. While Claude Code can act independently, that independence carries risks: the agent could delete files, send sensitive data, or execute malicious code. Auto mode mitigates these risks by flagging and blocking potentially dangerous actions, giving the agent a chance to retry or to request user intervention. Auto mode is currently available as a research preview for Team plan users, with expansion to Enterprise and API users expected soon. Anthropic cautions that the tool is experimental and does not eliminate all risk, and advises developers to use it in isolated environments.
(Source: The Verge)