Futures
Access hundreds of perpetual contracts
TradFi
Gold
One platform for global traditional assets
Options
Hot
Trade European-style vanilla options
Unified Account
Maximize your capital efficiency
Demo Trading
Introduction to Futures Trading
Learn the basics of futures trading
Futures Events
Join events to earn rewards
Demo Trading
Use virtual funds to practice risk-free trading
Launch
CandyDrop
Collect candies to earn airdrops
Launchpool
Quick staking, earn potential new tokens
HODLer Airdrop
Hold GT and get massive airdrops for free
Launchpad
Be early to the next big token project
Alpha Points
Trade on-chain assets and earn airdrops
Futures Points
Earn futures points and claim airdrop rewards
AI Autonomous Decision-Making Expands, Anthropic Introduces Auto Mode for Claude Code
Anthropic is empowering its AI programming tools with greater autonomy while seeking a balance between efficiency and safety.
On March 24, Anthropic announced the launch of “Auto Mode” for Claude Code, allowing AI to determine which operations can be executed directly without waiting for user confirmation.
This feature is currently available in research preview for team plan users and will be expanded to enterprise and API users in the coming days.
The core of the new feature is an integrated safety mechanism, where each operation is reviewed by an AI safety layer before execution. The system will automatically approve operations deemed safe and intercept risky actions.
Anthropic states that this safety layer can also detect prompt injection attacks, where malicious instructions are hidden within the content the AI is processing, attempting to induce the model to perform unintended actions.
The company recommends users operate this new feature in isolated sandbox environments to prevent potential risks from spreading to production systems.
Developer Pain Points Drive Product Iteration
For developers currently using AI programming tools, a common dilemma is either to supervise every step of the AI’s actions or to let the model run freely, risking unpredictable outcomes.
Anthropic’s Auto Mode is essentially an upgrade and extension of Claude Code’s existing “dangerously-skip-permissions” command, which no longer requests user confirmation.
Originally, this command delegated all decision-making to the AI, but the new mode adds a safety filtering layer on top.
By allowing AI, rather than users, to decide when permissions are needed, Anthropic aims to provide higher security without sacrificing execution efficiency.
Companies like GitHub and OpenAI have already launched autonomous programming tools that can perform tasks on behalf of developers. Anthropic’s move further advances this trend, shifting permission decision-making from users to the AI itself.
The release of Auto Mode follows a series of recent product updates from Anthropic, including Claude Code Review, which automatically detects defects before code merges, and Dispatch for Cowork, which allows users to delegate tasks to AI agents.
This series of developments indicates that Anthropic is systematically building a product matrix of autonomous AI workflows targeted at enterprise developers.
Key Details Still Unclear
However, there are still uncertainties worth noting.
Anthropic has not publicly disclosed the specific standards used by its safety layer to assess operational risk levels, which is crucial information developers need before large-scale adoption.
Additionally, Auto Mode currently only supports Claude Sonnet 4.6 and Opus 4.6 models, and remains in research preview, meaning the product is not yet finalized.
For enterprise users considering deployment in production environments, these limitations and the lack of transparency may be important factors in their cautious evaluation.
Risk Warning and Disclaimer
Market risks are present; invest cautiously. This article does not constitute personal investment advice and does not consider individual users’ specific investment goals, financial situations, or needs. Users should consider whether any opinions, viewpoints, or conclusions herein are suitable for their particular circumstances. Invest at your own risk.