
# Why This Matters

8/28/2025 · Updated 2/12/2026
https://aihackers.net/posts/anthropic-tos-changes-2025/

## What changed in policy (late 2025)

For consumer products (Claude Free, Pro, and Max, including Claude Code when used with those accounts), Anthropic introduced a **model-improvement setting** that controls whether chats and coding sessions can be used to improve Claude. If you enable the setting, Anthropic may retain your data (de-identified) **for up to 5 years** in its training pipelines. Otherwise, deleted conversations are removed from backend systems within **30 days**. The policy applies to **new or resumed chats** after the setting is enabled; incognito chats are not used for model training.

…

## What got blocked (Jan 2026 enforcement)

Anthropic confirmed it tightened technical safeguards to prevent third-party tools from **spoofing the Claude Code client**. This severed access for tools that used consumer OAuth tokens outside official interfaces (e.g., OpenCode and similar "harnesses"); a sketch of the mechanism appears at the end of this section. Anthropic also acknowledged some **false positives** that led to account bans, which it said were being reversed.

### The economic tension: Subscription arbitrage

The enforcement action targets a fundamental economic mismatch. Anthropic's $200/month Max subscription provides effectively unlimited tokens through Claude Code, while the same usage via the API would cost **$1,000+ per month** for heavy users (the back-of-the-envelope below makes this concrete). Third-party harnesses like OpenCode stripped out Claude Code's built-in rate limits, enabling autonomous agents to run high-intensity loops overnight (coding, testing, and fixing errors) that would be cost-prohibitive on metered plans.

As one developer noted: "In a month of Claude Code, it's easy to use so many LLM tokens that it would have cost you more than $1,000 if you'd paid via the API."

By blocking these harnesses, Anthropic pushes high-volume automation toward metered API pricing or its controlled Claude Code environment.

### Precedent: This continues a pattern

January 2026's enforcement wasn't isolated. Anthropic had previously cut off competitors' access to Claude through similar technical and contractual means:

- **August 2025:** Anthropic revoked OpenAI's access to the Claude API, saying OpenAI's use of Claude for benchmarking and safety testing violated the competitive-use restrictions in its Terms of Service.
- **June 2025:** Windsurf faced a sudden blackout when Anthropic "cut off nearly all of our first-party capacity" for Claude 3.x models with less than a week's notice, forcing Windsurf to pivot to a "Bring-Your-Own-Key" model.
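The client-spoofing mechanism and its countermeasure can be sketched in a few lines. Everything below is hypothetical: the client name, header names, fingerprint values, and `InboundRequest` shape are illustrative stand-ins, not Anthropic's actual protocol. The structural point is that a harness can copy the official client's identifying headers, but it cannot easily reproduce lower-level signals such as a TLS handshake fingerprint, and that is the kind of check that would sever exactly the tools described above.

```python
"""Sketch of client spoofing vs. a server-side fingerprint check.

Hypothetical throughout: the client name, header names, and token and
fingerprint formats are stand-ins, not Anthropic's real protocol.
"""
from dataclasses import dataclass

OFFICIAL_CLIENT = "official-cli"              # hypothetical client name
OFFICIAL_TLS_FINGERPRINTS = {"ja3:abc123"}    # hypothetical known-good set


@dataclass
class InboundRequest:
    headers: dict
    tls_fingerprint: str  # e.g. a JA3-style hash of the TLS handshake


def harness_request(oauth_token: str) -> InboundRequest:
    """What a third-party harness could send: a consumer subscription
    token plus copied client-identity headers."""
    return InboundRequest(
        headers={
            "Authorization": f"Bearer {oauth_token}",   # consumer token
            "User-Agent": f"{OFFICIAL_CLIENT}/1.0.0",   # spoofed identity
        },
        # The harness's own HTTP stack produces its own TLS handshake,
        # which it cannot trivially make identical to the official client's.
        tls_fingerprint="ja3:zzz999",
    )


def accept(req: InboundRequest) -> bool:
    """Server-side safeguard: a consumer token claiming to be the official
    client must also look like it at a layer headers can't fake."""
    consumer_token = req.headers.get("Authorization", "").startswith("Bearer ")
    claims_official = OFFICIAL_CLIENT in req.headers.get("User-Agent", "")
    if consumer_token and claims_official:
        return req.tls_fingerprint in OFFICIAL_TLS_FINGERPRINTS
    return True  # non-consumer (API-key) traffic is judged by other rules


print(accept(harness_request("sub_token_123")))  # False: spoof rejected
```

The same structure also suggests where the acknowledged false positives come from: any legitimate client whose fingerprint drifts outside the known-good set (a new OS build, an unusual proxy) fails the identical check.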
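To make the subscription-arbitrage math concrete, here is a back-of-the-envelope calculation. The per-token rates are assumptions based on Anthropic's published API pricing for Claude Sonnet around this period (roughly $3 per million input tokens and $15 per million output tokens), and the monthly token volumes are an illustrative heavy-automation workload, not measured data.

```python
# Back-of-the-envelope: flat Max subscription vs. metered API pricing.
# Rates below are assumed Sonnet-class API prices (USD per million tokens);
# the token volumes are an illustrative overnight-agent workload.

INPUT_PER_M = 3.00      # assumed input rate, $/M tokens
OUTPUT_PER_M = 15.00    # assumed output rate, $/M tokens
MAX_FLAT = 200.00       # Max subscription, $/month


def monthly_api_cost(input_m: float, output_m: float) -> float:
    """Metered cost for a month's usage, in millions of tokens."""
    return input_m * INPUT_PER_M + output_m * OUTPUT_PER_M


# An agent loop re-reads large context on every iteration, so input
# dominates: e.g. 250M input + 25M output tokens over a month of runs.
cost = monthly_api_cost(input_m=250, output_m=25)
print(f"API: ${cost:,.0f}/month vs. Max: ${MAX_FLAT:,.0f}/month flat")
# -> API: $1,125/month vs. Max: $200/month flat
```

At those assumed rates, roughly 65M input tokens a month already matches the $200 flat fee, which is why removing the rate limits turned the subscription into an arbitrage target.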

# Related Pain Points (5)

### Account suspension without warning or appeals process (severity: 9)

User accounts have been suspended without warning, sometimes within minutes of deployment, with only a vague "fair use violation" email. Appeals go unanswered for weeks, leaving users locked out of production sites with no recourse or clear explanation.

Tags: security, Vercel

### Claude Pro subscription OAuth tokens blocked in third-party tools (severity: 9)

Anthropic restricted subscription OAuth tokens to work only with the official Claude Code CLI, blocking tools like OpenCode, Moltbot, and integrations in Cursor. Users who built workflows around third-party tools were locked out mid-project, forcing them to either downgrade subscriptions or abandon the platform entirely.

Tags: auth, Claude, OAuth, Claude Code CLI

### Subscription arbitrage forces high-volume users onto expensive metered API pricing (severity: 7)

Heavy Claude Code usage under the $200/month Max subscription (effectively unlimited tokens) would cost $1,000+ per month via the metered API. That gap created economic pressure to use subscription workarounds, which Anthropic now blocks, leaving no cost-effective option for autonomous agent automation.

Tags: dependency, Claude, Claude Code

### Cryptic access denial without explanation or recourse (severity: 7)

Developers experience unexplained access rejections (e.g., "not allowed" to use Gemini Pro with the CLI) despite holding valid API keys and paying for the service. No reason is given and there is no documented recourse, creating frustration and blocking workflows.

Tags: auth, Gemini CLI, Gemini API

### Data retention policy requires explicit opt-out for model training usage (severity: 6)

Anthropic retains user chat and coding session data (de-identified) for up to 5 years for model improvement unless the setting is explicitly disabled. Because the setting is enabled by default, users are opted into training-data retention unless they turn it off, and the policy applies to new or resumed chats after the setting takes effect.

Tags: security, Claude