Claude users reported quality regressions across Claude Code, the Claude Agent SDK, and Claude Cowork. Anthropic traced the problem to three separate changes, confirmed that the API was not affected, and says the issues were resolved by April 20, 2026 in v2.1.116.
1. Default Reasoning Effort in Claude Code
1.1. Changing from High to Medium Had Unintended Effects
On March 4, 2026, Anthropic changed Claude Code's default reasoning effort from high to medium to reduce latency. Some users had experienced long waits that made the UI feel stuck, and medium effort appeared to offer a better latency tradeoff for many tasks.
But many users reported a noticeable drop in perceived intelligence: with less reasoning effort, the model felt less capable in exactly the coding workflows where careful reasoning matters most.
1.2. User Feedback and Rollback
After strong feedback, Anthropic concluded that users preferred higher intelligence by default and wanted to choose lower effort only for simpler work. On April 7, 2026, the default was restored: Opus 4.7 moved to xhigh, while other models returned to high.
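To make the tradeoff concrete, here is a minimal sketch of how a client could map an effort level like Claude Code's to an extended-thinking budget in a Messages API request. The effort-to-budget numbers and the model name are illustrative assumptions, not Anthropic's actual values; only the `thinking` request shape follows the documented API.

```python
# Hypothetical mapping from a Claude Code-style "reasoning effort" level
# to an extended-thinking token budget. Budget values are illustrative.
EFFORT_BUDGETS = {
    "medium": 4_096,   # lower latency, lighter reasoning
    "high": 16_384,    # the previous default
    "xhigh": 32_768,   # restored default for Opus 4.7
}

def build_request(prompt: str, effort: str = "high") -> dict:
    """Build a Messages API request body whose thinking budget is
    derived from the chosen effort level."""
    if effort not in EFFORT_BUDGETS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {
        "model": "claude-opus-4-7",  # illustrative model id
        "max_tokens": 64_000,
        "thinking": {"type": "enabled",
                     "budget_tokens": EFFORT_BUDGETS[effort]},
        "messages": [{"role": "user", "content": prompt}],
    }
```

The point of the sketch is that the default sits in product configuration, not in the model: changing one dictionary entry shifts every session's latency/quality balance.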
2. A Bug That Deleted Previous Reasoning History
2.1. A Serious Bug During Cache Optimization
A caching optimization introduced a bug that removed prior reasoning context from the conversation state. This made Claude behave as if parts of its recent thinking had vanished.

2.2. Claude with Amnesia
The practical effect was a form of amnesia. Claude could lose the thread of its own reasoning, repeat work, or make lower-quality decisions because the context it needed was no longer available.
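The failure mode can be illustrated with a hypothetical sketch (not Anthropic's actual code): a cache layer that rehydrates conversation state but filters by block type, silently discarding prior reasoning blocks.

```python
# Hypothetical illustration of the bug class: rebuilding conversation
# state from a cache while keeping only "text" blocks, which drops the
# model's earlier reasoning and produces the amnesia effect.

def rebuild_state_buggy(turns: list[dict]) -> list[dict]:
    # BUG: the filter keeps only text blocks, discarding prior thinking.
    return [t for t in turns if t["type"] == "text"]

def rebuild_state_fixed(turns: list[dict]) -> list[dict]:
    # Fix: preserve every block, including reasoning, when rehydrating.
    return list(turns)

history = [
    {"type": "thinking", "content": "Plan: refactor module A first."},
    {"type": "text", "content": "I'll start with module A."},
]

trimmed = rebuild_state_buggy(history)   # the plan is gone
restored = rebuild_state_fixed(history)  # full context intact
```

After the buggy rebuild, the next turn sees only the model's final answers, not the plan behind them, so it may re-derive decisions or contradict its own earlier reasoning.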
2.3. Detection and Fix
After investigating the reports, Anthropic identified and fixed the bug. The report frames this as a reminder that infrastructure changes can affect model behavior even when the model itself has not changed.
3. System Prompt Changes to Reduce Verbosity
3.1. Opus 4.7 Was Too Talkative
Anthropic also changed Opus 4.7's system prompt to make responses less verbose. The goal was reasonable: users wanted shorter answers in many contexts.
3.2. Coding Quality Dropped, So the Change Was Reverted
The prompt adjustment unexpectedly reduced coding quality. Once the regression was found, Anthropic rolled it back rather than keeping a terser but less useful behavior.
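A minimal sketch of the kind of change involved, with invented wording (Anthropic's actual system prompt is not public): a brevity instruction appended to the system prompt, which is trivial to toggle back off once a regression is found.

```python
# Hypothetical system-prompt variants; the text is illustrative only.
BASE_SYSTEM = "You are a coding assistant."
BREVITY_SUFFIX = " Keep responses as short as possible."

def make_system_prompt(terse: bool) -> str:
    """Return the base prompt, optionally with the brevity instruction."""
    return BASE_SYSTEM + (BREVITY_SUFFIX if terse else "")
```

Because the instruction is additive, the rollback amounts to shipping the prompt without the suffix, with no model change required.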
4. Future Improvements
Anthropic says it will strengthen evaluation and release safeguards so similar regressions are caught earlier. The important lesson is that model quality depends on the full product system: defaults, prompts, caching, context handling, and release process all matter.
The update is also a useful reminder for power users: when an AI coding tool suddenly feels different, the cause may be product configuration or infrastructure rather than a single model capability change.
