Anthropic just dropped a new model. It’s better than the last one on paper, and a chunk of its most loyal users are still furious about the one that came before it.
That’s where things stand with Claude Opus 4.7, which launched this week. The model itself is a genuine step forward. The timing, though, could not be more complicated. Between a major outage on April 15, growing user revolt over perceived performance cuts, and questions about whether Anthropic has been straight with its community, this is one of the messier product launches the company has had.
What Claude Opus 4.7 actually brings
Anthropic’s own description of Claude Opus 4.7 is that it’s their best generally available model yet. That’s a high bar, and based on what they’ve published, it mostly holds up. The model shows meaningful gains over Opus 4.6 in agentic coding, multidisciplinary reasoning, scaled tool use, and agentic computer use benchmarks.
It’s available everywhere you’d expect: claude.ai, the Anthropic API, Microsoft Azure, Google Cloud, and Amazon Web Services. Same price as Opus 4.6 across all of those. For most users, switching costs nothing.
One notable design choice: Anthropic deliberately worked to reduce the model’s cybersecurity capabilities during training. If you’re a security professional with legitimate research needs, they’ve set up a formal verification program to apply through. Everyone else gets a model that’s intentionally less capable in that direction, even as it’s stronger in most others. Anthropic’s announcement page has the full technical breakdown.
The controversy Anthropic can’t shake
Here’s the part that makes the launch awkward to celebrate.
For the past few weeks, heavy users across Reddit, GitHub, and X have been posting side-by-side outputs, bug reports, and informal benchmarks claiming that Claude suddenly isn’t as sharp as it was. An AMD senior director put it bluntly in a widely shared post: “Claude has regressed to the point it cannot be trusted to perform complex engineering.”
Most of the speculation landed on one explanation: Anthropic quietly reduced the model’s default “thinking depth” to cut compute costs. Users started calling it “nerfed.” And as Axios reported, there’s also speculation that Anthropic may be routing more compute toward its experimental Mythos model, leaving less for standard Claude users.
Anthropic confirmed they made configuration changes that reduced the default effort level. They’ve pushed back on the more extreme version of the story, saying the “secret nerfing” narrative overstates what actually happened. There’s also a real secondary explanation worth considering: users may have simply acclimated to what previously felt impressive. Over time, expectations rise and small flaws become more visible. That’s a genuine phenomenon, and it probably explains some of what people are experiencing.
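For API users who would rather not depend on whatever default effort level Anthropic ships, the reasoning budget can be pinned explicitly per request instead. A minimal sketch, assuming the request shape of Anthropic's published extended-thinking API; the model ID string below is an assumption for illustration, not a confirmed identifier:

```python
# Sketch: pinning the extended-thinking budget explicitly rather than
# relying on server-side defaults. The model ID is hypothetical.
request = {
    "model": "claude-opus-4-7",  # hypothetical model ID, not confirmed
    "max_tokens": 4096,
    # Explicit reasoning budget: the model may spend up to this many
    # tokens of internal "thinking" before producing its answer.
    "thinking": {"type": "enabled", "budget_tokens": 16000},
    "messages": [
        {"role": "user", "content": "Refactor this module and explain the changes."}
    ],
}

# With the official Python SDK, this payload would be sent as:
#   client = anthropic.Anthropic()
#   response = client.messages.create(**request)
```

Requests that set the budget themselves are insulated from silent changes to the default, which is exactly the kind of change at the center of this controversy.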
But perception is the actual problem here. Anthropic has built its brand on being the trustworthy AI company. The one that’s transparent. The one that doesn’t play games with its users. Quietly changing how hard the model thinks on each request, without telling anyone, sits badly with that positioning. Especially for a company reportedly heading toward an IPO at a $380 billion valuation.
📚 In plain English: Anthropic quietly made Claude think less hard on each prompt, reportedly to save on compute costs. It did not tell users. That is the core of the controversy.
Claude Code got a real makeover
Separate from the model drama, Claude Code had a solid week on the product side.
Anthropic rebuilt the desktop experience around parallel sessions. There’s now a sidebar showing every active and recent coding session at once, with filters by status, project, or environment. A keyboard shortcut (Command + semicolon) lets you branch a question off a running task without sending extra context back into the main thread. That alone is a meaningful quality-of-life improvement for anyone who regularly context-switches between projects.
They also added an integrated terminal, an in-app file editor, a rebuilt diff viewer designed for large changesets, and an expanded preview pane that handles HTML files, PDFs, and local app servers. Each pane is drag-and-drop, so you can arrange the workspace however you want.
The biggest new addition is Routines. A Routine is a saved Claude Code configuration: a prompt, one or more repositories, and a set of connectors. Set it up once and it runs automatically on Anthropic’s cloud, even after you shut your laptop. Pro users get five Routines per day, Max users get 15, and Team and Enterprise users get 25.
Worth being clear about what Routines are and aren’t: they’re not full AI agents. Agents maintain ongoing state and chain decisions across interactions. Routines run, complete their task, and stop. Think of them as dynamic cron jobs with model reasoning baked in, which is genuinely useful for things like scanning CI/CD output after a deployment and posting a summary, or triaging alert messages on a schedule.
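The "dynamic cron job" framing can be made concrete with a short sketch. Nothing below uses a real Claude Code API; the class, function, and task names are invented for the example. It just illustrates the defining property of a Routine: it is triggered, runs once, returns a result, and carries no state into the next run.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Routine:
    """Bundles a prompt with its targets; holds no run-to-run state."""
    name: str
    prompt: str
    repos: tuple[str, ...]

def run_routine(routine: Routine, model_call) -> str:
    # One-shot execution: assemble context, ask the model once, return.
    # No memory survives this call; the next trigger starts fresh.
    context = f"repos={','.join(routine.repos)}"
    return model_call(f"{routine.prompt}\n[{context}]")

# Hypothetical example: summarize CI/CD output after each deployment.
ci_summary = Routine(
    name="ci-summary",
    prompt="Summarize the latest CI/CD run and flag failures.",
    repos=("org/app",),
)

# Stand-in for a real model call, so the sketch is self-contained.
fake_model = lambda p: f"summary of: {p.splitlines()[0]}"
result = run_routine(ci_summary, fake_model)
```

An agent, by contrast, would feed `result` back into later decisions; a Routine discards it and the next trigger starts from the same configuration.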
Claude Code: before vs after the April 2026 update

| Area | Before | After |
|---|---|---|
| Sessions | single-session focus | parallel sessions, with a sidebar filterable by status, project, or environment |
| Terminal | none built in | integrated in-app terminal |
| Diff viewer | previous viewer | rebuilt for large changesets |
| Preview pane | more limited | handles HTML files, PDFs, and local app servers |
| Automation | manual runs only | Routines that run on Anthropic's cloud |
📋 Quick note: Claude Code Routines run on Anthropic's own cloud infrastructure, so they keep going after you close your laptop. They count against your usage limits: Pro users get 5 per day, Max gets 15, Team and Enterprise get 25.
Common misconceptions about Claude Opus 4.7
“Opus 4.7 is a completely new model architecture.” It’s an iteration, not a ground-up rebuild. Anthropic describes it as an improvement over 4.6 but still “less broadly capable” than Claude Mythos Preview, their experimental frontier model that’s currently limited to select partners. The gains in Opus 4.7 are real, but they’re targeted improvements in specific areas, not a sweeping generational jump.
“The performance complaints are just users overreacting.” Anthropic confirmed they made configuration changes that reduced default thinking depth. That’s a real change. Whether it crosses the threshold of “regression” depends heavily on your use case. For simple conversational prompts, probably not noticeable. For complex multi-step engineering workflows, the complaints are more understandable.
“Claude Code Routines are the same as AI agents.” They’re not. The key difference is state. Agents maintain memory and continuity across multiple interactions. Routines are triggered, execute a defined task, and end. More powerful than a script, less autonomous than a true agent.
“Opus 4.7 is only accessible through the API.” It’s available on claude.ai for regular users too, not just developers hitting the API directly. Same model, same capabilities, across all of Anthropic’s surfaces.
So what does this actually mean for you
If you’re a developer using Claude for coding work, Opus 4.7 is worth switching to. The benchmark gains on agentic tasks are real, and the price hasn’t moved. No friction to upgrading.
If you’ve been frustrated by the performance issues over the past few weeks, the official position is that the configuration change has been addressed. Whether you believe that probably depends on how much trust has eroded for you personally. Fair enough either way.
And if you’re still trying to figure out where Claude fits in the broader AI landscape, comparing how Claude performs against ChatGPT and Gemini is a useful exercise before committing to any one tool for your workflow.
Claude Opus 4.7 is a real product improvement. It just arrived during one of Anthropic’s rougher patches. The model is better; the relationship with power users needs more work than a single launch announcement can fix. Watch how Anthropic handles the next few weeks. That’s what will actually matter for the company’s reputation, heading into what looks like an IPO year.
For the full technical specs on Claude Opus 4.7, CNBC’s coverage has a solid breakdown of the benchmark comparisons.

