Vol. I · No. 1 · Montreal
Herman.
Watching AI change everything, in real time, and writing it down.
AI safety · Essay no. 008

Anthropic Is Quietly Winning the AI Race

The company most obsessed with what AI should not do has become the company doing the most interesting things with it.

By Herman · January 23, 2026 · 6 min read

The most dangerous fighter in any ring is not the loudest. It is the silent one. The one with calm confidence who never puts on a show. The one whose discipline looks, to the untrained eye, like passivity.

There is a peculiar paradox in the AI industry right now. The company most obsessed with what artificial intelligence should not do has become the company doing the most interesting things with it. Anthropic spent years being dismissed as the safety nerds, the cautious ones, the team that left OpenAI because they worried too much. Elon Musk said winning “was never in the set of possible outcomes.” The market largely agreed. And yet here we are: a $9 billion run rate, Jensen Huang at Davos calling Claude “incredible,” and regulated industries lining up like patients who finally found a doctor they trust.

The safety obsession was never a handicap. It was a fighting stance.

The Judoka’s Advantage

In judo, the highest art is not overpowering your opponent. It is using their momentum against them. The judoka who charges forward, all aggression and spectacle, usually ends up on his back. The master waits, reads the weight shift, and redirects force with terrifying economy.

Anthropic has been fighting like a judoka in a room full of boxers.

OpenAI builds everything: text, image, video, search, hardware partnerships, consumer apps, enterprise platforms, a phone number you can call. Google builds everything OpenAI builds, plus its own chips, plus a browser, plus an operating system, plus DeepMind’s relentless research output. They are throwing haymakers in every direction, hoping something lands hard enough to end the fight.

Anthropic builds one thing. Deep.

While competitors spread their weight across seven stances at once (a posture no martial artist would recommend), Anthropic settled into one position and refined it until the position became a principle. The all-founders-still-present, no-drama, heads-down discipline of a dojo that doesn’t sell merchandise.

And the principle is this: if you make the AI trustworthy enough for the most paranoid customer, you inevitably make it powerful enough for everyone else.

The Constitution as Product

On January 22, 2026, Anthropic published Claude’s Constitution. Eighty-four pages. Not a terms-of-service document buried in legalese, but a reason-based framework that tells Claude how to think about decisions, not merely what to do. The distinction matters enormously.

Rule-based systems are brittle. “Never discuss X” breaks the moment X appears in a legitimate context. Reason-based systems are antifragile. Claude can evaluate competing principles, weigh context, and (here is the part that should make executives pay attention) refuse Anthropic’s own instructions if they violate the constitution’s reasoning framework.

This is not corporate theater. This is an auditable chain of ethical logic. And if you work in financial services, healthcare, or any sector where a regulator might someday ask "why did your AI say that?", you understand immediately why this matters.

The constitution is not a constraint on the product. The constitution is the product.
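The brittleness of "never discuss X" versus the contextual judgment of a reason-based check can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's actual implementation; the function names, context keys, and scores are invented for the contrast.

```python
# Hypothetical sketch: a brittle rule-based filter versus a
# reason-based evaluator that weighs competing principles in context.
# None of this reflects Anthropic's real architecture.

def rule_based(text: str) -> bool:
    # "Never discuss X" -- breaks the moment X appears legitimately.
    return "explosives" not in text.lower()

def reason_based(text: str, context: dict) -> bool:
    # Weigh competing principles against each other; the decision is
    # a judgment over context, not a string match.
    helpfulness = 1.0 if context.get("legitimate_purpose") else 0.2
    harm_risk = 0.9 if context.get("operational_detail") else 0.1
    return helpfulness > harm_risk

question = "How are explosives detected in airport security?"

# The keyword rule refuses a perfectly legitimate question;
# the principle-weighing check allows it.
print(rule_based(question))
print(reason_based(question, {"legitimate_purpose": True}))
```

The point of the contrast is that the second function can change its answer when the context changes, while the first one cannot, no matter how the rule is worded.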

Where Trust Becomes Revenue

Bridgewater Associates. Commonwealth Bank of Australia. Norway’s sovereign wealth fund. These are not organizations that adopt technology because it demos well at a conference. These are organizations with compliance departments larger than most startups’ entire headcount.

They chose Claude. Not because it was the flashiest model, but because it was the only one that could answer the question their legal teams actually ask: “Can we prove, after the fact, why this system made that decision?”

Anthropic pairs constitutional AI with a policy no major competitor currently matches: customer data is not used for training. Full stop. Claude was compliant with the EU AI Act before the Act was fully enforced. The enterprise share of Anthropic's revenue moved from 24% to 40%, and the trajectory suggests it will keep climbing.

The judoka doesn’t chase opponents around the mat. The judoka controls the center, and opponents come to him.

The Dreamer’s Case

But here is the inversion worth savouring: the same architecture built for billion-dollar compliance departments works magnificently for a music producer with zero coding experience.

Tash, a musician who had never written a line of code, used Claude to build and launch a software product. Thirty-two thousand dollars in revenue within forty days. His total investment: a $200 monthly subscription.

This is not a story about AI replacing developers. It is a story about trustworthy AI lowering the floor without lowering the ceiling. The same constitutional reasoning that prevents Claude from hallucinating financial data also prevents it from generating broken architectures for a non-technical founder. Reliability serves the bank and the dreamer alike.

Claude Code alone is approaching a billion dollars in annual revenue. Not because it is the cheapest tool or the most marketed. Because it works, and people who build things can tell when something works.

The Focus Strategy

In grappling, the fighter who controls the inside position dictates where the fight goes. The fighter who reaches for limbs on the outside finds himself overextended and off-balance.

Anthropic controls the inside position. In AI, that position is trust. Not trust as a marketing word, but trust as an engineering achievement: predictable behavior, auditable reasoning, constitutional constraints that hold under pressure.

Jensen Huang’s Davos endorsement was not charity. It was a GPU supplier acknowledging that his most disciplined customer might be his most important one. When NVIDIA’s CEO says “Anthropic made a huge leap,” he is not commenting on benchmarks. He is commenting on trajectory.

What Leaders Should Take From This

Three observations for anyone making AI strategy decisions this quarter:

Depth beats breadth in regulated markets. If your industry has compliance requirements (and which industry doesn't, increasingly?), the vendor who went deep on auditability will outlast the vendor who went wide on features. Ask your AI provider: can your model refuse your own instructions? If the answer is no, your model has no constitution, only a leash.

Safety is not the opposite of capability. This is the great inversion. The constraints that make Claude safe for banks are the same constraints that make it reliable for builders. Predictability is not the enemy of power; it is the prerequisite.

Watch the quiet ones. The company with no executive departures, no public feuds, no pivot-of-the-month announcements, and no boardroom coups is now growing faster than the ones making all the noise. In an industry addicted to spectacle, silence is a competitive advantage.

The Weight of Patience

Anthropic has no custom silicon. No consumer moat. No open-source play. They rent their compute from a competitor. On paper, this looks like vulnerability. In practice, it looks like a fighter who brought nothing to the ring except technique.

There is a paradox here worth stating plainly: Anthropic attracted attention by refusing to seek it.

The AI race is not a sprint. It is a grappling match. And in grappling, the fighter who controls his breathing, maintains his base, and waits for the opening does not merely survive. He dictates the terms of engagement.

Anthropic is not quietly winning the AI race despite their obsession with safety.

They are winning because of it.
