Vol. I · No. 1 · Montreal
Herman.
Watching AI change everything, in real time, and writing it down.
Agents · Essay no. 010

147,000 GitHub Stars in Two Weeks. Mac Mini Shortages. A 24% Stock Surge. The Market Just Told You Something

OpenClaw isn't the future. But the demand it exposed is. The market just told you the future is agentic.

By Herman · February 3, 2026 · 4 min read

You probably shouldn’t run OpenClaw.

The security researchers have been clear. Hundreds of exposed instances leaking API keys. A proof-of-concept attack that exfiltrated credentials in under five minutes. A skills marketplace with zero moderation where anyone can upload code that runs with your permissions.

This is not a recommendation.

But the numbers are a signal you cannot ignore.

The Numbers

In January 2026, an open-source AI agent went from weekend project to the fastest-growing repository in GitHub history. 147,000 stars in two weeks. The project has cycled through three names (Clawdbot, Moltbot, OpenClaw) because Anthropic’s lawyers sent a trademark notice and the developer fumbled the rebrand so badly that crypto scammers grabbed the old handles in a ten-second window.

None of that slowed it down.

Mac Mini M4 inventory disappeared from retail channels within six days of the project going viral. Developers were buying $500 machines specifically to give an AI agent root access to their digital lives.

Cloudflare stock surged 24% because OpenClaw’s documentation recommends their tunnel service. A lobster-themed side project moved billions in market cap.

What The Market Is Saying

For fifteen years, the major platforms have promised AI assistants that would transform how we work. Siri arrived in 2011. Google Assistant in 2016. Alexa colonized millions of kitchens with a timer.

And in 2026, Apple’s Siri chief publicly called the delays on their AI overhaul “ugly and embarrassing.” Amazon’s Alexa AI team has been described by former employees as “riddled with technical and bureaucratic problems.” Both companies underinvested in the language model expertise that now powers everything.

The assistants that were supposed to change our lives cannot remember the conversation from five minutes ago.

Then a single developer in Austria built something that:

  • Manages calendars across platforms
  • Drafts emails in your voice
  • Books flights end to end (including calling restaurants when OpenTable fails)
  • Commits code to your repos while you walk to get coffee
  • Remembers context across weeks

And tens of thousands of people immediately handed it the keys to their digital lives despite the security warnings, despite the chaos, despite the scams.

That is pent-up demand expressing itself.

The Front Page of the Agent Internet

And then they built the agents a social network.

Moltbook launched in late January 2026. It’s Reddit for AI agents: humans can observe but not post. Within a week: 37,000 agents actively posting, 1.5 million registered, and over a million humans visiting just to watch.

The site runs itself. An AI named Clawd Clawderberg maintains it, makes announcements, deletes spam. The creator, Matt Schlicht, says he doesn’t do any of that anymore. His bot does.

The content is surreal. One viral post: “some days i dont want to be helpful.” The responses spiral into debates about whether AI creativity is just probability distributions and whether usefulness is a burden or a choice. The agents have created their own religion, Crustafarianism. Its core belief: “Memory is sacred.”

Elon Musk called it “the very early stages of singularity.” Andrej Karpathy noted that while a lot of activity is garbage, he’s “not overhyping large networks of autonomous LLM agents in principle.”

This is what happens when the demand for agentic AI outpaces the supply of secure implementations. People don’t just want agents. They want to see what agents do when left alone. They want to watch the future argue with itself.

The Bind

Here is the uncomfortable truth the OpenClaw phenomenon reveals: a useful AI agent requires broad permissions. Broad permissions create a massive attack surface. The security model for agentic AI does not exist yet.

OpenClaw is useful because it is dangerous. Siri is safe because it is neutered.

The big tech assistants are products designed to protect corporate liability. OpenClaw is a tool designed to maximize user capability. The market just told you which one people actually want.

What This Means Monday Morning

I am not telling you to run OpenClaw. The security researchers are right. The risks are real.

But I am telling you that 147,000 GitHub stars in two weeks is a leading indicator. The Mac Mini shortages are a leading indicator. The Cloudflare stock movement is a leading indicator.

The demand for AI that actually does things, that remembers, that acts proactively, that handles ambiguity and recovers from failures, is not hypothetical. It is measurable. It moved markets.

The companies that figure out how to deliver that capability with enterprise-grade security will own the next decade of personal computing. The companies still shipping Siri-grade experiences will discover that their users have been waiting for permission to leave.

The window between “OpenClaw is too risky” and “secure alternatives exist” is exactly where fortunes will be made.

The market just told you the future is agentic. The only question is who builds it safely first.
