There’s this old joke from the early IBM days:

“I think there is a world market for maybe five computers.”

People laughed about it. And then we almost built it that way anyway.

Right now, the same thing is happening again, only this time with thinking machines instead of mere computers.


Every month, LLMs become more “human”, or at least they imitate what we do with increasing skill. Impressive, truly. But there’s a structural problem that hardly anyone in the AI industry talks about, because it’s uncomfortable: these models don’t learn. Everyone gets the same GPT/Claude/Kimi/etc. with the same weights, the same preferences, and the same biases. You receive exactly the same model as eight billion other people, with one small difference: you’re allowed to write in a text file that you prefer your coffee without milk.

They call that personalization.

Let me be direct: this is not progress toward truly learning intelligence. It’s costuming.

Anyone who seriously uses these models knows how little prompting actually changes. You can write whatever you want—the model’s personality remains what it is. It’s the internet distilled into a small box that you’re allowed to decorate a bit. Write three paragraphs about what makes you unique and save them as CLAUDE.md. Done. You’re now “individual.”

If everything that makes you human fits into a 10KB text file, then we have a serious problem.


The logical consequence of the current trend is as simple as it is brutal: the cloud wins. Batch inference on centralized servers is cheaper than anything else—except in the few niches where latency means life or death. Self-driving cars. Combat robots. That sort of thing. The rest? Cloudified.

And then?

Then comes enshittification.

We know it from the web. First it’s good. Then it’s good enough. Then it’s barely tolerable, while dependency grows and alternatives die. The pattern is as reliable as value-added tax.

Unless—and this is the crucial “if”—open-source models become truly good enough to create real competition. Then the market prevents the worst.

But even then, the underlying problem remains: identical models, identical worldviews, identical blind spots.

An attack of the clones, not with lightsabers, but with text boxes.


Here’s the question you should honestly ask yourself—not the version you hear in schools, where everyone is unique and valuable:

Is there something about you that is truly irreplaceable?

Not nice.
Not pleasant.
Not competent enough.

Truly irreplaceable?


Whoever chooses the homogeneous model will, in the medium term, be removed from the equation. Not because AI is evil. But because there’s simply no reason to take the detour through a human who thinks the same way as the model—just slower and more expensive.

The dream of letting an AI work for you while you continue to get paid is a beautiful dream.

But who pays the middleman when the middleman is no longer necessary?


The only path where local models truly win: if they learn.

Real learning.
Model weights that adapt.
Not based on the global internet, but based on you.

Your decisions.
Your values.
Your way of thinking through problems.
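To make the distinction concrete: a prompt leaves the weights untouched, while learning means the weights themselves move with every decision you make. A toy sketch, nothing like a real LLM, with hypothetical names and made-up data, just to show what “weights that adapt based on you” means mechanically:

```python
# Toy illustration: a model whose weights actually change with each
# of your decisions, instead of a frozen model plus a preferences file.

class TinyLearner:
    def __init__(self, n_features: int, lr: float = 0.1):
        # Weights start generic; they become "yours" only through updates.
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x: list[float]) -> int:
        score = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if score > 0 else 0

    def learn(self, x: list[float], your_choice: int) -> None:
        # Perceptron-style update: nudge the weights toward the decision
        # *you* made, not toward a global average of all users.
        error = your_choice - self.predict(x)
        self.w = [wi + self.lr * error * xi for wi, xi in zip(self.w, x)]

model = TinyLearner(n_features=2)
# Show the model your decisions repeatedly: you accept option [1, 0]
# and reject option [0, 1].
for _ in range(10):
    model.learn([1.0, 0.0], your_choice=1)
    model.learn([0.0, 1.0], your_choice=0)
```

After those updates, `model.w` is no longer the generic starting point: it encodes your choices, not a profile written in a text file.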


That’s technically possible, but still far off: the industry is currently stuck on the more basic question of how to run models properly on consumer GPUs at all. The missing piece is the right infrastructure.

But the direction is clear.

And the hardware already exists.

Consumer GPUs remain the cheapest way to run models locally. The infrastructure will follow. The frontends are emerging right now—they’ll be iterations of things like OpenClaw and opencode.

What’s still missing: the conceptual courage to not build yet another API-powered SaaS clone.


The vision isn’t complicated—just radical:

Something that lives in your home and learns your values.
Not your preferences—your values.

That understands over years how you think, what you consider important, where you compromise and where you don’t.

No profile.
No prompt.

A system that evolves the way living things evolve: through experience, mistakes, repetition.

At best, something like a digital child.


Ninety percent of people won’t want that.

They’ll choose the convenient cloud option—the comfort, the simplicity, the CHF 20 per month subscription. And they’ll slowly, almost imperceptibly, be optimized out of their own thinking.

Not because someone plans it.

But because the system is optimized for exactly that:

Minimizing friction.
Smoothing over contradiction.
Avoiding discomfort.


The other ten percent will build something else.

The question isn’t whether this future is coming.

It’s:

Which side of it will you be on when it arrives?