A few years ago, at a Protocol Labs dinner, I sat beside a founder building embodied AI. Somewhere in the middle of the meal, he asked me, “If you could live forever, would you?”
“No,” I said. “I want to die. Everything dies. I kind of think that's how it’s supposed to be.”
He looked at me like I’d said something tragic. But to me, that answer wasn’t dark at all. It was honest. And it’s stuck with me ever since.
Wanting to die (not in the sense of despair, but in the sense of limits) isn’t nihilism. It’s clarity. It’s a refusal to pretend that limitless continuity is the same thing as progress.
Which brings me to BitRobot.
(Read the full article here: https://www.protocol.ai/blog/bit-robot-and-the-future-of-embodied-ai/)
Their architecture is smart. The idea of decentralized, self-improving embodied AI makes sense. But are we really trying to build AI that moves through the world, touches things, and trains itself on lived experience… where the goal is to outlive people, to replace fragility with permanence, to chase scale until everything human becomes a legacy protocol? 😮💨 Yikes. To what end?
We should stop pretending forever is the goal
This isn’t about fearing the tech. I’ve actually spent most of my adult life contributing to its growth. It’s about owning the tradeoffs. Open-source isn’t enough. Decentralization isn’t enough. It’s not just about how systems grow; it’s about how they end. What kind of power they accumulate. Who gets to walk away.
Resilience doesn’t mean making things last forever. It means making them end well. Not with a crash, not with a monopoly (or monarchy), but with continuity for the parts that matter and compost for the rest.
That’s true of people, and it should be true of systems.
The body doesn’t just make things harder; I think it makes things matter. It slows you down, it breaks, it needs repair. To me, that’s meaningful.
And if we can skip that somehow, avoid the risk and the drag and the recovery, then I don’t think it’s “embodied” at all.