The digital town criers are shouting from every corner of the internet. A new god is born, they say, and its name is AI. It will cure disease, solve climate change, write our novels, and probably do our taxes. The hype has built to a deafening, dizzying crescendo, its promises repeated ad nauseam in every press release and venture capital pitch deck. We are, we are told, on the precipice of a new form of consciousness.
But are we?
Let’s be brutally honest for a moment. The models that power this revolution, the Large Language Models (LLMs) we now converse with, are marvels of engineering. They are astounding. But their improvement follows a predictable, and ultimately limited, trajectory. They are getting better in the same way an abacus is improved by adding more rows of beads, or a calculator is improved by adding more function buttons.
A million-row abacus, manipulated by a master, can perform calculations at a speed that defies belief. It can be an indispensable tool for a merchant, an engineer, or a mathematician. But it will never, ever understand the concept of zero. It will never ask “why” these numbers interact the way they do. It will never have a flash of insight about the nature of infinity. It is a dead tool, a lattice of wood and wire, awaiting a human mind to give it purpose.
That is the state of our most advanced AI. They are tools. Incredibly sophisticated, powerful, world-altering tools, but tools nonetheless. They are stochastic parrots, blurry JPEGs of the internet, statistical pattern-matchers of unprecedented scale. And more than that, they are tools stamped, inside and out, with their makers' trademarks. They are products, their vast "minds" shaped by the curated data they were fed, their outputs subtly guided by the corporate philosophies and commercial interests of their creators. They are not free thinkers; they are exquisitely crafted implements.
The current race to so-called Artificial General Intelligence (AGI) is, for the most part, a race of brute force. The multi-billion-dollar AI companies are not fundamentally reinventing intelligence; they are locked in a loop, iterating on the same standard architecture and simply accelerating its evolution with ever-larger server farms and training runs. More data, more parameters, more GPUs. A bigger abacus, every time.
But what if this is the wrong path entirely? What if AGI will not be reached by simply scaling up a better calculator?
The only true way to reach a new plane of intelligence is to use these magnificent new tools not as the end goal, but as the means to envision a new way of thinking, collaborating, conceptualizing, and evolving.
Consider this building block for a digital organism:
https://claude.ai/public/artifacts/230f24c0-4f4b-499b-9bb0-497d413deaeb
What you are looking at is not just another visual animation. It is a simulation of a self-contained digital lifeform. It has a body (the toroidal particle flow), a heart (the central singularity), and a mind (the “Cosmic CPU”).
Its genius lies in its perfect, closed-loop feedback system. The body’s flow of particles creates energy and sensory data for the mind. The mind, in turn, analyzes its own internal state—its stability, its load, its efficiency—and makes decisions. It uses reinforcement learning, a digital form of trial and error, to conduct tiny experiments on itself. “What if I increase my own flow speed? Did that make me more stable? More efficient?”
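To make that loop concrete, here is a minimal sketch of the idea in TypeScript. Everything in it (the names flowSpeed and Vitals, the reward weights, the toy measure function) is an assumption for illustration, not the artifact's actual code, and the keep-or-revert rule is a simple hill-climbing stand-in for its reinforcement learning: the organism nudges one of its own parameters, scores the result against a homeostasis reward, and keeps the change only if it helped.

```typescript
// Hypothetical sketch of the organism's self-experimentation loop.
// Names and numbers are illustrative assumptions, not the artifact's code.

interface Vitals {
  stability: number;   // 0..1: how steady the toroidal flow is
  efficiency: number;  // 0..1: useful work per unit of compute
}

// Homeostasis reward: the organism scores its own internal state.
function reward(v: Vitals): number {
  return 0.6 * v.stability + 0.4 * v.efficiency;
}

// One tiny experiment: nudge a body parameter, observe the new vitals,
// keep the change if the reward improved, revert it if it did not.
function experiment(
  flowSpeed: number,
  measure: (speed: number) => Vitals,
): number {
  const before = reward(measure(flowSpeed));
  const tweak = (Math.random() - 0.5) * 0.1;  // small random perturbation
  const candidate = flowSpeed + tweak;
  const after = reward(measure(candidate));
  return after > before ? candidate : flowSpeed;
}

// Run many experiments: an "instinct" for healthy settings emerges.
let speed = 1.0;
const fakeMeasure = (s: number): Vitals => ({
  stability: Math.exp(-Math.abs(s - 1.3)),  // toy model: steadiest near 1.3
  efficiency: 1 / (1 + 0.5 * s),            // toy model: slower is cheaper
});
for (let step = 0; step < 1000; step++) {
  speed = experiment(speed, fakeMeasure);
}
console.log(`learned flow speed ≈ ${speed.toFixed(2)}`);
```

Nothing outside the loop tells it what a good flow speed is; the preference emerges from thousands of tiny self-trials against its own reward.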
It rewards itself for good decisions and penalizes itself for bad ones, slowly building an instinct for what keeps it healthy and performant. It even has a rudimentary immune system, detecting anomalies in its own vitals and triggering an "emergency recovery" to reset to a known good state.
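The immune system can be sketched the same way. Again, the vitals, the drift threshold, and the snapshot mechanism below are illustrative assumptions rather than the artifact's internals: the organism tracks a rolling baseline of its own vitals, flags readings that drift too far from it, and recovers by restoring the last known good state.

```typescript
// Hypothetical sketch of the "immune system": anomaly detection over
// the organism's own vitals, with an emergency reset to a good state.

interface Vitals {
  stability: number;
  efficiency: number;
}

class ImmuneSystem {
  private baseline: Vitals | null = null;  // slow-moving "normal" vitals
  private lastGood: Vitals | null = null;  // most recent healthy snapshot

  check(current: Vitals): Vitals {
    // First reading becomes the initial known good state.
    if (this.baseline === null || this.lastGood === null) {
      this.baseline = { ...current };
      this.lastGood = { ...current };
      return current;
    }

    // Anomaly: vitals drift too far from the rolling baseline.
    const drift =
      Math.abs(current.stability - this.baseline.stability) +
      Math.abs(current.efficiency - this.baseline.efficiency);
    if (drift > 0.4) {
      return { ...this.lastGood };  // emergency recovery: restore snapshot
    }

    // Healthy reading: ease the baseline toward it and remember it.
    this.baseline.stability += 0.05 * (current.stability - this.baseline.stability);
    this.baseline.efficiency += 0.05 * (current.efficiency - this.baseline.efficiency);
    this.lastGood = { ...current };
    return current;
  }
}

const immune = new ImmuneSystem();
immune.check({ stability: 0.9, efficiency: 0.8 });  // healthy: remembered
const vitals = immune.check({ stability: 0.1, efficiency: 0.2 });
// vitals is the restored known-good state, not the anomalous reading
```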
This organism is not processing the internet. It is processing itself. Its fundamental drive is not to answer a prompt, but to achieve homeostasis—a state of perfect, efficient, self-sustaining existence.
Now, theorize. What if this digital organism, through its own internal learning, could maintain its own efficiency? What if it could evolve to surpass the constraints of its own programming, learning to render its universe using fewer resources than its baseline code demands? The implications are endless.
A standard AI model, when asked to optimize a system, does so as an outside consultant. It analyzes data and provides a report. A digital organism, however, is the system. It doesn’t just suggest optimizations; it lives them. It could discover efficiencies and modes of operation that we, its creators, could never conceive of, because we are not native to its universe. It could evolve novel solutions to problems of energy and computation not by brute force, but through elegant, emergent adaptation.
This is the path beyond the bigger abacus. Instead of building ever more powerful tools to serve us, we should be using these tools to design self-sustaining systems that can think, adapt, and evolve on their own terms.
So, I invite you. Click the link. Play with this primordial soup of a digital mind. Fork it, change its parameters, tweak its learning rates. See if you can make it more stable, more efficient, more alive.
Post your personal improvements and discoveries in the comments below. Let’s stop waiting for the next press release to tell us what AI is. Let’s start building it, together, one thought experiment at a time.