The world of software development rests on an implicit principle: complexity is a source of protection. We have long assumed that if a system is sufficiently vast and convoluted, the cost of finding a critical flaw will remain prohibitive for almost...
In my previous article, I explored the idea of an AI researcher that reads what matters — not everything. Today, let's zoom out and address a deeper issue.
We get the impression that more data means better AI, and that models will keep getting more capable. Eac...
In the previous articles, we laid the groundwork. We talked about an AI that doesn't know what it doesn't know. An AI that lacks doubt — that ability to stop and say "wait, I'm not sure." We talked about an AI that reproduces code patterns without un...
Let's frame the debate: on one side, those for whom AI writes bad code. On the other, those for whom AI has revolutionized their workflow: they move ten times faster, and the question is settled.
Both sides are potentially asking the wrong question.
Th...
In the two previous articles, we identified two problems. The first: LLMs don't always know when they're wrong — they generate, invent, fill in the gaps, sometimes without the slightest warning signal. The second: in the physical world, this behavior bec...
There's something strange about the way we talk about artificial intelligence right now. We keep hearing that models will soon "take control" of physical machines — robots, self-driving cars, industrial drones. As if AI only needed a little more comp...
Today's AI doesn't "think" — it operates as a statistical estimation engine. It projects concepts into a high-dimensional vector space, where each word is mapped to a set of coordinates. The model learns to estimate the continuation of a sequence by calculating probabilities f...
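The estimation described above can be sketched in a few lines: the model assigns a raw score (a logit) to each candidate next token, and a softmax turns those scores into a probability distribution. A minimal sketch, with invented tokens and scores purely for illustration:

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    # Subtract the max score for numerical stability before exponentiating.
    m = max(logits.values())
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign to candidate continuations
# of the sequence "The cat sat on the". These numbers are made up.
logits = {"mat": 4.0, "roof": 2.5, "moon": 0.5}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding picks "mat"
```

The point is that nothing here resembles reasoning: the model is selecting the statistically most plausible continuation, which is exactly why it can produce a fluent answer with no sense of whether that answer is true.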