Despite the hype, the frontier of AI might not be where most people are looking.
While Big Tech continues to throw billions at scaling transformer-based LLMs, a recent Delphi study shows that 76% of leading AGI researchers no longer believe models like ChatGPT will meaningfully contribute to superintelligence. The marginal gains are getting too expensive: GPT-5 is estimated to cost up to $1B to train, yet the improvements are largely superficial. What we’re getting is better packaging, not better reasoning.
Meanwhile, a quiet shift is happening in the background. Research on alternative architectures is starting to show surprising results. Liquid Neural Networks, for example, are achieving task-specific performance beyond what transformers deliver, at a fraction of the compute. Neuromorphic chips are pushing 10x gains in energy efficiency. And miniaturized, domain-specific models are proving far more practical at the edge than scaled-up giants.
In Europe, some of the most interesting work is already well underway. Labs like Multiverse Computing and the Hasso Plattner Institute are not chasing parameter counts; they’re building architectures that are smaller, smarter, and purpose-built for constrained environments.
I’ve been following this shift closely and writing about the growing gap between what’s scientifically promising and what’s commercially overhyped. If you think the LLM race is peaking, or just stalling, I’d love to hear your perspective.
Let’s talk.