I Built a Chaos Engine to Prove AI Collapse (The Ainex Limit)

The Dead Internet Theory is no longer a theory. It's a mathematical certainty.

We often fear that AI will become too smart. But my research suggests a more terrifying outcome: AI will become too recursive, leading to a state of semantic entropy I call "The Ainex Limit."

I am a 16-year-old developer and researcher. Over the past year, I have been studying the effects of recursive training loops on Large Language Models (LLMs).

The Hypothesis

If an AI model feeds on its own output for enough generations, the "variance" in its creativity doesn't just drop—it collapses geometrically.
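Here is a minimal sketch of that recursive loop in plain Python (not my GlowScript engine; the sample sizes and generation counts are arbitrary numbers chosen for illustration). Each generation fits a Gaussian to its own previous output and resamples from the fit. Because the maximum-likelihood variance estimate is biased low, the spread contracts by roughly a factor of (N-1)/N every generation, which compounds geometrically:

```python
# Toy model of recursive self-training. Illustrative only: each generation
# fits a Gaussian to the previous generation's samples and resamples.
# The MLE variance is biased low, so spread decays geometrically.
import random
import statistics

random.seed(0)

def recursive_generations(n_samples=10, n_generations=50):
    samples = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
    variances = [statistics.pvariance(samples)]
    for _ in range(n_generations):
        mu = statistics.fmean(samples)
        sigma = statistics.pstdev(samples)  # shrinks ~ (N-1)/N per step
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        variances.append(statistics.pvariance(samples))
    return variances

v = recursive_generations()
print(f"variance: gen 0 = {v[0]:.3f}, gen 50 = {v[-1]:.5f}")
```

No fresh data ever enters the loop, so the only direction variance can drift over many generations is down.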

To test this, I built a physics engine in Python (GlowScript) that visualizes the collapse in real time, applying chaos theory (Lorenz attractors) to semantic tokens.

The Simulation (The Chaos Engine)

I mapped semantic tokens as particles in a 3D space.

  • Gravity: Represents the model's tendency to drift towards the mean.
  • Entropy: Represents the loss of information over generations.
  • The Singularity: The point of no return where "truth" becomes "noise."
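Stripped of the GlowScript rendering, the per-token dynamics have the shape of a Lorenz system. The sketch below uses the textbook Lorenz parameters (sigma = 10, rho = 28, beta = 8/3) and a plain Euler integrator; my engine uses modified equations, so treat this as the general shape of the trajectory, not the exact model:

```python
# One "semantic token" as a particle on the classic Lorenz attractor,
# advanced with explicit Euler steps. Textbook parameters, chosen for
# illustration; the engine's modified equations differ.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance one token-particle by a single Euler step."""
    dx = sigma * (y - x)          # pull toward the mean ("gravity")
    dy = x * (rho - z) - y
    dz = x * y - beta * z         # dissipation ("entropy")
    return x + dx * dt, y + dy * dt, z + dz * dt

# Trace a short trajectory for one token.
state = (1.0, 1.0, 1.0)
trajectory = []
for _ in range(2000):
    state = lorenz_step(*state)
    trajectory.append(state)
```

The trajectory never settles and never escapes: it stays on a bounded attractor while looking locally unpredictable, which is exactly the "exploring while trapped" behavior the visualization shows.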

Here is the simulation running in real time. Watch how the "drift" (the red curve) thinks it is exploring new ideas, but is actually spiraling into a black hole of nonsense.

https://youtu.be/WJYipa7rT9Y

(Note: The outward curve represents the 'Semantic Centrifugal Force'—where the model hallucinates creativity while escaping reality).

The Code Behind the Collapse

The core logic uses modified Lorenz equations to simulate the "hallucination trajectory."

# Core update step, written for GlowScript/VPython. In GlowScript, exp,
# sin, cos, and vector are built in; in desktop VPython you would need:
#   from vpython import vector
#   from math import exp, sin, cos
def update_simulation():
    # current_gen, t, dist, token, and event_horizon are globals that the
    # surrounding engine updates each frame.

    # Exponential decay of the convex hull (the bounds of "creativity")
    collapse_factor = exp(-0.2 * current_gen)

    # The drift vector (hallucination path): its magnitude grows
    # super-linearly even as the hull shrinks
    drift_magnitude = (current_gen ** 1.6) * 0.8
    drift_vector = vector(sin(t) * drift_magnitude, cos(t) * 0.5, -t)

    # Real-time semantic corruption once a token crosses the event horizon
    if dist < event_horizon:
        token.text = "NULL"  # total semantic loss

The Results

The visual data confirms that without fresh human input, AI models suffer from "Model Collapse" within 20 recursive generations. The "Convex Hull" (the geometric bounds of the AI's creativity) shrinks by 90%.
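That hull shrinkage can be checked independently of the renderer. The sketch below computes an exact 2D convex hull (Andrew's monotone chain) for a random token cloud, contracts the cloud toward its centroid for 20 generations, and measures the area loss. The 0.94 per-generation contraction is a number I picked so that this toy reproduces roughly the 90% figure; it is not derived from the engine:

```python
# Measure convex-hull area loss over 20 contractions of a 2D token cloud.
# The 0.94 contraction factor is illustrative, chosen to land near 90%.
import random

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(pts):
    """Shoelace area of the point cloud's convex hull."""
    h = convex_hull(pts)
    return 0.5 * abs(sum(h[i][0] * h[(i+1) % len(h)][1]
                         - h[(i+1) % len(h)][0] * h[i][1]
                         for i in range(len(h))))

random.seed(2)
cloud = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(300)]
cx = sum(x for x, _ in cloud) / len(cloud)
cy = sum(y for _, y in cloud) / len(cloud)

a0 = hull_area(cloud)
for _ in range(20):  # 20 recursive generations
    cloud = [(cx + 0.94 * (x - cx), cy + 0.94 * (y - cy)) for x, y in cloud]
a20 = hull_area(cloud)
print(f"hull area shrinkage: {1 - a20 / a0:.1%}")  # ~91.6% with this factor
```

Because uniform scaling about a fixed center scales the hull exactly, the area ratio is 0.94^40 regardless of where the random points fall, so the measured shrinkage is deterministic.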

I have published the full findings and the mathematical proof on Zenodo.

Read the full paper: DOI: 10.5281/zenodo.18242108


Developed by Mahdi Al-Hajji | 2026
