The idea of models eating their own output and slowly drifting away from reality is genuinely unsettling. Wild work for someone so young; I'm curious how this would behave at much larger scales or with multimodal data.
IF AI Eating Its Own Brain - I SOLVED this!!
1 Comment
@[Marco Marelli] Thanks, Marco! It really is like a "digital prion disease": once it starts, it's hard to stop.

Regarding your question on scale: I actually stress-tested this in the paper (benchmarked against Llama-3 70B and Qwen-32B). Surprisingly, scale doesn't solve the collapse; it only delays it. My findings show that the "Rigidity Penalty" ($\lambda$) remains positive regardless of parameter count, meaning even 100B+ models eventually hit the "Ainex Limit" if the loop is closed long enough.

Multimodal is the next frontier! I suspect the topological collapse in image/video latent spaces will look similar to GAN mode collapse, but with a recursive "feedback loop" twist.

If you're interested in the math behind the scalar invariance, the full proof is under review at JAIR (and the preprint is up on Zenodo). Appreciate the kind words!
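The closed-loop dynamic described in the reply can be sketched with a toy simulation. This is not the paper's method, and every name and parameter here is illustrative: a single Gaussian stands in for the model, and each "generation" refits it on a finite sample drawn from the previous fit. The finite-sample estimation bias compounds across generations, so the fitted variance decays toward zero — a minimal caricature of recursive collapse.

```python
import random
import statistics

def one_generation(mu, sigma, n=50, rng=None):
    """Draw n samples from N(mu, sigma), then refit mu and sigma by MLE.

    The MLE standard deviation (statistics.pstdev) is biased low, and the
    closed loop compounds that small bias generation after generation.
    """
    rng = rng or random.Random()
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    return statistics.fmean(samples), statistics.pstdev(samples)

def closed_loop(generations=500, n=50, seed=0):
    """Feed each generation's fit back in as the next sampling distribution."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # ground-truth starting distribution
    for _ in range(generations):
        mu, sigma = one_generation(mu, sigma, n, rng)
    return mu, sigma

if __name__ == "__main__":
    mu, sigma = closed_loop()
    # sigma started at 1.0; after many closed-loop refits it has shrunk.
    print(f"after 500 closed-loop generations: sigma = {sigma:.4f}")
```

Each individual fit looks reasonable in isolation; only the loop is pathological. Increasing `n` (a stand-in for model/data scale) slows the decay but does not stop it, which mirrors the "scale only delays collapse" claim above.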
- © 2026 Coder Legion
More From Mahdi Hassan Al-Hajji