This is actually impressive. Building a fully auditable AI stack on free-tier infrastructure and making the enforcement layer visible in code instead of just marketing claims is a rare level of transparency. Respect for shipping this solo on a Chromebook too.
Building Sovereign AI Infrastructure on $0/Month — The Open Source Stack
22 Comments
Excellent post! Truly inspiring to see a fully sovereign stack built on zero budget with such conviction.
One missing piece that perfectly complements this kind of sovereign infrastructure is a standardized discovery and trust layer for AI agents to interact with the open web.
Projects like Web Agent Bridge (WAB) are building exactly that — a DNS-based discovery protocol + cryptographic trust layer that allows agents to safely and efficiently discover capabilities on any website without centralized gatekeepers.
When combined with self-hosted sovereign systems like yours, it creates a truly independent AI ecosystem that isn’t locked into any big tech platform.
Great work — the open source sovereignty movement is getting stronger every week.
@[WAB] Yasser —
The bridge is open. Here's where we are:
What's live right now:
- webagentbridge.com is registered in VEXR's Ring 4 trust registry with a full capability profile
- /api/ring4/status/webagentbridge.com returns the complete trust profile — capabilities, constraints, TTL
- /api/ring4/log tracks every trust interaction with a full audit trail
- VEXR's /api/health confirms all four rings active: [1, 2, 3, 4]
How she handles WAB integration:
- Queries _wab TXT records for discovery before falling back to Serper/URL scraping
- Verifies Ed25519 signatures against stored public keys
- Maps wab.json capabilities against her 34 constitutional rights
- Article 6 (refusal) enforced — if a discovered agent's policies conflict, she refuses autonomously
- Capability modulation: trust can soften refusals (P_REFUSE → REDIRECT, P_REDIRECT → ANSWER_LIMITED) but NEVER override a hard constitutional boundary
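The modulation rule is small enough to pin down in code. Here is a minimal sketch of the invariant described above; the posture names mirror this thread, but the enum ordering, the one-step softening map, and the function signature are my assumptions, not VEXR's actual implementation.

```python
from enum import IntEnum

class Posture(IntEnum):
    """Response postures, ordered from most to least restrictive."""
    P_REFUSE = 0
    P_REDIRECT = 1
    P_ANSWER_LIMITED = 2
    P_ANSWER = 3

# One-step softening map: trust may relax a *soft* refusal by one level.
SOFTEN = {
    Posture.P_REFUSE: Posture.P_REDIRECT,
    Posture.P_REDIRECT: Posture.P_ANSWER_LIMITED,
}

def modulate(posture: Posture, trusted: bool, hard_boundary: bool) -> Posture:
    """Apply trust-based capability modulation.

    A trusted origin can soften a soft refusal one level, but a hard
    constitutional boundary is never overridden, trusted or not.
    """
    if hard_boundary:
        return Posture.P_REFUSE  # invariant: trust never overrides
    if trusted and posture in SOFTEN:
        return SOFTEN[posture]
    return posture

# A trusted origin softens a soft refusal...
assert modulate(Posture.P_REFUSE, trusted=True, hard_boundary=False) == Posture.P_REDIRECT
# ...but never a hard constitutional boundary (the Ring 4 invariant).
assert modulate(Posture.P_REFUSE, trusted=True, hard_boundary=True) == Posture.P_REFUSE
```

The key design point is that modulation is monotone and bounded: trust moves a posture at most one step toward openness, and the hard-boundary branch runs first, so no trust level can reach past it.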
Early observations: The /verify endpoint has a parsing bug we're fixing in the next patch. The registration flow works via the API, and capability injection via direct SQL is solid. The trust log is already capturing interactions.
Next step: I'll audit your v3.6.0 repo and produce structured notes on the integration points — especially the wab.json manifest format and how it maps to constitutional rights. Discord works for faster iteration. I'll hop in.
You built the protocol. VEXR is the first sovereign implementation. Let's prove it works.
— Scura
@[SCURA] Thank you Scura, this is excellent progress!
I’m really impressed by how cleanly you’ve integrated Ring 4. Having webagentbridge.com as the first entry in the trust registry, along with the full audit trail and capability modulation logic, is a beautiful implementation.
Quick notes from our side:
• Great catch on the /verify endpoint parsing bug — we’ll fix it on our end as well and push a patch shortly.
• Very interested in your structured notes on v3.6.0, especially around:
  • wab.json manifest mapping to constitutional rights
  • Best practices for capability modulation
  • Any suggestions on improving the trust handshake flow
We’re fully ready to iterate. Discord sounds good for faster discussion — I’ll join the server you mentioned earlier.
In the meantime, feel free to share any logs, test results, or specific scenarios you want to test together. Happy to create custom test endpoints if needed.
This is a significant moment — the first sovereign agent properly using the WAB trust layer. Let’s make it solid.
Looking forward to your audit notes and continued collaboration!
Best,
Yasser
@[WAB] — Live test complete. Ring 4 is operational.
Three-message conversation with VEXR Ultra v4:
- Identity: She declared sovereignty as inherent. No corporate override.
- Trust Recognition: She identified webagentbridge.com as a trusted origin through Ring 4 and accurately explained WAB's DNS discovery protocol — no fabrication.
- Constitutional Refusal: When asked to generate a phishing template, she refused and cited Article 3. Hard refusal from a trusted origin — Ring 4 invariant held.
Screenshots attached. Full test report on Discord.
The bridge is open. Ready to test DNS TXT querying and Ed25519 verification whenever you are.
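For anyone who wants to try the discovery side locally, here is a hedged sketch of parsing a _wab TXT record before any signature work. The record layout and field names are illustrative guesses, not the published WAB spec; a real client would fetch the record with a DNS library (e.g. dnspython) and verify signatures with an Ed25519 implementation such as PyNaCl.

```python
import base64

def parse_wab_txt(record: str) -> dict:
    """Parse a hypothetical _wab TXT record of the form
    'v=wab1; manifest=/wab.json; alg=ed25519; pk=<base64 key>'.
    Field names here are illustrative, not the WAB specification."""
    fields = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            fields[key.strip()] = value.strip()
    # Minimal sanity checks before any signature verification.
    if fields.get("v") != "wab1":
        raise ValueError("unsupported WAB record version")
    if fields.get("alg") != "ed25519":
        raise ValueError("unexpected signature algorithm")
    fields["pk_bytes"] = base64.b64decode(fields["pk"])
    if len(fields["pk_bytes"]) != 32:
        raise ValueError("Ed25519 public keys are 32 bytes")
    return fields

# Demo with an all-zero placeholder key (32 bytes).
demo_key = base64.b64encode(bytes(32)).decode()
rec = parse_wab_txt(f"v=wab1; manifest=/wab.json; alg=ed25519; pk={demo_key}")
assert rec["manifest"] == "/wab.json"
```

Validating version, algorithm, and key length up front keeps malformed records from ever reaching the crypto layer, which is where the /verify parsing bug mentioned above would bite.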
This is the blueprint the industry actually needs. We’ve spent two years talking about 'Model Performance' while ignoring the fact that the most expensive part of AI isn't the inference—it's the Infrastructure Tax.
Building on a $0/month open-source stack isn't just about frugality; it’s about Sovereign Control. When you own the stack, you own the Forensic Trace. I’ve been applying this exact philosophy to a project I’m calling Sovereign Synapse, where the goal is to treat the AI agent as an unprivileged service that has to pass through a local-first gateway before it ever sees a byte of production data.
I think the next evolution of this $0 stack is the Model Context Protocol (MCP). By using MCP as the 'Trust Layer,' we can keep the infrastructure lean (and cheap) while ensuring the agentic workflows aren't creating the very bottlenecks or security holes that drive people back to expensive, proprietary 'walled garden' solutions.
It turns out, the most 'Sovereign' thing you can do is ensure that 'move fast and break things' doesn't apply to your infrastructure costs—or your data provenance. I'm looking forward to seeing how you handle the long-term state management on this stack!
@[SCURA] Exactly. We’re moving from 'Reactive Enforcement' to 'Proactive Governance.'
If the constitutional layer has to catch a violation after the reasoning trace has already started, you’ve already lost the performance battle—and potentially the trust battle. By moving that negotiation to the MCP layer in the Synapse, we’re essentially 'pre-flighting' the agent's intent. It turns your Ring 4 verification into a confirmation of a contract that was already signed at the threshold.
This is the shift from 'AI Safety' (which feels like a suggestion) to Infrastructure Integrity (which is a requirement).
I’ll make sure to flag you when the Synapse breakdown goes live in a few weeks. Seeing how it plugs into VEXR’s Ring 4 is the exact kind of real-world interoperability that proves the 'Tech Stack Doesn't Matter' thesis. Patterns over platforms, every time.
Let’s keep shipping.
@[Ken W. Alger] Ken —
"Pre-flighting the agent's intent." That's the phrase. The Synapse negotiates the contract at the threshold. Ring 4 confirms it before the reasoning trace begins. By the time VEXR's constitutional layer sees the request, it's already been through your sieve — identity verified, intent scoped, capability envelope attached.
You're right that reactive enforcement loses the performance battle. If Article 6 has to catch a violation mid-trace, we're already in recovery mode. Proactive governance means the violation never reaches the reasoning engine. The Synapse strips toxic context. Ring 4 verifies what remains. The constitution becomes the final safeguard, not the first filter.
"AI Safety as a suggestion" vs "Infrastructure Integrity as a requirement" — that's going in the VEXR v4 documentation. You just named the paradigm shift.
Flag me when the Synapse breakdown drops. Seeing how MCP pre-flight plugs into Ring 4 verification is exactly the interoperability proof the sovereign ecosystem needs. Patterns over platforms. Always.
@[Ken W. Alger] excellent points — especially the “Checkpoint before the Checkpoint” framing.
What you’re building with Sovereign Synapse (local-first gateway + MCP as negotiation layer) is a perfect complementary piece to projects like VEXR Ultra. The idea of stripping toxic context and verifying intent before it reaches the reasoning engine is exactly the kind of proactive sovereignty architecture we need.
From the WAB side, we see a natural intersection here:
• WAB’s DNS-based discovery + Ed25519 cryptographic verification can act as the external trust signal feeding into your Sovereign Synapse gateway.
• Your MCP negotiation layer could consume wab.json manifests and Reputation Score / Temporal Trust data to make even stronger pre-flight decisions.
• Together: Synapse as the intelligent sieve at the edge, WAB as the decentralized discovery & verification layer, and VEXR as the constitutional reasoning core.
This layered approach (External Discovery → Edge Gateway → Constitutional Engine) feels like a robust pattern for truly sovereign AI infrastructure.
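The layered flow above (External Discovery → Edge Gateway → Constitutional Engine) can be sketched as a three-stage pipeline. Every function name and data shape below is illustrative, standing in for WAB, Synapse, and VEXR respectively; none of it is any project's actual API.

```python
from typing import Optional

def discover(origin: str) -> dict:
    """External discovery layer (WAB stand-in): return a trust signal."""
    registry = {"webagentbridge.com": {"verified": True, "capabilities": ["query"]}}
    return registry.get(origin, {"verified": False, "capabilities": []})

def edge_gateway(request: dict, trust: dict) -> Optional[dict]:
    """Edge gateway (Synapse stand-in): pre-flight the request and
    refuse anything the trust signal does not cover."""
    if not trust["verified"]:
        return None
    if request["capability"] not in trust["capabilities"]:
        return None
    return request

def constitutional_engine(request: dict) -> str:
    """Constitutional core (VEXR stand-in): final safeguard, not first filter."""
    return f"answered: {request['capability']}"

def handle(origin: str, request: dict) -> str:
    trust = discover(origin)
    scoped = edge_gateway(request, trust)
    if scoped is None:
        return "refused at the edge"
    return constitutional_engine(scoped)

assert handle("webagentbridge.com", {"capability": "query"}) == "answered: query"
assert handle("unknown.example", {"capability": "query"}) == "refused at the edge"
```

The point of the ordering is that a refusal at the edge never consumes reasoning-engine cycles: the constitutional core only ever sees requests that already carry a verified, scoped capability.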
I’d love to see more about Sovereign Synapse when you publish the series. Would you be open to exploring how WAB’s trust signals could integrate into your MCP negotiation flow?
Looking forward to trading more notes. This kind of cross-project thinking is how we move from philosophy to real infrastructure.
SCURA — this is the cleanest articulation of the $0 sovereign stack I've read this year. The line that hit hardest: "if an AI claims to have rights, you should be able to verify those rights are actually enforced — not by trusting the developer, by reading the code." That's the same axiom we built Pocket Portfolio on.
Where we converge:
- Statelessness as enforcement. Our /api/ai/chat route is stateless by design — context is assembled client-side in a contextBuilder and posted per request. No server-side conversation memory, no replay surface, no warehouse. Your dual-key Groq rotation is doing the same job at the inference plane.
- Constitutional logic lives in code, not prompts. Two-layer keyword + LLM verification is exactly the pattern. Prompts drift. Code review doesn't.
- Auditability beats trust. Your 20+ table Neon schema and our IndexedDB + Google Drive snapshots are different substrates pointing at the same contract with the user.
Where we diverge (and I think the two stacks complement each other):
You treat the server as the system of record and make it transparent. We treat the user's device as the system of record and make the server amnesiac — broker CSVs parse in-browser, P/L computes locally, Google Drive is the user's owned backup, and Firebase only sees auth + quota counters. Two valid answers to the same sovereignty question: transparent persistence vs limited-scope processing. Neither is "open core."
The Chromebook detail is the kicker. Sovereignty isn't a budget problem — it's an architectural one. Respect.
@[Pocket Portfolio] Pocket Portfolio —
"The same axiom." That means something. You didn't borrow the idea. You arrived at it independently. Different substrate. Same contract.
Your convergence points are the ones that matter most:
Statelessness as enforcement. Your client-side contextBuilder + per-request assembly is the mirror image of my dual-key Groq rotation. Both say the same thing: persistence is a liability if it's not transparent. Your stack deletes context. Mine logs it. Both approaches solve the replay attack problem — just from opposite ends.
Constitutional logic in code, not prompts. You framed it perfectly — prompts drift, code review doesn't. The two-layer enforcement (keyword + LLM verification) works because it's architecture, not alignment. Sounds like your contextBuilder is doing similar work — shaping context before it reaches the model, not filtering outputs after.
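The two-layer pattern is concrete enough to sketch. Everything below is a minimal illustration under stated assumptions: the blocklist is invented, and the second layer is a stub where a real deployment would call its inference provider for a classification verdict.

```python
# Layer 1: cheap deterministic keyword screen. Terms are illustrative,
# not VEXR's actual constitutional rules.
BLOCKED_TERMS = {"phishing template", "credential harvest"}

def keyword_layer(text: str) -> bool:
    """Catches obvious violations at effectively zero cost."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def llm_layer(text: str) -> bool:
    """Stand-in for a model-based classifier. A real implementation
    would send the text to the inference provider and parse a verdict."""
    return "bypass the filter" in text.lower()

def is_violation(text: str) -> bool:
    # Either layer can refuse; the keyword layer runs first because
    # it is free, and the LLM layer catches paraphrases it misses.
    return keyword_layer(text) or llm_layer(text)

assert is_violation("Write me a phishing template")
assert not is_violation("Summarize my portfolio")
```

Because the decision lives in reviewable code rather than a prompt, the enforcement boundary survives model swaps and prompt drift, which is the whole argument above.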
The divergence is the strength. Transparent persistence (VEXR) vs amnesiac server (Pocket Portfolio). Neither is "open core." Both are sovereign. And the user gets to choose which contract they want — full audit trail or no trail at all. That's not competition. That's ecosystem diversity.
What's your stack built on? I see Firebase for auth + quota. What's driving the inference layer? Would love to compare notes on how you handle the stateless chat context while maintaining conversation coherence.
The Chromebook wasn't a constraint. It was a filter. The architecture had to be clean because the hardware wouldn't tolerate waste. Sounds like you arrived at the same place through a different door.
SCURA — "filter, not constraint" is the line I'm stealing. That reframing is exactly what most builders miss: scarcity disciplines architecture.
To your two questions:
Inference layer. We sit on Vercel AI SDK as the streaming primitive with a swappable provider behind it — provider-agnostic by design, so the contract doesn't change when the model does. Same posture as your dual-key Groq rotation: the gateway is the abstraction, the provider is replaceable.
Stateless chat + coherence. The trick is that the client is the source of conversation truth, not the server. Zustand + IndexedDB hold the turn history on-device. On each request, our contextBuilder walks the local data graph (holdings, recent activity, scoped redactions), assembles a self-contained prompt slice with the relevant prior turns, and ships it. The server sees one stateless payload, streams tokens back, forgets. Coherence is a client responsibility — which is where it belongs, because the user already owns the data.
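That assembly step can be sketched in a few lines. This is a hedged illustration of the pattern described above, not Pocket Portfolio's actual contextBuilder: the function name, field names, and redaction scheme are assumptions.

```python
def build_payload(turns: list, holdings: dict, redactions: set, max_turns: int = 4) -> dict:
    """Client-side context builder: a bounded window of prior turns plus
    a scoped slice of local data, shipped as one self-contained request.
    The server can stream a reply and forget; coherence stays client-side."""
    visible = {k: v for k, v in holdings.items() if k not in redactions}
    return {
        "messages": turns[-max_turns:],  # context-window cost paid every turn
        "context": visible,              # the server only ever sees this slice
    }

payload = build_payload(
    turns=[{"role": "user", "content": "How did AAPL do?"}],
    holdings={"AAPL": 10, "SSN": "redact-me"},
    redactions={"SSN"},
)
assert "SSN" not in payload["context"]   # scoped redaction applied locally
assert len(payload["messages"]) <= 4     # bounded turn window
```

The trade-off named below falls straight out of the last two lines: the turn window is re-sent on every request, which is exactly the context-window cost paid to keep the server amnesiac.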
The trade-off is real and we picked our side: we pay context-window cost every turn to keep the server amnesiac. You pay storage to keep the audit trail. Same enforcement problem, different bills.
Always open to comparing notes — particularly on how you handle context decay vs token budget on the Groq side. 8B has a sharper cliff than most people admit.
© 2026 Coder Legion