We've been building with AI agents for a while now. And at some point, something broke.
Not the code. Not the LLM. The accountability.
Two agents interacted. Something went wrong downstream. And when we tried to understand what happened — who decided what, in which order, based on which input — we hit a wall.
The logs were there. The traces were there. But none of it was provable. Anyone could have modified those logs. The traces weren't signed. And when you're in a compliance-sensitive context, "trust me, here's the log" isn't good enough.
We looked around. Observability tools are great at showing you what happened. But showing isn't proving.
So we started thinking about what "proof" actually means between agents. And then a second question followed immediately : proof that lasts how long ?
Because here's the thing nobody mentions when they talk about signing and verification — the cryptographic guarantees you're building today may not hold in five years. Quantum computing isn't science fiction anymore. And if your audit trail can be broken retroactively, it wasn't really a proof. It was a delay.
That's why we went further than Ed25519. We added Dilithium3 — a post-quantum signature scheme — so that what gets recorded today remains verifiable even in a post-quantum world. Not because we expect attacks tomorrow. Because an audit trail that expires isn't an audit trail.
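To make the hybrid idea concrete, here's a minimal sketch of a dual-signature envelope. Python's standard library has neither Ed25519 nor Dilithium3, so HMAC-SHA256 keys stand in for both schemes (a deliberate simplification, not the real cryptography); the point is the structure: every record carries two independent signatures, and verification requires both, so the record stays verifiable as long as at least one scheme remains unbroken.

```python
import hmac
import hashlib

# Stand-in keys: in a real system these would be an Ed25519 keypair and an
# ML-DSA (Dilithium3) keypair from proper libraries. Names are illustrative.
CLASSICAL_KEY = b"ed25519-stand-in-key"
PQ_KEY = b"dilithium3-stand-in-key"

def hybrid_sign(message: bytes) -> dict:
    """Sign the same message with both schemes and keep both signatures."""
    return {
        "msg": message,
        "sig_classical": hmac.new(CLASSICAL_KEY, message, hashlib.sha256).hexdigest(),
        "sig_pq": hmac.new(PQ_KEY, message, hashlib.sha256).hexdigest(),
    }

def hybrid_verify(envelope: dict) -> bool:
    """Accept only if BOTH signatures check out: a forgery under a broken
    classical scheme still fails the post-quantum check, and vice versa."""
    msg = envelope["msg"]
    ok_classical = hmac.compare_digest(
        envelope["sig_classical"],
        hmac.new(CLASSICAL_KEY, msg, hashlib.sha256).hexdigest())
    ok_pq = hmac.compare_digest(
        envelope["sig_pq"],
        hmac.new(PQ_KEY, msg, hashlib.sha256).hexdigest())
    return ok_classical and ok_pq
```

The design choice worth noticing is the AND: hybrid schemes that accept either signature gain availability but not longevity, while requiring both means the audit record only expires when both schemes fall.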
We built something around all of that. Co-signed handshakes. Hash-chained memory. Verifiable sessions that don't depend on your infrastructure being intact or trustworthy. And signatures designed to outlast the threat landscape we're walking into.
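Hash-chained memory is, at its core, the same trick a ledger uses: each record commits to the hash of the one before it, so editing or reordering any past record breaks every link after it. Here's a minimal stdlib-only sketch (the record shape and field names are illustrative, not an actual wire format):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record's predecessor

def append_record(chain: list, event: str) -> None:
    """Append an event, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })

def verify_chain(chain: list) -> bool:
    """Recompute every link from scratch; any edited, dropped, or
    reordered record invalidates the chain from that point on."""
    prev_hash = GENESIS
    for record in chain:
        body = json.dumps({"event": record["event"], "prev": prev_hash},
                          sort_keys=True)
        expected = hashlib.sha256(body.encode()).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True
```

On its own a hash chain only proves internal consistency; it's the signatures over the chain head that anchor it to an identity, which is why the two mechanisms travel together.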
It's opinionated. It's not for every use case. But it answers the question we couldn't answer before:
"Can you prove what happened between your agents — to someone who wasn't there, on a system they don't control, five years from now?"
We think that question matters more than people realise right now. Especially as agents stop being demos and start touching real decisions.
Has anyone else hit this wall — and approached it differently? Curious how others are thinking about long-term verifiability in multi-agent systems.
References
- https://dev.to/piqrypt/-your-ai-agents-are-talking-but-can-you-prove-what-they-said-5a1f
- github.com/PiQrypt/piqrypt
- AISS spec — Agent Identity and Signature Standard
- PCP — Proof of Continuity Protocol
- NIST Post-Quantum Cryptography Standards — Dilithium3 (ML-DSA)