Six Invariants You Can't Break: Engineering Deniable Encryption in a Browser

Originally published at dev.to

Engineering VeraCrypt-Style Hidden Volumes in a Browser: Argon2id, AES-256-GCM, and the Invariants That Actually Matter

Zero-knowledge encryption keeps your notes unreadable. Plausible deniability keeps the existence of your notes unprovable. This is a walkthrough of how we built the second property into a browser-only notepad — what the primitives are, where the sharp edges are, and which invariants you cannot afford to break.

TL;DR

Flowvault is a browser notepad with no accounts. A URL slug plus a passphrase decrypts one notebook. The twist: the same URL can hold up to 64 independent notebooks, each with its own passphrase, all packed into a single fixed-size ciphertext blob. An attacker who forces you to reveal one passphrase cannot prove the others exist. The server stores an opaque blob and has no way to tell either.

This is the VeraCrypt hidden-volume model, translated to a browser where you don't have a filesystem, you don't have a raw disk, and every byte you serve is observable by the server operator. The cryptography is textbook; all the bugs live in the engineering discipline of "never leak structure."

The threat model, stated precisely

Most "end-to-end encrypted" products solve:

The server operator cannot read user content.

That's necessary but not sufficient. We also care about:

  1. Compromised storage. An attacker has the ciphertext plus all server-side metadata. Can they decrypt without the passphrase? Can they tell how many notebooks the blob contains?
  2. Compelled disclosure. You are forced — physically, legally, at a border, under subpoena — to reveal a passphrase. Can you reveal one notebook and keep the others deniable?
  3. Traffic metadata. The server sees read and write requests. Can request shape reveal which notebook you're using?

Property (2) is what people mean by plausible deniability. It's what separates VeraCrypt-style systems from everything else, and it's a threat model that standard ZKE products like Bitwarden Notes, Standard Notes, or ProtectedText do not attempt to solve — not because they're careless, but because solving it constrains every decision downstream.

The primitives

Nothing exotic. The interesting part is how they fit together.

  • KDF: Argon2id, 64 MiB memory, 3 iterations, 1 lane.
  • Cipher: AES-256-GCM with a 128-bit tag, explicitly pinned.
  • Salt: 16 bytes random per blob (not per notebook — more on that below).
  • Nonce: 12 bytes random per notebook per save.
  • Key expansion: HKDF-SHA-256 from the Argon2id output, producing per-slot subkeys with domain-separation labels.
  • Blob: fixed size (128 KiB default), filled with cryptographically random bytes at creation.

A few notes on why:

Argon2id is memory-hard and side-channel resistant, standardised in RFC 9106. 64 MiB at 3 iterations is aggressive enough to hurt offline GPU brute force without breaking mobile Safari. We measured ~800 ms on a mid-range Android and ~300 ms on a modern laptop — at the top of what we're willing to ask a user to wait.

AES-256-GCM is not a stylistic choice. If you use a malleable cipher like raw AES-CBC, an attacker with write access to the blob (which, in our model, includes the server) can flip bits in the ciphertext and you cannot detect it. Unauthenticated encryption in a remote-storage design is a bug, not a trade-off.

HKDF expansion is how we avoid using the Argon2id output directly as an AES key for multiple slots. Argon2id gives us one high-entropy secret per passphrase; HKDF gives us labelled subkeys from it. Domain separation is cheap; getting it wrong leaks everything.

The blob layout

Here is the part that actually implements deniability. The blob is a fixed-size byte array from creation. We never grow or shrink it. Content that's too big is compressed; content that's small leaves the rest of the blob as the random bytes we wrote at init.

┌───────────────────────────────────────────────────────────────┐
│ 16 B  global salt (random, written once at blob creation)     │
├───────────────────────────────────────────────────────────────┤
│  Slot 0  ─  fixed size, AES-GCM(nonce ‖ ciphertext ‖ tag)     │
│  Slot 1  ─  fixed size, AES-GCM(nonce ‖ ciphertext ‖ tag)     │
│  ...                                                          │
│  Slot N  ─  fixed size, AES-GCM(nonce ‖ ciphertext ‖ tag)     │
├───────────────────────────────────────────────────────────────┤
│ Random padding up to total blob size                          │
└───────────────────────────────────────────────────────────────┘

Every slot is the same size. Every byte outside a used slot is random. And the ciphertext of a used slot is indistinguishable from random bytes to anyone without the key — AES-GCM's CTR-mode output and tag are pseudorandom (IND$-CPA, a stronger guarantee than plain IND-CPA).

That last sentence is the whole trick. An observer with the blob sees a fixed-size array of random-looking bytes and cannot tell whether slot k holds real content or unwritten padding. The count of used slots is hidden behind the cipher's indistinguishability property.
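The offset arithmetic behind that layout is worth seeing in code. This is a sketch: the 16-byte salt and 12-byte nonce come from the post, but SLOT_SIZE, MAX_SLOTS, and the helper name readSlot are illustrative placeholders, not Flowvault's actual constants.

```typescript
const SALT_LEN = 16;    // global salt at the front of the blob
const NONCE_LEN = 12;   // per-slot GCM nonce
const TAG_LEN = 16;     // 128-bit GCM tag
const SLOT_SIZE = 2048; // hypothetical fixed slot size
const MAX_SLOTS = 64;

// Every slot lives at a fixed offset. Nothing about the blob tells you
// whether the bytes at that offset are a real ciphertext or init-time
// random padding; max plaintext per slot = SLOT_SIZE - NONCE_LEN - TAG_LEN.
function readSlot(blob: Uint8Array, i: number) {
  const start = SALT_LEN + i * SLOT_SIZE;
  const nonce = blob.subarray(start, start + NONCE_LEN);
  const body = blob.subarray(start + NONCE_LEN, start + SLOT_SIZE); // ciphertext ‖ tag
  return { nonce, body };
}
```

Note that the parser is pure arithmetic — there is no header to consult, which is exactly the point.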

Unlock: finding your slot without revealing slot structure

This is where naive designs leak.

The wrong way: prefix each slot with a hash of the passphrase so the client can find its slot quickly. This destroys deniability — an attacker with the blob can tell you how many non-empty slots exist, because only real slots have a valid hash prefix.

The right way is trial decryption across every slot:

// Assumed helpers: argon2id (WASM binding), hkdf (HKDF-SHA-256 expand),
// readSlot (fixed-offset slot parser), aesGcmDecrypt (returns null on
// tag-verification failure). MAX_SLOTS = 64.
async function unlock(blob: Uint8Array, passphrase: string) {
  // One expensive KDF per attempt; the salt is global to the blob.
  const salt = blob.subarray(0, 16);
  const masterKey = await argon2id(passphrase, salt, {
    memMiB: 64, iters: 3, lanes: 1, outLen: 32,
  });

  // Trial-decrypt every slot with its domain-separated subkey.
  for (let i = 0; i < MAX_SLOTS; i++) {
    const slot = readSlot(blob, i);
    const slotKey = await hkdf(masterKey, `slot-${i}`, 32);
    const pt = await aesGcmDecrypt(slotKey, slot.nonce, slot.body);
    if (pt !== null) return { index: i, plaintext: pt };
  }
  return null; // wrong passphrase, or no slot for it
}

aesGcmDecrypt returns null on tag-verification failure. With a 128-bit tag, the false-positive probability across 64 slots is ~2⁻¹²². Not a concern.

The client runs Argon2id once, then performs up to 64 symmetric AES-GCM attempts at microseconds each, so total unlock time is dominated entirely by the KDF. And critically: a wrong passphrase always tries and misses all N slots, so an attacker running the same code learns nothing from the pattern of misses — by construction, every passphrase they guess fails identically against every slot.

The invariants you cannot break

This is where most hidden-volume attempts fail. Listed bluntly:

  1. Fixed blob size, always. If the blob size depends on how many slots are populated, you have leaked slot count. Number-one failure mode. Allocate full size at creation, never resize.
  2. Empty slots pre-filled with random bytes at creation. new Uint8Array(size) gives you zeros. Zeros are a giveaway. Use crypto.getRandomValues before any slot is ever written.
  3. No per-slot metadata outside the ciphertext. No slot-count field, no "in use" bitmap, no plaintext length prefix. Every piece of structure lives inside an encrypted slot or is identical for every slot whether used or not.
  4. Authenticated encryption, no exceptions. AES-GCM with a pinned 128-bit tag. Never roll your own "AES + HMAC later."
  5. Every save rewrites the whole blob. If you only rewrite the slot you changed, you've leaked which slot you changed. Full re-upload on every write is a bandwidth cost you pay for metadata-level confidentiality.
  6. Server stores bytes, not structure. No parsing, no version checks, no "obviously corrupt" rejections server-side. Any semantic check is a side channel.

Break any of these and the deniability claim weakens from "cryptographically indistinguishable" to "probably fine, unless someone looks carefully."
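Invariant 5 in code: a minimal sketch of the structural half of the save path, which swaps one slot's sealed bytes into a copy of the blob and returns the whole thing for re-upload. The helper name writeSlot and the constants are hypothetical; sealedSlot is assumed to be nonce ‖ ciphertext ‖ tag, already padded to the full slot size before encryption.

```typescript
const SALT_LEN = 16;
const SLOT_SIZE = 2048; // illustrative fixed slot size

// Invariant 5: a save replaces one slot's bytes but always re-serializes,
// and re-uploads, the entire fixed-size blob. Invariant 1 falls out for
// free: the returned blob is exactly the same length as the input.
function writeSlot(
  blob: Uint8Array,
  i: number,
  sealedSlot: Uint8Array,
): Uint8Array {
  if (sealedSlot.length !== SLOT_SIZE) {
    throw new Error("sealed slot must be exactly SLOT_SIZE bytes");
  }
  const next = blob.slice(); // copy; never mutate the stored blob in place
  next.set(sealedSlot, SALT_LEN + i * SLOT_SIZE);
  return next; // upload all of it, not a byte range
}
```

The bandwidth cost of the full re-upload is the price of hiding which slot changed; a byte-range PUT would hand the server a free write-pattern oracle.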

Things that surprised us during implementation

A few that aren't in the design doc:

  • WebCrypto's AES-GCM tag length varies by default. We pin tagLength: 128 explicitly. Don't trust defaults.
  • Argon2id in the browser is WASM, not WebCrypto. Pick an audited library, and test cold-start on mobile Safari — that's the worst case.
  • crypto.getRandomValues has a 65,536-byte per-call limit. For larger blobs, chunk the calls. Silent truncation would be catastrophic.
  • Compression before encryption leaks length. For free-form notes this is mostly fine; for short high-entropy secrets (recovery phrases), either skip compression or pad to a fixed length.
  • Plaintext in memory during edit is unavoidable. Plaintext in a service-worker cache is not. Audit every surface where decrypted state could escape the tab.

When this is the wrong design

Plausible deniability is expensive and most products don't need it. Skip it if:

  • You need multi-user editing. Hidden volumes don't compose with CRDTs or OT in any clean way.
  • You need server-side search. That requires either leaking metadata or building searchable encryption — its own research project.
  • Your users can't tolerate a ~500 ms unlock. Argon2id is the floor; anything cheaper weakens the offline brute-force story.
  • Your threat model is just "the server shouldn't read my notes." Any standard ZKE design solves that. Save yourself the complexity.

If none of those apply, hidden volumes in a browser are tractable. The crypto is boring; the discipline of never leaking structure is where the work is.

Hidden volumes are the hardest thing in Flowvault, but they're not the only cryptographic construction in the repo. Three others are worth knowing about if you're reading the code:

  • Time-locked notes. Identity-based encryption against the drand randomness beacon via tlock. The sender encrypts to a future round number; the decryption key literally does not exist yet, and is only derivable once drand publishes that round. Even the sender cannot decrypt before the target date. (deep dive)
  • Trusted handover. A beneficiary with a separate passphrase can take over a vault if the owner stops checking in for a configurable interval. Implemented as a two-layer envelope: the beneficiary's passphrase never decrypts the vault directly, but unwraps an inner key only after the server-side inactivity condition is met. No account required for either party. (deep dive)
  • Bring Your Own Storage. Same hidden-volume format, same Argon2id + AES-GCM, but the blob lives on the user's disk as a single .flowvault file via the File System Access API. The server never sees the ciphertext. Useful for threat models where even opaque-blob-at-rest on a third-party server is too much. (deep dive)

Each of these is its own post-sized engineering story. The common thread is the same as hidden volumes: you can build strong guarantees in a browser, but every design decision has to be audited against the question "what does this leak to the server, the network, or an observer with the blob?"

Source

Flowvault is MIT-licensed. Frontend, Cloud Functions, and Firestore security rules live in one repo:


About the author — I'm a senior engineer at Flowdesk. Previously four years shipping production cryptography at FlowCrypt (OpenPGP for email — iOS and Chrome extension). I write about privacy-first web systems on the Flowvault blog.
