Zettalane combines ZFS with object storage to deliver 8.14 GB/s throughput at 70% lower cost than AWS EFS. Terraform deployment included.
The Problem with Cloud File Storage
Cloud file storage costs don't make sense. AWS EFS costs $0.30 per GB per month. That's $360,000 annually for 100 TB. Block storage (EBS gp3) costs $0.08 per GB per month but lacks the shared file access that applications require.
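The arithmetic behind those figures is straightforward; a quick sketch (using decimal GB, as cloud billing does, and integer cents to avoid float rounding):

```go
package main

import "fmt"

// annualCostUSD computes a year of storage cost from a per-GB-month
// rate expressed in cents (integer math avoids float rounding).
func annualCostUSD(capacityGB, centsPerGBMonth int) int {
	return capacityGB * centsPerGBMonth * 12 / 100
}

func main() {
	const capacityGB = 100_000 // 100 TB in decimal GB
	fmt.Println("EFS:", annualCostUSD(capacityGB, 30), "USD/yr") // $0.30/GB-mo
	fmt.Println("gp3:", annualCostUSD(capacityGB, 8), "USD/yr")  // $0.08/GB-mo
}
```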
The performance issues run deeper than pricing. Most managed file services throttle per-client throughput even when aggregate bandwidth looks good, and advertised performance often scales with provisioned capacity, so you can't reach the headline numbers until you hold on the order of 100 TB.
What Zettalane Built
Supramani (Sam) Sammandam presented Zettalane's approach at the 66th IT Press Tour in January 2026. The company offers two products:
MayaNAS handles high-throughput file storage using a hybrid ZFS architecture. Metadata resides on NVMe SSDs, while bulk data is written directly to object storage (S3, GCS, Azure Blob). The system delivers 4 GB/s throughput per node with active-active HA.
MayaScale provides ultra-low-latency block storage using local NVMe SSDs via NVMe-over-Fabrics. It delivers 2.3 million IOPS with 129-microsecond latency on GCP.
The Technical Architecture
MayaNAS uses ZFS special VDEVs to separate metadata from data blocks. Files smaller than 128KB (config files, job parameters) stay on NVMe for low-latency access. Large sequential files go straight to object storage.
This is significant for workloads like AI training jobs. The job scheduler reads small config files from NVMe storage. When the job runs, it streams large training datasets directly from object storage at full bandwidth.
Traditional cloud file systems typically cache everything on NVMe, or stage data there before moving it to object storage. That extra hop leaves the object storage backend's 200 Gbps of bandwidth largely unused.
Zettalane wrote a Go program called objbacker.io that acts as a native ZFS VDEV for object storage. It uses vendor SDKs (AWS, GCP, Azure) to handle the object storage API calls. When ZFS gets a 1MB write from an NFS client, objbacker.io sends that full 1MB block directly to object storage. Smaller blocks go to the local NVMe VDEV.
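The write-path decision reduces to a size-based dispatch. A minimal, hypothetical model of that routing (objbacker.io's actual code is not public; the 128 KB and 1 MB figures come from the text above):

```go
package main

import "fmt"

// vdevTarget is where a ZFS block lands in this simplified model.
type vdevTarget string

const (
	nvmeSpecialVdev vdevTarget = "nvme-special-vdev" // metadata + small blocks
	objectStoreVdev vdevTarget = "object-store-vdev" // objbacker.io-backed
)

// routeBlock mimics special-small-blocks-style routing: blocks at or
// below the threshold stay on NVMe for low-latency access; full-size
// blocks go straight to object storage as whole objects.
func routeBlock(sizeBytes int) vdevTarget {
	const smallBlockThreshold = 128 * 1024 // 128 KB, per the article
	if sizeBytes <= smallBlockThreshold {
		return nvmeSpecialVdev
	}
	return objectStoreVdev
}

func main() {
	fmt.Println(routeBlock(4 * 1024))    // config file block -> NVMe
	fmt.Println(routeBlock(1024 * 1024)) // 1 MB NFS write -> object storage
}
```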
In testing on GCP with two n2-standard-48 instances (48 vCPU, 192GB RAM each), MayaNAS delivered:
- 8.14 GB/s concurrent read throughput (active-active HA, both nodes)
- 6.2 GB/s concurrent write throughput (direct I/O, 1MB blocks)
The test used 10 concurrent FIO jobs, 23GB files per job (230GB total to defeat cache), 300-second sustained runs, and NFS v4 with nconnect=16.
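The 230 GB working set is what keeps the read numbers honest: it exceeds a single node's 192 GB of RAM, so sustained reads cannot be served from page cache. A quick check of that sizing:

```go
package main

import "fmt"

// workingSetGB returns the total FIO dataset size for n jobs of
// fileGB each (figures from the test described above).
func workingSetGB(jobs, fileGB int) int { return jobs * fileGB }

func main() {
	const nodeRAMGB = 192 // n2-standard-48 memory
	total := workingSetGB(10, 23)
	fmt.Printf("working set %d GB > node RAM %d GB: %v\n",
		total, nodeRAMGB, total > nodeRAMGB)
}
```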
MayaScale performance on GCP using older Y2 family instances:
- 2.3 million IOPS (peak at QD64)
- 129-microsecond latency (at QD1)
- 19x faster than GCP PD Extreme
The Y2 instances cost less than newer Titanium SSD instances while delivering similar performance. Zettalane's Terraform modules automatically select cost-effective instance types.
Deployment
Both products can be deployed via Terraform in about two minutes. The Terraform module handles:
- Instance provisioning with optimal network settings
- VIP configuration (ENI on AWS, IP alias on GCP, custom routes on Azure)
- Object storage bucket creation and authentication
- ZFS pool creation and NFS export configuration
- Active-active HA setup with automatic failover
You run terraform apply and get a working NFS share; the mount command appears in the Terraform output.
Multi-Cloud Support
The same architecture runs on AWS, GCP, and Azure. Zettalane abstracts the networking differences:
| Component | AWS | Azure | Google Cloud |
|-----------|-----|-------|--------------|
| Instance | c5.xlarge | D4s_v4 | n2-standard-4 |
| Block Storage | EBS gp3 | Premium SSD | pd-ssd |
| Object Store | S3 | Blob Storage | GCS |
| VIP Migration | ENI attach | LB health probe + Custom route | IP alias |
| Deployment | CloudFormation | ARM Template | Terraform |
The Terraform modules use the same ZFS configuration and cluster setup across all three clouds.
MayaScale Architecture
MayaScale pools local NVMe SSDs across instances using either Linux MD RAID-1 or ZFS zpool mirroring. Both modes provide active-active HA with server-side synchronous replication.
The server-side approach matters. Competing solutions use client-side replication, where the client writes to both storage nodes and waits for ACKs from both. That doubles network traffic and puts mirror management logic on the client.
MayaScale clients write to the storage node once. The storage node handles mirroring internally. The client sees a single write and a single ACK. This cuts network traffic in half and eliminates client overhead.
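The difference between the two schemes can be sketched as a toy write path; names and structure here are illustrative, not MayaScale's actual implementation:

```go
package main

import "fmt"

type node struct{ data map[string][]byte }

func newNode() *node { return &node{data: map[string][]byte{}} }

// serverSideWrite: the client sends one copy; the primary node
// replicates to its mirror internally and returns a single ACK.
// Returns the number of copies crossing the client's network link.
func serverSideWrite(primary, mirror *node, key string, val []byte) (clientTransfers int) {
	primary.data[key] = val // one transfer over the client's link
	mirror.data[key] = val  // server-to-server, not on the client's link
	return 1
}

// clientSideWrite: the client writes to both nodes and waits for both
// ACKs, doubling traffic on its own link.
func clientSideWrite(a, b *node, key string, val []byte) (clientTransfers int) {
	a.data[key] = val
	b.data[key] = val
	return 2
}

func main() {
	p, m := newNode(), newNode()
	fmt.Println("server-side transfers per write:", serverSideWrite(p, m, "blk0", []byte("x")))
	fmt.Println("client-side transfers per write:", clientSideWrite(p, m, "blk0", []byte("x")))
}
```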
The system supports three protocols:
- NVMe-over-TCP (default)
- NVMe-over-RDMA (validated on cloud)
- iSCSI
Zettalane validated RDMA on public cloud infrastructure with 30-40 microsecond latency using InfiniBand. They expect to announce public RDMA support soon.
FSX Mode
MayaScale includes an FSX mode that uses ZFS zpool mirroring instead of MD RAID-1. This provides OpenZFS-based NFS file storage for clouds that don't offer AWS FSx equivalents (such as GCP and Azure).
ZFS mirroring resilvers only allocated data blocks, not entire disks, so rebuilding a degraded mirror is faster than an MD RAID-1 full-device resync. This makes it more efficient for file workloads.
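The rebuild-cost gap is easy to model; a simplified sketch with a hypothetical 8 TB disk holding 1 TB of data:

```go
package main

import "fmt"

// resilverBytes estimates how much data a mirror rebuild copies in this
// simplified model: MD RAID-1 resyncs the whole device, while ZFS
// resilvers only allocated blocks.
func resilverBytes(deviceBytes, allocatedBytes int64, zfs bool) int64 {
	if zfs {
		return allocatedBytes
	}
	return deviceBytes
}

func main() {
	const TB = int64(1) << 40
	device, allocated := 8*TB, 1*TB // hypothetical: 8 TB disk, 1 TB in use
	fmt.Println("MD RAID-1 resync:", resilverBytes(device, allocated, false)/TB, "TB")
	fmt.Println("ZFS resilver:", resilverBytes(device, allocated, true)/TB, "TB")
}
```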
PostgreSQL Performance
MayaScale delivered 96,000 TPS on PostgreSQL pgbench using server-side RAID-1 HA. That compares to 60,000-70,000 TPS for AWS Aurora.
Pricing Model
Zettalane charges per vCPU-hour with transparent pricing. No per-GB charges, no per-IOPS charges. Cloud infrastructure costs are separate and billed by the cloud provider.
The software runs in your private VPC with no call-home requirement. Zettalane doesn't monitor your deployment or collect telemetry.
What's Next
The roadmap includes:
- Kubernetes CSI driver for container persistent volumes
- Cloud-native Lustre with MayaNAS as MDS/OSS backend
- pNFS FlexFiles support for scale-out NFS
Available Now
Both products are live on AWS, GCP, and Azure marketplaces. You can deploy via the marketplace (CloudFormation or ARM templates) or directly with Terraform modules.
Zettalane targets developers and small-to-medium businesses that need cloud storage but can't justify the minimum 100 TB commitments required by most enterprise solutions. You can start with terabyte-scale deployments and scale up as needed.
The company was founded in 2018 and has a team of about 10 people. Sam previously created MAYASTOR, an early software-defined storage platform for iSCSI/FC SAN in 2007.