AuriStor Tackles Distributed File System Performance With Protocol-Level Improvements
Most developers don't think much about distributed file systems until they hit a wall. You're moving files across continents, dealing with spotty connections, or trying to sync data for thousands of compute nodes. That's when you realize the infrastructure matters.
AuriStor spent the past decade rebuilding the Andrew File System. They forked from OpenAFS in 2012 and focused on what they call "paying off technical debt." The result is AuriStorFS, a commercial product that prioritizes performance and security over backward compatibility.
The company's approach is technical, not trendy. They identified bottlenecks in the RX remote procedure call protocol and systematically addressed them. Senior engineer Simon Wilkinson presented results showing a nearly 10x performance improvement: the time to transfer 10 GB of data dropped from 437.5 seconds to under 50 seconds using multiple threads.
Protocol improvements that matter
RX is a UDP-based RPC protocol. It's not new or exciting, but it works. AuriStor's team analyzed every aspect of how packets move through the system and found ways to optimize.
They increased the default window size from 32 packets to 8,192 packets. A larger window means more data in flight at once. On a 1ms round-trip link, this bumps theoretical throughput from 362 Mbit/sec to 92 Gbit/sec.
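The arithmetic is plain bandwidth-delay math: a sender can have at most one window of data in flight per round trip. Here's a quick sketch, assuming roughly 1,414 bytes of RX payload per packet (the figure that reproduces the article's numbers; the actual payload per packet depends on the MTU):

```go
package main

import "fmt"

// Theoretical RX throughput is bounded by one window of data per round trip:
// throughput = windowPackets * payloadBytes * 8 / rtt.
func throughputBitsPerSec(windowPackets, payloadBytes int, rttSec float64) float64 {
	return float64(windowPackets*payloadBytes*8) / rttSec
}

func main() {
	const payload = 1414 // assumed RX payload per packet, chosen to match the quoted figures
	const rtt = 0.001    // 1 ms round trip

	fmt.Printf("32-packet window:   %.0f Mbit/s\n",
		throughputBitsPerSec(32, payload, rtt)/1e6) // ~362 Mbit/s
	fmt.Printf("8192-packet window: %.1f Gbit/s\n",
		throughputBitsPerSec(8192, payload, rtt)/1e9) // ~92.7 Gbit/s
}
```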
Window size alone doesn't solve everything. The team implemented RFC-compliant congestion control algorithms, including New Reno and SACK-based loss recovery. They added RACK-TLP for better tail loss detection. These aren't radical ideas—they're proven TCP techniques adapted for RX.
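As a rough illustration of what those algorithms do, here is a minimal New Reno-style congestion window. It's a sketch of the additive-increase/multiplicative-decrease shape only, not AuriStor's implementation; real stacks also track SACK scoreboards and RACK-TLP timers:

```go
package main

import "fmt"

// A minimal New Reno-style congestion window, measured in packets.
type congestionWindow struct {
	cwnd     float64 // current window, in packets
	ssthresh float64 // slow-start threshold
}

// onAck grows the window: exponentially in slow start, by roughly one
// packet per round trip in congestion avoidance.
func (c *congestionWindow) onAck() {
	if c.cwnd < c.ssthresh {
		c.cwnd++ // slow start: +1 per ACK doubles cwnd each RTT
	} else {
		c.cwnd += 1 / c.cwnd // congestion avoidance: ~+1 per RTT
	}
}

// onLoss halves the window, the multiplicative-decrease step.
func (c *congestionWindow) onLoss() {
	c.ssthresh = c.cwnd / 2
	c.cwnd = c.ssthresh
}

func main() {
	c := &congestionWindow{cwnd: 1, ssthresh: 64}
	for i := 0; i < 200; i++ {
		c.onAck()
	}
	fmt.Printf("after 200 ACKs: cwnd=%.1f\n", c.cwnd)
	c.onLoss()
	fmt.Printf("after a loss:   cwnd=%.1f\n", c.cwnd)
}
```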
Path MTU discovery now works properly. The system starts with a conservative 1,200-byte MTU and probes upward. If your network supports jumbo frames, AuriStor uses them. This cuts packet overhead significantly.
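The probing logic can be pictured as a search between the conservative floor and the interface maximum. A hypothetical sketch, assuming a 9,000-byte jumbo-frame ceiling and a callback that reports whether a probe of a given size made it through:

```go
package main

import "fmt"

// probeMTU sketches upward path MTU discovery: start at a conservative
// floor and binary-search toward the interface maximum, keeping the
// largest probe size the far end acknowledges.
func probeMTU(pathSupports func(size int) bool) int {
	const floor = 1200 // conservative starting MTU
	const ceil = 9000  // assumed jumbo-frame ceiling

	good, lo, hi := floor, floor+1, ceil
	for lo <= hi {
		mid := (lo + hi) / 2
		if pathSupports(mid) { // probe acknowledged: try larger
			good, lo = mid, mid+1
		} else { // probe lost: try smaller
			hi = mid - 1
		}
	}
	return good
}

func main() {
	// Simulate a path that actually carries 8,500-byte datagrams.
	mtu := probeMTU(func(size int) bool { return size <= 8500 })
	fmt.Println("discovered path MTU:", mtu) // 8500
}
```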
Real-world performance gains
One government research organization moved its AFS deployment to the cloud and measured the results. Copying a 1GB file from local disk to AFS took 3 minutes and 11 seconds with OpenAFS servers. With AuriStor's 2021 release, it dropped to 1 minute. The latest release with path MTU discovery and proportional rate reduction cut it to 30 seconds.
These improvements scale. Production AuriStor file servers handle over 500,000 simultaneous connections from 40,000 cache managers. One deployment manages 1.7 million volumes, with shutdown times reduced from 30 minutes to 4 seconds.
A large financial institution (operating under NDA) runs 80 cells with 300 servers serving 175,000 clients. They deploy software updates continuously—new releases push out every few minutes. The infrastructure handles 1.5 million volumes across 180 to 200 regional cells globally, distributed across multiple cloud providers.
Security without the theater
The security model is straightforward. AuriStorFS uses GSS-API with Kerberos v5 for authentication and AES-256 for encryption. The YFS-RxGK security class supports combined identity authentication—both user and machine identities are verified together.
This matters for organizations that need to control where data gets accessed. You're not just "you"—you're "you on your phone," "you on your company laptop," "you on your personal laptop." You can grant permissions to a user only when they're on a specific machine.
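One way to picture combined identity is as a grant keyed on a (user, machine) pair, where neither half alone matches. The types and principal names below are hypothetical; the real YFS-RxGK tokens carry Kerberos/GSS-API credentials:

```go
package main

import "fmt"

// A combined identity binds a user principal to a machine principal,
// so a grant can name both; neither alone is sufficient.
type identity struct {
	user    string
	machine string
}

// grants maps a protected path to the combined identities allowed to read it.
var grants = map[string][]identity{
	"/afs/example.com/payroll": {
		{user: "alice@EXAMPLE.COM", machine: "host/laptop01.example.com"},
	},
}

func allowed(path string, who identity) bool {
	for _, g := range grants[path] {
		if g == who {
			return true
		}
	}
	return false
}

func main() {
	onLaptop := identity{"alice@EXAMPLE.COM", "host/laptop01.example.com"}
	onPhone := identity{"alice@EXAMPLE.COM", "host/phone07.example.com"}
	fmt.Println(allowed("/afs/example.com/payroll", onLaptop)) // true
	fmt.Println(allowed("/afs/example.com/payroll", onPhone))  // false
}
```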
Volume-level security policies let administrators enforce encryption requirements. Volumes can only move to servers with equal or stronger security policies. File servers can require specific authentication levels before serving data. Maximum access control lists prevent users from accidentally exposing data, even if they set overly permissive permissions on individual files.
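The maximum-ACL idea reduces to a bitwise intersection: whatever a user grants gets clamped against the volume's ceiling. A simplified sketch with made-up right bits, not the actual AFS permission encoding:

```go
package main

import "fmt"

// AFS-style rights as a bitmask. A volume's maximum ACL is intersected
// with whatever a user grants, so an over-permissive per-directory ACL
// can never exceed the ceiling.
type rights uint8

const (
	read rights = 1 << iota
	lookup
	insert
	deleteRight
	write
	lock
	administer
)

// effective clamps granted rights to the volume's maximum ACL.
func effective(granted, volumeMax rights) rights {
	return granted & volumeMax
}

func main() {
	granted := read | lookup | write | administer // user set "all" by mistake
	volumeMax := read | lookup                    // volume policy: read-only exposure

	eff := effective(granted, volumeMax)
	fmt.Printf("granted=%07b max=%07b effective=%07b\n", granted, volumeMax, eff)
	// effective keeps only read and lookup
}
```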
The perpetual license model
AuriStor's pricing is unusual. The base cost is $21,000 per year per cell, covering up to 4 servers and 1,000 user or machine IDs. Additional servers cost $1,000 to $2,500 each. There's no charge based on storage capacity or data volume. As Jeffrey Altman, founder and CEO, put it during the presentation to the 64th IT Press Tour, "We give you the first 100 exabytes for free."
The license is perpetual. Stop paying and you keep using the version you licensed. You just don't get updates or support. Security patches continue for at least two years after each release.
For developers working on large-scale deployments, this model is predictable. Your costs don't explode when you add storage.
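Under the numbers quoted above, the annual bill is a flat function of server count. A back-of-the-envelope sketch (it covers servers only; per-ID pricing beyond the included 1,000 isn't specified in the presentation):

```go
package main

import "fmt"

// annualCostUSD estimates the yearly cost of one cell: a flat base
// covering 4 servers plus a flat per-extra-server fee. Storage volume
// never enters the formula.
func annualCostUSD(servers int, extraServerFee float64) float64 {
	const base = 21000.0 // per cell per year, includes 4 servers
	const includedServers = 4

	extra := 0
	if servers > includedServers {
		extra = servers - includedServers
	}
	return base + float64(extra)*extraServerFee
}

func main() {
	// A 10-server cell at the quoted $1,000-$2,500 per-server range:
	fmt.Printf("low:  $%.0f/yr\n", annualCostUSD(10, 1000)) // $27,000
	fmt.Printf("high: $%.0f/yr\n", annualCostUSD(10, 2500)) // $36,000
}
```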
Container integration happening now
AuriStor built a CSI driver for Kubernetes that works with Red Hat OpenShift. Their large financial customer is using it to transition 180,000 systems to containers without losing the benefits of AFS.
The challenge: container images for machine learning applications start at 40 gigabytes. You don't need all that data to run the application—only a fraction gets executed. Copying 40 gigabytes to expensive compute instances wastes time and money.
AuriStorFS caches only the regions of files that applications actually access. The CSI driver lets containers mount AFS volumes directly. Applications execute binaries from the AFS namespace instead of packaging them into container images.
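The mechanism is easiest to see as plain byte-range I/O: fetch only the region being touched. Here's a sketch of the idea from an application's point of view, with a hypothetical path inside an AFS mount (this is ordinary file I/O, not the CSI driver's API):

```go
package main

import (
	"fmt"
	"os"
)

// readRegion pulls only the byte range an application actually touches,
// the same idea behind caching file regions instead of whole files:
// launching a binary from a 40 GB image may fault in only a few pages.
func readRegion(path string, offset int64, length int) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	buf := make([]byte, length)
	n, err := f.ReadAt(buf, offset) // fetch just this region
	if err != nil && n == 0 {
		return nil, err
	}
	return buf[:n], nil
}

func main() {
	// Hypothetical path inside an AFS mount exposed by the CSI driver.
	data, err := readRegion("/afs/example.com/tools/model.bin", 1<<20, 4096)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("read %d bytes without touching the rest of the file\n", len(data))
}
```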
One customer has software distribution trees that are hundreds of gigabytes. With updates every few minutes, pushing new container images to 50,000 clients is impossible. AuriStorFS replicates only the incremental changes.
The Linux kernel's built-in AFS client (kafs) supports AuriStorFS protocols and ships in Fedora, Ubuntu, and Debian, as well as RHEL 9.2 and later. The night before the presentation, the team successfully booted a Linux system from AFS for the first time using mainline kernel tools.
Technical trade-offs
AuriStorFS doesn't yet support server-side byte-range locks, which makes it unsuitable for databases or virtual machine images. That's on the roadmap but won't arrive before 2026.
The system works best for read-heavy workloads with occasional writes. Think software distribution, research data, and home directories. It handles files up to 16 exabytes, though real-world deployments regularly work with files in the 5 TB to 50 TB range.
Mixed deployments with OpenAFS work but lose some features. You can run both, but you're limited to the lowest common denominator.
Small team, focused mission
AuriStor has a core team of five developers, the same people who worked on OpenAFS before the fork. They're augmented by about five part-time contractors with specialized skills.
The company is sub-$2 million in annual revenue but profitable. Altman didn't take a salary from 2007 to 2016. The company received SBIR grants from the Department of Energy, sponsored by SLAC and Fermilab.
More than 50% of their engineering effort goes into network performance. As Altman said, "If we can't make the network fast, it doesn't matter what anything else does."
Walled-off source code
AuriStor keeps its source code private. Customers can license access to the repositories and test infrastructure, and they can participate in development. If AuriStor goes out of business, source licensees gain the right to create public forks.
This is a deliberate choice after years of watching OpenAFS struggle with the free-rider problem: organizations would use the software but not fund development. The trade-off is commercial sustainability versus open collaboration. AuriStor chose sustainability.
For shops considering distributed file systems, AuriStorFS offers measurable improvements over OpenAFS. The question is whether the licensing model and closed source approach fit your organization. If you need global data access with predictable costs and don't want storage-based pricing, it's worth evaluating.