
Regatta Storage recently launched!
Regatta was founded by Hunter Leath, who has spent the past nine years building cloud storage products on the frontier of what's possible, leading both Amazon's Elastic File System product and Netflix's cloud storage team. During that time, he worked with thousands of application developers, cloud teams, and software providers to deeply understand how they use storage.
Hunter found that AI and analytics teams love storing their data in S3 because it's low-cost and scalable, but their applications can only use that data once it's in a local file system. As a result, these teams are constantly moving data back and forth between their instances' local file systems and S3. For larger data sets, applications can wait hours for data to download and decompress before they can even start. And unlike S3, local file systems have a fixed capacity that you can't easily change after creation, they don't support sharing data, and they're 10x more expensive.
Regatta transforms your existing S3 buckets into a new kind of infinite, local file system. With Regatta, your applications can instantly use petabyte-scale data sets in S3 without waiting for manual downloads. Regatta automatically scales with your data, and you only pay for the data you're actively using – 90% cheaper than using EBS. With Regatta's shared, high-speed cache, all your instances and containers can take advantage of 30x faster, consistent reads and writes. When you're done editing data, it automatically flows back to S3 within a few minutes, so you can easily share results with teammates. Regatta has full POSIX compatibility (including renames, appends, and locking), so it's already compatible with your applications – no code changes needed. Teams move faster with Regatta because they don't have to build custom data movement infrastructure.
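To make the "no code changes" point concrete, here is a minimal sketch of what an application might look like when its working directory is a Regatta-backed path, using only standard-library file I/O. The mount point and file names are hypothetical, and this assumes the S3 bucket has already been attached as a file system (setup not shown):

```python
import fcntl
import os

# Hypothetical mount point: a Regatta file system backed by an existing S3
# bucket. The actual path and setup depend on your environment.
DATA_DIR = "/mnt/regatta/my-bucket"

# Read a data set with ordinary file I/O -- no S3 SDK calls, no download step.
with open(os.path.join(DATA_DIR, "datasets/train.csv"), "rb") as f:
    header = f.readline()

# Append results as they are produced; appends are part of the POSIX surface
# described above, and the data flows back to the S3 bucket afterwards.
with open(os.path.join(DATA_DIR, "results/metrics.log"), "a") as f:
    f.write("epoch=1 loss=0.42\n")

# Rename to publish an artifact under its final name, then take a shared
# advisory lock while reading it -- also standard POSIX operations.
os.rename(
    os.path.join(DATA_DIR, "checkpoints/model.tmp"),
    os.path.join(DATA_DIR, "checkpoints/model.pt"),
)
with open(os.path.join(DATA_DIR, "checkpoints/model.pt"), "rb") as f:
    fcntl.flock(f, fcntl.LOCK_SH)
    first_kb = f.read(1024)
    fcntl.flock(f, fcntl.LOCK_UN)
```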
Check out how easy Regatta is to set up and use.
The Regatta team is excited about how the product will help teams simplify their data management. They are currently working with AI teams who want to start model inference without waiting for downloads to complete, researchers who want to experiment on models and create new versions without running out of disk capacity, analytics teams who want to speed up their batch pipelines, and cloud teams who want lower-cost local storage without worrying about utilization.

Let's build the future of the cloud together.