Why Cloud Storage Needs to Be Reinvented

Article by: Nelson Nahum, co-founder and CTO, Zadara Storage

When people talk about cloud storage, they usually refer to storage that is accessible from outside the cloud, used to store files, pictures, and backups. Sometimes referred to as object storage, a good example of this type of cloud storage is Amazon S3. In this article, I will be focusing on a different type of cloud storage – the storage needed by cloud servers to run their applications. This storage is accessed from inside the cloud and used by the cloud servers to mount their filesystems and databases. In Amazon AWS terminology, this is called EBS (Elastic Block Store). A MySQL database and a Microsoft NTFS filesystem are good examples of applications that use a block storage device rather than object storage.
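The difference between the two access models can be sketched with a toy example: block storage exposes a device that applications read and write in place at byte offsets, while object storage replaces whole objects per request. The sketch below uses ordinary local files to stand in for a block device and an object store – the function and file names are illustrative, not any real cloud API:

```python
import os

# Block semantics: random writes at byte offsets, in place,
# the way a database updates pages on a mounted EBS volume.
def block_write(device_path, offset, data):
    # "r+b" lets us overwrite bytes without truncating the rest of the "device"
    with open(device_path, "r+b") as dev:
        dev.seek(offset)
        dev.write(data)

# Object semantics: each PUT replaces the whole object,
# the way a backup is uploaded to S3.
def object_put(store_dir, key, data):
    with open(os.path.join(store_dir, key), "wb") as obj:
        obj.write(data)

# Demo: a 4 KiB "device" and a one-object "store"
with open("device.img", "wb") as f:
    f.write(b"\0" * 4096)
block_write("device.img", 512, b"hello")       # in-place update of one region
object_put(".", "backup.bin", b"full object")  # wholesale replacement
```

Filesystems and databases depend on the first model; that is why they run on EBS-style block volumes, not on S3.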

Cloud providers are implementing different techniques and products to supply their block storage needs, each with its own advantages, but overall these techniques fall short of meeting even the minimal requirements for enterprise-class applications.

The attributes that are important to enterprise customers are:

1 – Performance

2 – Predictability of performance (does my performance vary when other customers use the cloud?)

3 – High availability (what happens if the server or storage fails – can I access my data?)

4 – Control (can I control the level of protection, caching used, or type of drives used?)

5 – Security (can people outside my organization read my data?)

6 – Dynamic expansion / elasticity (is it easy to add more storage to my cloud servers?)

7 – Portability (can I use the storage in the cloud without rewriting my applications?)

8 – Features (do I have storage features such as remote mirroring, snapshots, and thin provisioning that my storage in the datacenter has?)

Current storage products were designed to be single-tenant – that is, for a single customer. When used in a multitenant environment like the cloud, they encounter many limitations. The same storage box that has all the right capabilities in the datacenter, when used in the cloud, loses most of them.

A good example is putting SAN storage or a NAS scale-out system as the back-end storage of a public cloud. Due to the lack of multitenancy features, the management of the SAN/NAS box cannot be given to any user, but must be done by the cloud provider. The drives, CPU, and memory are shared among many customers, so performance becomes inconsistent and hard to predict. Another limitation is the lack of "shared storage" in the cloud: any clustered application that requires shared storage, like Oracle RAC, Linux HA, or MS Failover Cluster, cannot run in the cloud.

Finally, SAN arrays or NAS storage can be used for Disaster Recovery in the datacenter by providing remote mirroring capabilities. But DR strategies require the user to control the storage so they can effectively run DR tests and decide how to fail over between sites. The same storage product, when used in the cloud, lacks all these capabilities, as there is no way to give each customer management of the storage so they can test their DR. Beyond that, there is no way to consistently mirror or snapshot multiple volumes of a single user at the same time when those volumes are mounted to different cloud servers.

For all these reasons, and many others, storage products need to be reinvented to be multitenant in order to be used in the cloud while providing the same functionality as enterprise storage in the datacenter. Multitenant means that multiple users can use the system without losing the capabilities they had in the single-tenant environment: one user cannot affect the performance of the others, and each user has full control and security over their own storage, as they have today with classical SAN/NAS storage arrays in the datacenter.

This is the gap we saw in cloud storage when we founded Zadara Storage over a year ago, and our solution was to create a new storage architecture, reinvented for the cloud.

Nelson Nahum, co-founder and CTO, Zadara Storage

Nelson has over 20 years of experience in the storage industry in multiple storage software development positions. He is known for creating innovative products and successfully bringing them to market. Prior to co-founding Zadara Storage, he was a Fellow and Vice President of Software Engineering at LSI Corporation, where he was responsible for an engineering team of over 250 people. Previously he was CTO and co-founder of StoreAge Networking Technologies, which was acquired by LSI in 2006. At StoreAge, he invented the out-of-band storage virtualization system, building and leading the engineering team to a successful product that was adopted by HP and led to the acquisition of StoreAge by LSI. Nelson holds multiple patents related to storage systems. Nelson has a B.Sc. in Electrical Engineering from the Technion – Israel Institute of Technology.
