BCP (Business Continuity Planning) is a hot topic. The technology surrounding BCP and DR (Disaster Recovery) is becoming more mature and more affordable, especially thanks to cloud services. The downside of the ordinary style of BCP, though, is that you must copy your backup data to a separate DR site, which creates a lot of overhead. In this article, I'll introduce a next-generation approach to BCP using IzumoFS.
Usually when we talk about BCP, we think of things like tape backup, cloud backup, dedicated backup storage, or data replication using storage functionality. The final choice depends on cost, RPO (Recovery Point Objective), RTO (Recovery Time Objective), and security policy. The point is that it all comes down to one idea: "We need to have a backup at the DR site."
When you think about it, though, the data and infrastructure at the DR site are nothing but waste unless a disaster actually strikes. To shrink this waste, many products have come up with clever ideas such as many-to-one backup or in-line deduplication.
These solutions have a long history; they are stable and trusted. But they all share inescapable downsides rooted in their architecture: all users must access the main site; the DR site is useful only during a disaster; asynchronous backup or replication inflates RPO and RTO; and restore operations and backup data management are a mess.
IzumoFS is distributed storage software: it can place nodes in separate locations and manage them as a single cluster. This changes the way we handle BCP dramatically. Below is what a BCP structure built with IzumoFS looks like.
With this design, users are completely freed from the compromises forced on them by an ordinary BCP strategy.
By placing a node close to where users actually work, the latency between user and storage is kept to a true minimum. In the ordinary approach, every user has to access the main site and always suffers from the data's "gravity."
One cool feature IzumoFS offers is caching. By the nature of distributed storage, even when a user accesses the closest node, the requested data might not exist on that machine (this depends on the number of nodes and the redundancy configuration). In that case the data is retrieved from another node, which may be at a different location. But once retrieved, the data is cached on the node the user accessed.
So as time goes by, frequently needed data automatically balances itself onto the appropriate nodes, which makes for a very pleasant user experience.
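To make the idea concrete, here is a minimal sketch of read-through caching in a distributed store. This is not IzumoFS's actual implementation or API; the `Node` class, node names, and data are all hypothetical, and real systems add cache eviction and invalidation on top of this.

```python
# Illustrative sketch of read-through caching between storage nodes.
# NOT IzumoFS's real API -- all names here are hypothetical.

class Node:
    def __init__(self, name, data=None):
        self.name = name
        self.local = dict(data or {})   # data stored on this node
        self.cache = {}                 # data cached after remote reads
        self.peers = []                 # other nodes in the cluster

    def read(self, key):
        # 1. Fast path: serve from local storage or cache.
        if key in self.local:
            return self.local[key]
        if key in self.cache:
            return self.cache[key]
        # 2. Otherwise fetch from a peer that holds the data...
        for peer in self.peers:
            if key in peer.local:
                value = peer.local[key]
                # 3. ...and cache it locally so later reads stay local.
                self.cache[key] = value
                return value
        raise KeyError(key)

tokyo = Node("tokyo", {"report.doc": b"..."})
osaka = Node("osaka")
osaka.peers = [tokyo]

osaka.read("report.doc")              # first read is fetched from tokyo
assert "report.doc" in osaka.cache    # later reads are served locally
```

The key design point is step 3: the cost of crossing sites is paid only on the first access, after which the data has effectively migrated to where it is used.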
With IzumoFS there is no difference between a node at the main site and a node at the DR site. Adding more backup nodes simply makes the cluster better. The DR site is no longer a waste.
IzumoFS is designed to be pure P2P. Every node is equal, so at the time of disaster users can simply access a node at a location that the disaster did not reach.
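Because every node is equal, failover can live entirely on the client side: try your preferred node, and if it is unreachable, fall through to any surviving one. The sketch below assumes a hypothetical `Node` model and is only an illustration of that pattern, not IzumoFS's real client logic.

```python
# Illustrative client-side failover in a P2P cluster of equal nodes.
# Node names and the Node class are hypothetical.

class Node:
    def __init__(self, name, data, alive=True):
        self.name, self.data, self.alive = name, data, alive

    def read(self, key):
        if not self.alive:
            raise ConnectionError(f"{self.name} is unreachable")
        return self.data[key]

def read_with_failover(nodes, key):
    """Try nodes in order of preference; any surviving node can serve reads."""
    for node in nodes:
        try:
            return node.read(key)
        except ConnectionError:
            continue   # this node (or its whole site) is down -- try the next
    raise RuntimeError("no reachable node in the cluster")

# The main site is hit by a disaster; the DR node serves the same data.
main = Node("main-site", {"doc": "v1"}, alive=False)
dr = Node("dr-site", {"doc": "v1"})
assert read_with_failover([main, dr], "doc") == "v1"
```

Contrast this with a primary/standby design, where the standby must first be promoted before it can serve anyone.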
How many people in your company fully understand the data restore procedure? Honestly, such procedures are hard to master even in a huge enterprise. Especially when you run a special storage system designed just for backup, you need an engineer with experience in that particular system.
IzumoFS, on the other hand, has no special restore procedure. If a site is destroyed by a disaster and you need to restore data, you just add or replace a node with a new one, exactly as you would in daily operation.
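The reason "restore" and "daily operation" can be the same action is that in a replicated cluster, recovery is just re-replication: once an empty node joins, data is copied onto it until the redundancy target is met again. The sketch below is a hypothetical model of that idea (the `REPLICAS` target and the dict-per-node representation are assumptions, not IzumoFS internals).

```python
# Illustrative sketch: recovering from a lost site is just adding an empty
# node and letting the cluster re-replicate. Hypothetical model only.

REPLICAS = 2   # assumed redundancy target: every block on 2 nodes

def rebalance(cluster, replicas=REPLICAS):
    """Copy blocks onto under-replicated nodes until the target is met."""
    blocks = {b for node in cluster for b in node}
    for block in blocks:
        holders = [node for node in cluster if block in node]
        spares = [node for node in cluster if block not in node]
        for node in spares[: replicas - len(holders)]:
            node[block] = holders[0][block]   # copy from a surviving holder

# One node (site) was lost in a disaster; we plug in an empty replacement.
survivor = {"a": b"1", "b": b"2"}
replacement = {}
rebalance([survivor, replacement])
assert replacement == survivor   # redundancy restored, no special procedure
```

No backup catalog, no tape robot, no vendor-specific restore wizard: the same join-and-rebalance path used for routine node replacement also handles disaster recovery.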
Also, Copy-On-Write snapshots ordinarily can't be used as DR backups, because if the main data is lost, the snapshot data is lost with it. IzumoFS never loses the main data, because every copy of the data is equal, which makes snapshots a very effective option for backing data up.
Snapshots in IzumoFS can even be version controlled, offering a very intuitive way to manage your backup data.
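Version-controlled snapshots mean keeping an ordered history of labeled point-in-time states that you can list and roll back to. The sketch below shows that general idea; the `Volume` class and its methods are invented for illustration and do not mirror IzumoFS's actual snapshot interface (which, per the previous paragraph, also keeps the snapshot data redundant across nodes rather than in one place).

```python
# Illustrative sketch of version-controlled snapshots: an ordered history of
# labeled states that can be rolled back to. Hypothetical interface.

import copy

class Volume:
    def __init__(self):
        self.files = {}
        self.snapshots = []   # ordered history of (label, state) pairs

    def snapshot(self, label):
        """Record the current state under a human-readable label."""
        self.snapshots.append((label, copy.deepcopy(self.files)))

    def rollback(self, label):
        """Restore the volume to a previously recorded state."""
        for name, state in self.snapshots:
            if name == label:
                self.files = copy.deepcopy(state)
                return
        raise KeyError(label)

vol = Volume()
vol.files["report"] = "draft"
vol.snapshot("before-edit")
vol.files["report"] = "corrupted"
vol.rollback("before-edit")
assert vol.files["report"] == "draft"
```

The labeled-history model is what makes snapshot management intuitive: recovering data becomes "pick a version," not "locate and mount last Tuesday's backup set."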
BCP is usually treated as a "next step" bolted onto the main storage. IzumoFS builds that next step into the minimum required structure. If you need even more protection or redundancy, we offer cloud backup as well.
Some of you might be wondering about write policy and data consistency. IzumoFS lets you configure those settings, and I'll cover them in another post. Stay tuned.