As we consider which cloud and data strategies are best for an enterprise, we know that one size doesn’t fit all. Indeed, there are a number of cloud adoption challenges – not least, with backup and retrieval.
That’s because backup and archiving applications are the number one consumers of storage capacity (IDC 2015 Storage Manager Survey). It is not uncommon for backup data to be 100 times larger than the primary data, mostly due to long-term retention of backups – with seven years being common – and the lack of deduplication capability on tape.
Problems with Tape
There are other inherent problems with tape. Tape is inefficient, unreliable and manpower-intensive. Something as innocent as moisture can spoil tapes, finding the right tape can be an issue, and searching for data stored on tape is tricky. So, would you rather store your songs on audio tapes, on an mp3 device, or just stream them from Spotify? Very few will choose audio tape, even though it is the cheapest option. It’s no different for enterprises. The very same reasons one would choose an mp3 device or Spotify over tapes are the reasons enterprises move their backup data from tape to disk-based solutions or the cloud.
Cloud over Tape
Many enterprises are evaluating the cloud as a replacement for their tapes. This means that backup applications still run on-premise – close to the primary data – but long-term backup copies are stored in the cloud instead of on tape. Most common backup applications, such as Veritas NetBackup, CommVault, and Veeam, have integrations with the major cloud providers and can easily put long-term copies in the cloud.
However, this approach of having the backup application put data in the cloud carries four risks one needs to be aware of:
- Overburdening the backup application
Enterprises buy backup applications to complete backups within a given period of time – called the backup window – so that primary data is protected. That’s their main purpose. But if you need to bring your data back on-premise, perhaps due to a change in regulations or a merger or acquisition, the backup application will be tasked with moving the data from the cloud to on-premise. This additional burden can leave the backup application unable to do its main job: finishing backups within the window. Many customers face this problem.
- Cloud vendor lock-in
The second problem is cloud vendor lock-in, which bites when you want to move data from one cloud to another. You can ask a backup application to move your data from Cloud A to Cloud B, but the planning required and the performance of the backup application will limit the move, and it won’t be a small problem to solve. You may have to run the backup application on additional cores (to give it the hardware boost it needs) and buy more compute servers for the transition period. For some customers I worked with, this transition took 6–9 months.
The way most backup applications move data from Cloud A to Cloud B is to recall it from Cloud A into a staging area, which is typically somewhere on-premise, because that is where the backup application is running. In other words, the backup application requires a temporary staging facility next to itself.
Then the data is moved from the staging area to Cloud B. For the customers I worked with, this meant buying additional storage capacity just to move their data around; the cloud providers do not provide a temporary staging facility on their side.
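The staging-area pattern described above can be sketched in a few lines. This is a toy simulation – plain dicts stand in for cloud buckets, and `move_via_staging` and the object names are illustrative, not any vendor’s API – but it shows why the on-premise staging capacity has to match the data being moved:

```python
# Toy simulation of a backup application moving long-term copies between
# clouds: objects are recalled from Cloud A into an on-premise staging
# area, then uploaded to Cloud B. Dicts stand in for cloud buckets.

def move_via_staging(cloud_a: dict, cloud_b: dict) -> int:
    """Move every object from cloud_a to cloud_b through a local staging
    area; return the peak staging capacity required, in bytes."""
    staging = {}
    # Step 1: recall everything from Cloud A to on-premise staging.
    for key, data in cloud_a.items():
        staging[key] = data
    peak_staging_bytes = sum(len(d) for d in staging.values())
    # Step 2: upload from staging to Cloud B, freeing each staged copy.
    for key in list(staging):
        cloud_b[key] = staging.pop(key)
    cloud_a.clear()  # retire the copies left in Cloud A
    return peak_staging_bytes

cloud_a = {"backup-2019.img": b"x" * 1000, "backup-2020.img": b"y" * 2000}
cloud_b = {}
peak = move_via_staging(cloud_a, cloud_b)
# peak equals the total size moved: that is the extra on-premise
# capacity the customer had to buy just to shuffle data between clouds.
```

A pass-through broker avoids exactly this: nothing is staged on-premise, so `peak` would be zero.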
- Data Security
When you move your data to the cloud, you have to make sure it is encrypted before it leaves. Many customers told me that their cloud provider encrypts the data. But how many decryption keys does the provider hold, and who else has access to them?
We recommend that before data is moved to the cloud, you encrypt it on-premise: then you hold your own encryption key.
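A minimal sketch of that recommendation, assuming a generic encrypt-before-upload flow: the key is generated and kept on-premise, and only ciphertext ever leaves for the cloud. The keystream cipher below is a toy for illustration only – a real deployment would use AES-256 from a vetted cryptography library:

```python
import hashlib
import secrets

# Toy illustration of client-side ("encrypt before upload") protection.
# The key never leaves the premises; the cloud only ever sees ciphertext.
# This SHA-256 keystream cipher is NOT production crypto -- use AES-256
# from a vetted library in any real deployment.

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256-derived keystream (toy cipher);
    applying it twice with the same key restores the original."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = secrets.token_bytes(32)              # generated and kept on-premise
backup = b"long-term backup copy"
ciphertext = keystream_xor(key, backup)    # this is what goes to the cloud
restored = keystream_xor(key, ciphertext)  # decrypted on recall, on-premise
```

The point is ownership: because encryption happens before transfer, no provider-held key can open your data.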
- Network Cost
In many countries, one of the biggest problems in moving to the cloud is the network cost of transferring all that data, which can be prohibitive for some customers. This is why compression is important. So, when I talk to customers I ask them: “Are you able to compress your data before it gets moved to the cloud, so that you can adopt cloud at a smaller network bite?”
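As a rough illustration of how much compression can shave off the transfer – using Python’s stdlib gzip as a stand-in; actual backup products apply their own compression and/or deduplication before data crosses the network:

```python
import gzip

# Sketch of on-premise compression before upload. Backup data (logs,
# repeated full copies) is often highly repetitive, so generic
# compression alone can cut the network transfer substantially.

backup = b"2020-01-01 INFO nightly backup completed\n" * 1000
compressed = gzip.compress(backup, compresslevel=6)

# Only the compressed stream leaves the premises.
ratio = len(compressed) / len(backup)
restored = gzip.decompress(compressed)  # lossless: recall restores exact bytes
```

For repetitive data like the above, the compressed stream is a small fraction of the original – a smaller network bite.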
Cloud Broker Software
A cloud-broking capability on an object storage platform can address all four problems above at once: it moves and/or copies data for different applications to different clouds, while the backup application gets on with its job of completing backups efficiently.
That’s what Hitachi Content Platform (HCP) does, by encrypting and directing data to your choice of public cloud services; with support for Amazon Web Services (AWS), Microsoft® Azure®, Google Cloud, Alibaba Cloud and other S3-enabled services.
Cloud data management is offloaded to the object storage in the background, without the backup application needing to be involved. The cloud broker can pull data back from a cloud without impacting the performance of the backup application. More importantly, it is a pass-through, so it doesn’t need any temporary space or staging area on-premise. The data is also encrypted and compressed, giving you a smaller network bite.
It’s a bit like a car valet service: they take care of the cars and deliver them when and where they’re needed, while you have a nice dinner in the restaurant. This is why cloud broker software can be like your new best friend.
By Pratyush Khare, Chief Technology Officer APAC, Hitachi Vantara