When I run wire for a new network drop, I usually run extra to start with, and then I leave a slack loop of, say, 1-3 ft.
When sysadmins get sticker shock at the prices for enterprise-level backup software, they are missing some of the higher-level features. DR was meant to be bulletproof first, fast second. Veeam calls theirs WAN Acceleration, or something similar. Especially in the case of OS snapshots, most of the actual data at the block level is the same from snap to snap, so why bother storing every snap in its entirety? The same applies to restoring; it goes both ways. Most of the enterprise-level DR/backup packages run some type of rsync-ish delta algorithm on the fly, along with deduplication. The biggest question I really have is how others deal with having to transmit terabytes of data repeatedly. I found these articles which had a few decent suggestions:
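The block-level idea above can be sketched in a few lines: split each snapshot into blocks, hash them, and store a block only if its hash hasn't been seen before. This is a minimal illustration of the concept, not how Veeam or any other product actually implements it (real products typically use variable-length chunking, compression, and persistent indexes); the fixed 4 KiB block size and the in-memory `store` dict are assumptions for the sketch.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size; real products often use variable-length chunks

def dedupe_store(snapshot: bytes, store: dict) -> list:
    """Split a snapshot into fixed-size blocks, store each unique block
    once (keyed by its SHA-256 hash), and return the list of hashes
    ("recipe") needed to reassemble the snapshot later."""
    recipe = []
    for off in range(0, len(snapshot), BLOCK_SIZE):
        block = snapshot[off:off + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)  # only previously unseen blocks consume space
        recipe.append(h)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Reassemble a snapshot from its block hashes."""
    return b"".join(store[h] for h in recipe)
```

Two OS snapshots that share most of their blocks end up sharing storage: only the blocks that actually differ get stored twice, which is also why only the changed blocks need to cross the WAN.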
You should have a staggered (full/incremental) backup policy of some sort. What type of backup solution are you using? Surely there is de-duplication and minimization of data that can be done to conserve space, at the very least. That is a metric ton of data, and frankly I have to wonder why most of it is not in some sort of cold storage outside of the infrastructure being backed up. I'm honestly not sure what you were getting at in your last few lines, but realistically we're going to need a bit more information to make any solid suggestions: what your environment looks like, what you have available or have the budget to implement, and what the sensitivities and nuances of this data are (for example, can the data be compressed? Does it need to be encrypted at rest? In transit?).
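To make the "staggered full/incremental" idea concrete, here is a toy sketch of such a policy: a full backup once a week, and on the other days an incremental that only picks up files modified since the last full. The weekly cadence, the mtime-based change detection, and the function names are all assumptions for illustration; real backup tools track change sets far more robustly (archive bits, snapshots, journals).

```python
import os

def plan_backup(day_of_week: int) -> str:
    """Toy staggered policy: full backup on day 0 (e.g. Sunday),
    incrementals on the other six days."""
    return "full" if day_of_week == 0 else "incremental"

def changed_since(root: str, last_full_ts: float) -> list:
    """Walk a tree and return files modified after the last full
    backup's timestamp -- the candidate set for an incremental run."""
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_full_ts:
                changed.append(path)
    return changed
```

The point of staggering is that six days out of seven you copy (and transmit) only the changed files rather than the whole data set, which is what keeps both the storage footprint and the WAN traffic sane.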