06-17-2014 06:10 PM
Do any admins out there run their own scripted backups? Red Hat Linux with GFS2 (a shared file system) is a popular grid cluster configuration, but some volume backup solutions such as Tivoli can get flaky above 7 TB, and restores can take days. My assumption is that relatively few files are actually critical and require backup/restore, so I am proposing a scheduled script that selects files by age, size, location and ownership and copies them to and from slow, cheap storage off the primary disk.
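To make the idea concrete, here is a minimal sketch of that kind of policy-driven copy. The paths, group name and thresholds below are placeholders for illustration (a real run would point SRC at the primary GFS2 mount and DST at the cheap disk); the demo builds its own temporary tree so it can be run safely anywhere.

```shell
#!/bin/bash
# Sketch of an age-driven selective copy to cheap disk.
# SRC/DST are temp stand-ins here; a real policy would also filter on
# -size, -user/-group and path, per the criteria in the post.
SRC=$(mktemp -d)   # stand-in for the primary file system
DST=$(mktemp -d)   # stand-in for the cheap backup disk

# Demo data: one recent file and one file aged past the cutoff.
mkdir -p "$SRC/proj"
echo "keep me" > "$SRC/proj/recent.dat"
echo "skip me" > "$SRC/proj/old.dat"
touch -d "30 days ago" "$SRC/proj/old.dat"

# Select files modified in the last 7 days and copy, preserving layout.
find "$SRC" -type f -mtime -7 -print0 |
while IFS= read -r -d '' f; do
    rel=${f#"$SRC"/}
    mkdir -p "$DST/$(dirname "$rel")"
    cp -p "$f" "$DST/$rel"
done

ls -R "$DST"
```

Run from cron on whatever schedule fits; adding `-size +1M -group somegroup` style tests to the `find` line covers the size and ownership criteria.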
Several years ago I had a system that used old 200 GB backup cartridges; the users liked it and it was cheap. After filling the cartridges, they kept one at their desks and sent the other offsite. I am trying to design something similar using cheap disk.
06-18-2014 10:19 AM
Look into the program rsync. To get a sense of its capabilities, do a Google search for it.
06-18-2014 11:50 AM
Having a DR (Disaster Recovery) plan and being able to underpin data retention policies are often mandatory. The bad thing is that I have not seen SAS recognize that.
It requires good cooperation with the OS, since that is where it should be solved. A metadata server backup is not a real backup; it is better described as an offloaded copy.
In your grid environment you have some shared or duplicated data. When it is fully clustered and spread over different locations, you can say DR is implemented, but you will still need versions to be backed up and covered by retention.
The standard approach today is incremental backups. The disadvantage is very long restoration times. That policy has become a dogma, but it is not the only technical option.
Those guys just do not tell everything.