I'm trying to improve the system I use for my regular backups, using Amazon S3 as the storage platform. S3 (and Amazon's other cloud services) have two important advantages: first, availability, on the order of 99.99%; and second, the price, which is highly competitive. You pay for the space you use and the requests you make, and it still works out remarkably cheap.
Our backup is about 40 GB, and if the system works well I plan to use it for personal backups too (especially photos).
The basic requirement is that the synchronization runs automatically from a Linux server, so the system must allow unattended uploads.
Looking around, I found FuseOverAmazon, a FUSE-based system that lets you mount an S3 "bucket" as if it were a local drive, over which you can then run rsync. What more could you want? No sooner said than done, let's try it. In my case I use CentOS.
- yum install fuse fuse-devel curl-devel libxml2-devel
- wget http://s3fs.googlecode.com/files/s3fs-r191-source.tar.gz
- tar xvfz s3fs-r191-source.tar.gz
- cd s3fs
- make install
Let's try it.
- /usr/bin/s3fs nombrebucket -o accessKeyId=TUACCESSKEYID -o secretAccessKey=TUSECRETKEY /mnt/s3
If all went well, you will have your bucket "nombrebucket" mounted on /mnt/s3, and you can list, copy, delete files, etc., as if it were a drive on your computer. So far, so good. All that's left is to synchronize the backup:
- /usr/bin/rsync -avz --delete /usr1 /mnt/s3
And here is where the problem comes in. In my case it has been running for 4 days and still hasn't gone past 10% of the synchronization. Everything works, but it is extremely slow. I don't know whether I'm doing something wrong or whether this is normal, but as it stands it is unusable.
Since that idea hasn't worked out, we have a plan B: s3sync, a Ruby script that makes the process very simple. You only need to configure your access credentials and run:
- s3sync -r /mnt/backup nombrebucket:prefix
Where "prefix" can be empty.
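Since the requirement is an unattended sync, one way to automate it is a small wrapper script driven by cron. Everything here is an assumption on my part: the install path of s3sync, the log file, and the use of the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables for the credentials (s3sync can also take them from its configuration):

```shell
#!/bin/sh
# Hypothetical nightly-sync wrapper for cron; paths and key values
# are placeholders, adjust them to your installation.
AWS_ACCESS_KEY_ID="TUACCESSKEYID"
AWS_SECRET_ACCESS_KEY="TUSECRETKEY"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY

S3SYNC=/usr/local/s3sync/s3sync.rb
if [ -x "$S3SYNC" ]; then
    # Mirror the local backup tree into nombrebucket under "prefix"
    "$S3SYNC" -r /mnt/backup nombrebucket:prefix >> /var/log/s3sync.log 2>&1
else
    echo "s3sync not found at $S3SYNC (adjust the path)"
fi
```

A crontab entry such as `30 2 * * * /usr/local/bin/s3sync-backup.sh` would then run it every night at 02:30.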
This will upload your backup to nombrebucket/prefix/. So far the results are much more satisfactory than with s3fs; the speed can be considered more than adequate, especially compared to the previous option.
As I said, for now I'm still testing performance and speed, but I'm not entirely convinced, so I'm thinking of using Amazon EC2 instead of S3: launching a virtual machine instance and running a classic rsync against a real filesystem. The advantage is that I can start the virtual machine only when I need it and shut it down afterwards, so an hour a day could be enough; remember that Amazon EC2 charges, among other things, for the time the instance runs. Additionally, after the backup I could do a dump from EC2 to S3, but in our case the 40 GB would be a limitation that would raise the price considerably, even using weekly rotations.
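To sanity-check the "an hour a day could be enough" idea, here is a back-of-the-envelope comparison. The rates are assumptions for illustration only, not authoritative Amazon pricing:

```shell
# Illustrative cost comparison; the per-hour and per-GB rates are
# assumed values, not current Amazon pricing.
EC2_HOUR_CENTS=10        # assumed ~$0.10 per small-instance hour
S3_GB_MONTH_CENTS=15     # assumed ~$0.15 per GB-month of storage
echo "EC2, 1 h/day for 30 days: $(( EC2_HOUR_CENTS * 30 )) cents/month"
echo "S3 storage for 40 GB:     $(( S3_GB_MONTH_CENTS * 40 )) cents/month"
```

Under these assumed rates, the occasional EC2 instance comes out cheaper than keeping the 40 GB permanently in S3, though data-transfer charges are left out of this rough calculation.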
I'll keep you posted.