Yeah, this is to back up the site. Basically, we've got internal backups on the server, and that stuff gets rsync'd offsite nightly. I need cheap, always-on storage with a fat enough upstream pipe to restore everything in a reasonable timeframe. I occasionally download backups to a server here at the house, but it would literally take me days to upload all of that if I needed to restore everything.
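For reference, the nightly offsite piece is just plain rsync over SSH. A minimal sketch, with a made-up host and paths:

    # Nightly cron job; hostname and paths are hypothetical.
    # rsync over SSH runs one multiplexed connection with delta transfer,
    # so only changed data crosses the wire.
    rsync -az --delete /var/backups/ backup@offsite.example.com:/srv/site-backups/

That single-connection, delta-transfer behavior is exactly what I wanted to keep with whatever cloud storage I picked.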
So yeah, this is just disaster recovery. A plane crashes into the data center, our hosting company quits paying the bills and gets locked out, the RAID card goes bonkers and corrupts all the disks, an earthquake lops SoCal off the map, etc. Or, the most likely scenario: some script kiddie gets root and rm -rf's everything.
The more I looked into AWS, and S3 specifically, the more it looked like it sucked. You could use S3FS to mount the cloud storage and copy to it, but because rsync saw the mount as local storage, it couldn't use its SSH transport, and every inode lookup went across the wire as a separate request, one per file, so it was slow. You didn't get any multiplexing.
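To make that concrete, here's roughly what the S3FS approach looks like (bucket name and mount point are hypothetical; this is a sketch of the idea, not a tuned config):

    # Mount the bucket as a FUSE filesystem via s3fs-fuse.
    s3fs my-backup-bucket /mnt/s3 -o use_cache=/tmp/s3fs
    # rsync now sees /mnt/s3 as local disk: no SSH transport, no delta
    # protocol, and every stat()/readdir during the file-list scan turns
    # into its own HTTP round trip to S3, one file at a time.
    rsync -av /var/backups/ /mnt/s3/backups/

With lots of small files, those per-file round trips dominate, which is why the whole thing crawls.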