Does client have a cost-effective retrieval algorithm?

Discussion room for Glacier Backup package
I'm New!
Posts: 2
Joined: Sat Nov 05, 2016 7:38 am

Does client have a cost-effective retrieval algorithm?

Post by Tigerman67 » Thu Aug 03, 2017 10:03 pm

Kicked off some backup tasks to Glacier using the Synology Glacier client.
So far so good, they are grinding away.
I checked with Amazon and was surprised by the '# of requests' upload charges, since I wasn't really aware of them, so I wanted to make sure I understood retrieval costs in case I ever need them.
If done poorly, it could be enormously expensive.
I found one example (probably not current pricing) which said that if you tried to retrieve a single 2TB file in a single request, the cost would be $4,746.26, of which $0.01 would be request fees. If you had 2 million files totaling 2TB and retrieved them across 42 jobs that did not run concurrently, the cost would be $471.92 (of which $110 would be request fees).

So the algorithm used for retrieval can obviously be optimized to lower costs. You only have so much bandwidth anyway, so spreading requests out to minimize cost seems to be in your best interest if you ever have to do a large restore.
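The legacy pricing in that example was driven by your peak hourly retrieval rate, which is exactly why spreading the same data across sequential jobs is so much cheaper. A rough sketch of that model (the 4-hour job window, the $0.01/GB rate, and ignoring the free-tier allowance are all assumptions for illustration, so the numbers won't exactly match the example above):

```python
# Sketch of the legacy Glacier retrieval-pricing model, where the bill
# was driven by your PEAK hourly retrieval rate for the month.
# Rates and the 4-hour job window are assumptions for illustration.

HOURS_PER_MONTH = 720      # billing multiplier used by the old model
RATE_PER_GB = 0.01         # assumed $/GB applied to the peak hourly rate
JOB_DURATION_HOURS = 4     # Glacier jobs historically took roughly 4 hours

def peak_rate_cost(total_gb, concurrent_jobs, total_jobs):
    """Cost of retrieving total_gb split evenly across total_jobs,
    with at most concurrent_jobs running at once."""
    gb_per_job = total_gb / total_jobs
    peak_gb_per_hour = concurrent_jobs * gb_per_job / JOB_DURATION_HOURS
    return peak_gb_per_hour * RATE_PER_GB * HOURS_PER_MONTH

# One giant request: all 2 TB lands in a single job's peak hour.
print(f"single job: ${peak_rate_cost(2048, 1, 1):,.2f}")
# 42 sequential jobs: the peak hourly rate is 42x smaller.
print(f"42 jobs:    ${peak_rate_cost(2048, 1, 42):,.2f}")
```

The takeaway is that under peak-rate billing the cost scales with how fast you pull, not just how much, so any client that throttles and serializes retrieval jobs saves real money.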

Does the Synology client optimize those retrieval requests, and if not, does one of the third-party clients provide that feature? Failing that, maybe write a custom job to optimize retrieval into S3 and download back to the Synology from there?
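For the custom-job route, the scheduling piece is mostly just batching: split the archive inventory into groups and only request one group at a time. A minimal sketch (the batch size is an arbitrary assumption, and the archive IDs are placeholders; a real script would issue the actual retrieval requests for each batch and wait for it to finish before starting the next):

```python
# Sketch of a simple retrieval scheduler: split an archive inventory
# into batches so that only one batch of requests is in flight at a
# time. Batch size is an assumption to tune against your bandwidth
# and whichever pricing model you're on.

def plan_retrieval_jobs(archive_ids, batch_size):
    """Group archive IDs into sequential batches."""
    return [archive_ids[i:i + batch_size]
            for i in range(0, len(archive_ids), batch_size)]

# 2 million hypothetical archives split into 42 non-concurrent jobs,
# mirroring the 42-job case from the pricing example above.
inventory = [f"archive-{n}" for n in range(2_000_000)]
jobs = plan_retrieval_jobs(inventory, batch_size=48_000)
print(len(jobs))
```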

