While writing up this post, I decided to take a look around and run a couple of benchmarks; in the process I found a solution to my problem. I'll post it anyway, in case someone runs into the same issues I did.
I've been a satisfied owner of a DS1010+ w/expansion for a couple of years now, but growing capacity needs and failing disks are forcing me to step up to a DS1812+. Both DSes run the current DSM 4.0 (2216); the DS1010+ is running a degraded RAID 6 on ten 2 TB WDs (one of which recently died); the DS1812+ has a newly installed SHR-2 on 7x3 TB and 1x2 TB. The odd 2 TB drive is in there to make sure I'll be able to expand the SHR-2 volume to 8x3 TB plus a full 5x2 TB expansion bay. All drives' partitions are correctly aligned to 4K boundaries.
On to my problem: Transfer speeds from the DS1010+ to the DS1812+ were dismally slow. I was working with rsync through ssh and couldn't seem to exceed some 4 MB/s. I upped the MTU to 9000 and linked the DSes directly through a crossover cable -- the transfer speed didn't pick up.
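For reference, the slow transfer was a plain rsync-over-ssh pull along these lines (hostname and share paths are placeholders, not my actual setup):

```shell
# Run on the DS1812+: pull from the DS1010+ over ssh.
# -a = archive mode, -v = verbose, -z = compress in transit.
# The -z is what turned out to be the bottleneck (~4 MB/s here).
rsync -avz -e ssh root@ds1010:/volume1/Share_Name/ /volume1/Share_Name/
```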
Solution: After a bit of experimentation, I decided to try rsync in its native protocol mode (i.e. rsync --daemon on the DS1010+, rsync root@...::Share_Name on the DS1812+). I got around 3.5 MB/s, but also noticed rsync --daemon occupying around 24.5% CPU, which basically means it used 100% of one core of the DS1010+'s Atom D510.
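Roughly what the daemon-mode setup looks like -- a minimal rsyncd.conf on the DS1010+ side, then a pull via the double-colon syntax from the DS1812+. Module name and paths are made up for illustration; DSM ships its own rsync service, so you may not need to hand-write the config at all:

```shell
# On the DS1010+: define an rsync module and start the daemon
# (illustrative module name and path -- adjust to your shares)
cat > /etc/rsyncd.conf <<'EOF'
[Share_Name]
    path = /volume1/Share_Name
    read only = yes
    uid = root
EOF
rsync --daemon

# On the DS1812+: pull via the rsync protocol.
# Note the double colon -- that selects daemon mode instead of ssh.
rsync -avz root@ds1010::Share_Name /volume1/Share_Name/
```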
Up to that point, I had always used rsync with my standard set of params: -avz, with the "z" enabling compression. Skipping that and just running rsync -av (archive, verbose), transfer speed is up to some 40-45 MB/s at around 19% of daemon CPU use. I have some other things on my mind right now, but --whole-file might hold some further optimization potential.
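So the fast variant is just the same pull without -z. Adding --whole-file (which skips rsync's delta-transfer algorithm, another CPU cost that rarely pays off on a fast LAN) might squeeze out a bit more, but I haven't benchmarked that -- hostname and share name are placeholders as before:

```shell
# No compression: archive + verbose only (~40-45 MB/s here)
rsync -av root@ds1010::Share_Name /volume1/Share_Name/

# Possibly faster still on GigE: also skip the delta algorithm
rsync -av --whole-file root@ds1010::Share_Name /volume1/Share_Name/
```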
Anyhow, lesson learnt: skip compression if you're on an Atom CPU and GigE.