De-duplication is always a drag unless you have plenty of memory and processing power. Even in the enterprise market, de-dupe stays off unless capacity needs far outstrip performance needs (as with large, redundant sets of archived snapshots).
I work with the Crashplan guys quite a bit across multiple OSes (Solaris, Linux, Windows, and OS X), and the behavior is the same on all of them. Between the large in-memory databases and the overhead of sending bits back and forth to be compared and tested, de-dupe is a dog. I've discussed this at length with some of the engineers at Crashplan. The DDT (de-dupe table) used by Crashplan isn't any more special than the systems employed by ZFS, NetApp, EMC, etc. They're all very similar, each with its own special sauce intended to make de-dupe more powerful, but not necessarily faster.
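To make the memory cost concrete, here's a minimal sketch of the general hash-table approach these dedup tables share. This is not CrashPlan's (or ZFS's) actual code; the fixed block size and the in-memory dict are illustrative assumptions. The point is that every block ever stored needs a digest entry kept hot for lookups, which is where the RAM and CPU go:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # hypothetical fixed block size; real systems vary

def dedupe_blocks(data: bytes, table: dict) -> tuple[int, int]:
    """Split data into fixed-size blocks and store only unseen ones.

    `table` plays the role of the DDT: digest -> stored block.
    Returns (unique_blocks, duplicate_blocks).
    """
    unique = dup = 0
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        if digest in table:        # the lookup that must stay in fast memory
            dup += 1               # duplicate: store nothing, just a reference
        else:
            table[digest] = block  # real systems store an on-disk pointer here
            unique += 1
    return unique, dup

table = {}
data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE
print(dedupe_blocks(data, table))  # -> (2, 2): two unique blocks, two dupes
```

Once the table no longer fits in RAM, each block write turns into random disk I/O just to answer "have I seen this before?", which is exactly why throughput falls off a cliff.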
Turn off de-dupe and you'll see your speeds skyrocket, I promise. If you don't need the space, there's no reason to leave it on. You can even transfer your data now, get it to Crashplan, and then, once your seed is up, enable full de-dupe on the backup set and let it apply on the next maintenance cycle (at the cost of an extremely long maintenance cycle). But at least your data will be up there and usable in the meantime.