You're simply not going to get that kind of compression in your traditional storage or online systems. If you look at what NetApp does, I think they guarantee 2:1, and you might get better than that, but I believe 2:1 is all they guarantee.
The practical limitation to it really is the rehydration IOPS that are necessary to pull that data back. An analogy might be a highly normalized database. The nice thing about a normalized database is that it really shrinks in size. The bad news is that to pull up an individual record, instead of going to that one record and pulling it up with a single I/O, now maybe you need two, three, four, 10 different I/Os in order to get that single record. It's the same issue when you talk about deduplication: you're putting more IOPS and performance pressure on your array in order to take advantage of that deduplication.
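That trade-off can be sketched in a few lines. The toy content-addressed store below is a hypothetical illustration, not any vendor's implementation: identical chunks are stored once (the capacity win), but reading a file back fans out into one backend read per chunk reference (the rehydration IOPS cost).

```python
import hashlib

class DedupStore:
    """Toy content-addressed store, for illustration only.

    Files are split into fixed-size chunks; identical chunks are
    stored once and referenced by hash, loosely mimicking
    array-level deduplication.
    """

    CHUNK = 4  # tiny chunk size so the example is easy to follow

    def __init__(self):
        self.chunks = {}   # hash -> chunk bytes (the deduplicated pool)
        self.files = {}    # filename -> ordered list of chunk hashes
        self.reads = 0     # counts backend I/Os on the read path

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # duplicate chunks stored once
            hashes.append(h)
        self.files[name] = hashes

    def read(self, name):
        # Rehydration: every chunk reference costs a separate backend
        # read, where an undeduplicated file could be one sequential I/O.
        out = b""
        for h in self.files[name]:
            self.reads += 1
            out += self.chunks[h]
        return out

store = DedupStore()
store.write("a", b"ABCDABCDABCD")   # three identical 4-byte chunks
print(len(store.chunks))            # 1 unique chunk kept on "disk"
data = store.read("a")
print(store.reads)                  # 3 reads to rehydrate one file
```

The capacity savings (one chunk stored instead of three) come directly at the price of extra read operations, which is exactly the normalized-database analogy above.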