Don't see the killer app in this. It's not like it takes hours to unzip "small" files, and in the end that's what 99.9% of users use it for.
With any CPU from the last 6+ years, zipping files isn't slow enough to worry about either. The algorithm as normally implemented is already very fast.
Now, if you could get LZMA2 or PBZIP2 GPU-accelerated, that'd be something. I imagine PBZIP2 should be possible, though not easy, to do.
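For what it's worth, the reason PBZIP2 parallelizes so well is that bzip2 works on independent blocks: compress the blocks concurrently, concatenate the resulting streams, and standard bunzip2 still reads the output. A rough Python sketch of that idea (not PBZIP2 itself; the block size and worker count here are arbitrary):

import bz2
from concurrent.futures import ProcessPoolExecutor

BLOCK_SIZE = 900 * 1024  # bzip2's largest native block size

def compress_block(block: bytes) -> bytes:
    # Each block becomes a complete, self-contained bzip2 stream.
    return bz2.compress(block, compresslevel=9)

def parallel_bzip2(src_path: str, dst_path: str, workers: int = 4) -> None:
    with open(src_path, "rb") as f, open(dst_path, "wb") as out:
        blocks = iter(lambda: f.read(BLOCK_SIZE), b"")
        with ProcessPoolExecutor(max_workers=workers) as pool:
            # bunzip2 decompresses concatenated streams, so writing the
            # compressed blocks back-to-back is enough.
            for compressed in pool.map(compress_block, blocks):
                out.write(compressed)

A GPU version would be the same split, just with the per-block work pushed to the GPU, which is where the "not easy" part lives.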
Hard to imagine CUDA winning even if AMD didn't exist.
It won't. For all AMD's bark, which I hoped they'd follow up on, NVidia and Intel have had the bite. NVidia is keeping CUDA cutting-edge for their hardware, and supporting other standards as well as anybody else, if not better. They would be in fine shape tomorrow (well, at least in GPGPU software terms) if everyone abruptly decided not to start any new CUDA projects.
I think the use could be more substantial than you'd expect. For example, when Intel added AES instructions I thought that was a waste of silicon, because I had never needed *that* much encryption (and hey, today's CPUs are fast enough anyway!). Now we have disk-wide encryption.
It saves power on notebooks for users that need it, and is quite useful for file servers.
I could definitely make the argument for disk-wide compression as well. For example, I have one folder of work-related stuff that is 630MB and compresses down to 42MB with WinRAR. Now I realize many things won't compress that well, but imagine if we could start saving everything in lossless, uncompressed formats, with the compression happening in real time? That would be cool beans...
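If you want to ballpark that kind of ratio yourself, a few lines of Python will approximate it by compressing the whole folder as one solid LZMA stream (the folder name and preset here are just placeholders, not my actual setup):

import lzma
from pathlib import Path

def folder_ratio(root: str) -> float:
    # Feed every file into one solid compressed stream and compare sizes.
    raw = packed = 0
    comp = lzma.LZMACompressor(preset=6)
    for p in Path(root).rglob("*"):
        if p.is_file():
            data = p.read_bytes()
            raw += len(data)
            packed += len(comp.compress(data))
    packed += len(comp.flush())
    return packed / raw if raw else 1.0

print(f"compresses to {folder_ratio('work-stuff'):.1%} of original size")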
I've been doing that since 1997 (NT4--the same as saying 'since 2011' for something starting today), when my largest HDD was 8GB. Today I compress directories of games with mods, as they tend not to be neatly packaged in single large files, so NTFS compression helps quite a bit on my mechanical drive.
With faster storage, however, the overhead outweighs the gains, bringing it back to a pure space issue, where keeping unchanging data in archives works as well today as it did back when storage was expensive.
Getting the kind of compression you get from documents you know are similar to each other, at the OS, FS, or drive level, would need multi-pass statistical analysis, best started at the time of formatting: (1a) approximate the compressibility of the data, (1b) approximate the similarities among file contents, then (2) build a database from the 1a/1b analysis and differentially compress, dedupe, sort, and compress the results as needed, using that database to compress any new files more efficiently.
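A crude way to see the payoff of that shared-database idea in a few lines: build a preset dictionary from a sample of similar files, then compress each new file against it. This sketch uses zlib's preset-dictionary support; the folder, file pattern, and 32KB size are made-up examples, and a real system would analyze which substrings are actually common instead of grabbing raw sample bytes:

import zlib
from pathlib import Path

# "Database" stand-in: the tail of a few sample files, capped at zlib's
# 32KB window. A real analysis pass would pick genuinely common substrings.
samples = sorted(Path("docs").glob("*.xml"))
dictionary = b"".join(p.read_bytes() for p in samples[:4])[-32 * 1024:]

def compress_with_dict(data: bytes) -> bytes:
    c = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, zdict=dictionary)
    return c.compress(data) + c.flush()

def decompress_with_dict(data: bytes) -> bytes:
    # The exact same dictionary has to be on hand at read time, which is
    # why this belongs in FS metadata rather than in each file.
    d = zlib.decompressobj(zlib.MAX_WBITS, zdict=dictionary)
    return d.decompress(data) + d.flush()

Similar files share the dictionary's substrings, so each one compresses smaller than it would on its own.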
The grouping that appears easy to you at a high level is not so easy to implement effectively at a much lower level. It can be done, but not without a price. Going by filetype, for instance, would leave you with either longer compression times as that amount of data increased, or reduced compression if done in small chunks, or the overhead of extra FS metadata processing to find the small set of files a new file should be compressed along with. Any of those would still compress worse than your choice of a single high-compression archive, and would often mean several additional reads per file write. (Also, some formats these days compress their contents individually, which leaves little to gain after the fact.)
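To put a rough number on the single-archive point, here's a toy comparison between compressing a set of small, similar files one at a time and compressing them as one solid stream (the folder and pattern are hypothetical):

import lzma
from pathlib import Path

# Hypothetical pile of small, similar files, like a mod folder's configs.
files = sorted(Path("mod_folder").rglob("*.cfg"))
data = [p.read_bytes() for p in files]

per_file = sum(len(lzma.compress(d)) for d in data)  # each file compressed alone
solid = len(lzma.compress(b"".join(data)))           # one archive-style stream

print(f"per-file: {per_file:,} bytes  solid: {solid:,} bytes")

The solid stream wins because the cross-file redundancy only becomes visible when the files are compressed together, which is exactly what the filesystem would have to pay extra reads and metadata for.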
It can be done, but the demand is lacking: storage is cheap, performance takes a hit, and the big stuff people would like to be smaller tends to already be compressed.