In my last post on gzip, I discovered that gzip can compress data in a more sync-friendly way. On a related note, this blog entry from nginx discusses a new gunzip filter that decompresses stored compressed data for clients that don’t support gzip.
I was thinking about this the other day: why not store all your content compressed? Then you can use sendfile() or some other fast method to deliver the data directly to clients that accept gzip, and decompress it on the fly only for the clients that don’t.
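nginx can already do something very close to this with two stock modules: ngx_http_gzip_static_module serves a precompressed .gz file straight from disk, and the gunzip filter decompresses the response for clients that don’t send Accept-Encoding: gzip. A minimal sketch, assuming your nginx build includes both modules:

```nginx
location / {
    # Always serve the precompressed file.gz from disk if it exists...
    gzip_static always;
    # ...and decompress it on the fly for clients that don't accept gzip.
    gunzip on;
}
```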
- Decompressing is much faster than compressing (comparing the same algorithm on the same data).
- You get to save storage space.
- You could potentially reduce your I/O by a large margin (over the network, obviously, but also inside the box).
- Since nearly every web browser in use today supports compression, you’d use it almost all the time. It’s the default case now, not the edge case.
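The scheme above can be sketched in a few lines of Python. This is a toy, not a real server: the content bytes and the serve() helper are hypothetical, but they show the asymmetry being exploited — the common case ships stored bytes untouched, and only the legacy case pays for decompression.

```python
import gzip

# Hypothetical store: every file is kept gzip-compressed on disk.
compressed = gzip.compress(b"<html>hello</html>")

def serve(accept_encoding: str) -> bytes:
    """Return the stored bytes as-is if the client accepts gzip,
    otherwise decompress on the fly (the cheap direction)."""
    if "gzip" in accept_encoding:
        return compressed               # fast path: bytes go out untouched
    return gzip.decompress(compressed)  # fallback for legacy clients

# Modern client: gets the compressed bytes directly.
assert serve("gzip, deflate") == compressed
# Legacy client: gets the original, decompressed on the way out.
assert serve("") == b"<html>hello</html>"
```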
There you have it. Compress to impress. Maybe we’ll see a return to the days of compressed filesystems, but with multiple entry points depending on whether you want the data in compressed or uncompressed form: mount a block device so that /uncomp retrieves a decompressed file, while a /comp mount point serves files in their native compressed form.
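The two-mount-point idea can be mimicked in userspace today. In this sketch the file path is a stand-in (no real FUSE mounts involved): reading the file raw corresponds to the hypothetical /comp view, while gzip.open corresponds to the /uncomp view of the same bytes.

```python
import gzip
import os
import tempfile

# One gzip file on disk, viewed two ways.
tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "page.html.gz")
with gzip.open(path, "wb") as f:
    f.write(b"<html>hello</html>")

with open(path, "rb") as f:       # the "/comp" view: native compressed form
    raw = f.read()
with gzip.open(path, "rb") as f:  # the "/uncomp" view: transparent decompression
    text = f.read()

assert raw[:2] == b"\x1f\x8b"     # gzip magic bytes: still compressed
assert text == b"<html>hello</html>"
```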