Losslessly compressing images

Enonic version: 4.7.10
OS: OSX

I’m trying to optimize our site for Google’s PageSpeed score (example), and I’m getting a lot of suggestions like:

Losslessly compressing [image file] could save x KiB (Y% reduction)

I have optimized the static assets (icons and the like) manually and through imagemin as part of our build pipeline (roughly as sketched below), but how do I add lossless compression to images served through Enonic?

The image URLs are created via portal:createImageUrl(key, '', '', 'png | jpeg', 75), where 75 is (I assume) the lossy compression “quality” measure.
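For context, the lossless step for the static assets looks roughly like this. This is only a minimal sketch assuming a recent imagemin (v8+, ES modules); the plugin choice and paths are placeholders rather than our exact setup:

```ts
// Minimal sketch of a lossless imagemin step in a build pipeline.
// Paths and plugin options are illustrative only.
import imagemin from 'imagemin';
import imageminOptipng from 'imagemin-optipng';
import imageminJpegtran from 'imagemin-jpegtran';

await imagemin(['src/assets/images/*.{png,jpg}'], {
  destination: 'build/assets/images',
  plugins: [
    imageminOptipng(),                        // lossless PNG recompression
    imageminJpegtran({ progressive: true })   // lossless JPEG optimization
  ]
});
```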

Hmm, PNGs already use lossless compression by default… I was not aware of any options for this.

In general, JPEG naturally gives better compression; the 75 refers to the JPEG quality setting.

In 6.2 we’re introducing perfect cache for images, assets and attachments.

Alright, so this is something we can’t improve upon with our current version of Enonic?

Try using PNG instead of JPEG.
PNG is lossless, while JPEG is lossy (even at 100% quality, the JPEG algorithm will not return an exact replica of the original unless the picture contains only one colour).

Icons usually have a few very distinct colours. The lossless compression techniques used in PNG are ideal for this kind of image, while JPEG compression is much better suited for real-life photographs with millions of colours.

If you want to understand why, keep reading: :stuck_out_tongue:
Imagine for example an icon with 4 colours: the PNG format may start by assigning the 4 exact colours to four 2-bit values, then recreate the bitstream of the image with these 2-bit values instead of 16-bit or 24-bit values. After that, PNG may compress the result with a normal file compression technique (DEFLATE, the same algorithm zip uses). I use the word MAY here because PNG has multiple compression techniques and uses the optimal one for each image; this example is one of the easier techniques to understand.
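To put rough numbers on that, here is a back-of-the-envelope calculation; the 64×64 icon size is just an assumption for illustration:

```ts
// Back-of-the-envelope: a 64x64 icon with 4 distinct colours.
const pixels = 64 * 64;              // 4096 pixels
const rawTruecolor = pixels * 3;     // 24-bit RGB: 12,288 bytes before compression
const paletteBytes = 4 * 3;          // 4 palette entries of 3 bytes each
const indexed2bit = pixels / 4;      // 2 bits per pixel: 1,024 bytes
console.log(rawTruecolor, paletteBytes + indexed2bit);
// 12288 vs 1036 bytes, before DEFLATE even runs
```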

The JPEG format, on the other hand, bases its compression on wave patterns in the image. So, imagine an image of some kind of flag that is black and white, then compress it with JPEG. Close to every border between the black and white areas, JPEG will create grey pixels to reduce the contrast and thus the “wave size” in the image. The lower the quality setting, the bigger the grey areas around the contrasting edges will be. This is also easy to test; it is very visible.

For images like icons or flags, JPEG files can actually end up a lot larger than a PNG of the original, even at poor quality, because of the sharp contrast between the distinct colours. For photographs, especially those with large areas of slowly changing colour tones, like the sky or skin, the JPEG technique is ideal and can reproduce the original closely with great compression.

There are different levels of efficiency in PNG compression. Perhaps that’s what Google is referring to? When I create PNG files (in this case as static assets) for an Enonic-based website, I always run the images through ImageOptim after doing a “Save for Web” in Photoshop, and I’m able to shave a few extra percent off the file size.

From their website:

ImageOptim seamlessly integrates the best optimization tools: PNGOUT, Zopfli, Pngcrush, AdvPNG, extended OptiPNG, JpegOptim, jpegrescan, jpegtran, and Gifsicle.

EDIT: Sorry, I see now that you already mentioned that you ran the static assets through imagemin as part of your build pipeline. Still, I suspect this is what Google PageSpeed is referring to.
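Several of those tools also have imagemin plugin wrappers, so you may be able to get the same extra squeeze inside your build pipeline. A hedged sketch; the plugin names and their options are assumptions worth verifying against npm:

```ts
// Sketch: heavier lossless PNG compressors, mirroring part of ImageOptim's toolset.
// imagemin-zopfli and imagemin-advpng are assumptions to verify against npm.
import imagemin from 'imagemin';
import imageminZopfli from 'imagemin-zopfli';
import imageminAdvpng from 'imagemin-advpng';

await imagemin(['build/assets/images/*.png'], {
  destination: 'build/assets/images',
  plugins: [
    imageminZopfli({ more: true }), // slower, but usually smaller than optipng alone
    imageminAdvpng()
  ]
});
```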