The web has an image problem. Images appear simple, they seem friendly, and they look harmless, but they are taking over the web. Images make up over half of the data on pages, and that share is growing fast. The number of requests for images is growing at a slower rate: it's not more images, it's bigger images.
What can we do to optimise?
Today we're looking at formats and encoding, not conditional loading.
GIF
27 years old, 256 colours, simple transparency, supported everywhere. Most importantly: supports animation. Was patented, but the patents have now expired.
PNG
Created in 1996 to circumvent the GIF patents. 8-32 bit colour, alpha transparency, supported in browsers newer than IE8. The animation extension (MNG) is not supported by browsers.
The format is lossless and uses delta (filter) encoding: each pixel is defined as the difference from its neighbours, so areas of similar colour become mostly zeros, which compress well. The result is compressed using LZ77 and Huffman coding, which exploit repetition (this is where the zeros help).
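A minimal sketch of why the delta filter helps, using Python's `zlib` as a stand-in for PNG's DEFLATE (LZ77 + Huffman); the gradient scanline is an invented example:

```python
import zlib

# One scanline of a smooth horizontal gradient (values 0..255).
row = bytes(range(256))

# PNG's "Sub" filter: store each byte as the delta from its left neighbour.
# For a gradient the deltas are almost all 1, i.e. one long run of the same byte.
filtered = bytes((row[i] - (row[i - 1] if i else 0)) % 256 for i in range(len(row)))

# DEFLATE (LZ77 + Huffman, what PNG uses) compresses that repetition far better.
print(len(zlib.compress(row)), len(zlib.compress(filtered)))
```

The raw ramp barely compresses (every byte is different), while the filtered row shrinks to a handful of bytes.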
Optimisations: tune the delta filters, convert from PNG24 to PNG8, tune the Huffman buffer size, trade alpha transparency for simple transparency. Tools like PNGCrush and OptiPNG do these for you.
JPEG
22 years old, RGB colours, no transparency, no animation support.
It's a lossy format: you lose data in a compress/uncompress round trip. It's the most common format on the web (although PNG's share is growing slowly). JPEG divides the image into 8x8 pixel blocks and splits it into luminance and chrominance. Detail is best conveyed in luminance, whereas we can tolerate more loss in chrominance. Chroma subsampling merges adjacent pixels: horizontally, vertically, or both. Tools often deduce the subsampling from the quality setting. Pixels are then rewritten as roughly deltas from the block average (DCT + quantisation); small differences get rounded to 0, which compresses better. The variables within these techniques are expressed as a single quality setting. The more aggressive you get, the more an 8x8 pixel block will look like one colour.
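The DCT + quantisation step can be sketched in a few lines. This is a naive, illustrative version (real JPEG also level-shifts by 128 and uses per-frequency quantisation tables); the nearly-flat example block is invented:

```python
import math

def dct2(block):
    """Naive 2D DCT-II of an 8x8 block (the transform JPEG applies per block)."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

# An 8x8 block of nearly-uniform pixels: a faint checkerboard around value 120.
block = [[120 + ((x + y) % 2) for y in range(8)] for x in range(8)]

coeffs = dct2(block)

# Quantisation: divide by a step size and round; small coefficients become 0.
q = 16
quantised = [[round(c / q) for c in row] for row in coeffs]
zeros = sum(row.count(0) for row in quantised)
print(zeros, "of 64 coefficients are zero")
```

Only the average (DC) coefficient survives; the 63 tiny AC differences all round to zero, which is exactly what compresses so well.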
JPEG is not suitable for all types of image: with things like text and logos you get noticeable artifacts in each 8x8 block, because pixels are encoded relative to the block average, and with hard edges barely any pixel is anywhere near that average.
Tools: jpegtran, ImageMagick, jpegrescan.
JPEGs can also contain metadata (EXIF data); the metadata can include a thumbnail of up to 120KB! Remove metadata if at all possible.
JPEG has baseline and progressive modes. Baseline loads from the top down; progressive loads a low-resolution version first and then enhances it. Baseline stores the data for each 8x8 block in sequence; progressive interleaves the information across blocks, so the whole image can be refined as data arrives. Perceived speed (Speed Index) can improve by 7-15% on DSL by using progressive. Progressive also puts the significant numbers at the start and the zeros at the end, which allows more efficient compression: files of 10KB or more are typically 3-10% smaller. Yet 92.6% of images surveyed were using baseline. Historically browsers didn't take advantage of progressive rendering, but modern browsers do.
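The reordering effect can be demonstrated with made-up quantised coefficients. This sketch only models the byte ordering (real JPEG entropy coding is more involved); the block counts and value ranges are invented:

```python
import random
import zlib

random.seed(1)

# Fake quantised coefficients for 100 blocks of 64 values each: a few non-zero
# low-frequency coefficients, then a long zero tail of high frequencies.
blocks = [[random.randint(1, 60)] + [random.randint(-5, 5) for _ in range(9)] + [0] * 54
          for _ in range(100)]

def to_bytes(values):
    return bytes(v % 256 for v in values)

# Baseline: emit each block in full, one block after another.
baseline = to_bytes([c for block in blocks for c in block])

# Progressive (roughly): emit coefficient 0 of every block, then coefficient 1
# of every block, and so on, so all the zero tails merge into one long run.
progressive = to_bytes([blocks[b][i] for i in range(64) for b in range(100)])

print(len(zlib.compress(baseline)), len(zlib.compress(progressive)))
```

Same data, same size uncompressed, but the progressive ordering compresses smaller because the zeros sit together.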
Retina
Marketing speak for pixel density of 2x or greater (at 2x, 1 CSS pixel = 4 device pixels). Retina assets are often 3-4x the file size. A double-sized image at a lower quality setting can be both sharper and a smaller file than a 1x image at high quality (this seems to play to how tools apply JPEG quality levels), but it uses more memory.
When to use each format?
- Tiny images: GIF or PNG8
- Small images: PNG
- Larger: prefer JPEG to PNG
Make image transcoding part of your process to take the necessity of choice away from end users.
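Such a transcoding step could encode the rules of thumb above as a simple heuristic. A toy sketch; the function name and thresholds are invented for illustration:

```python
def pick_format(width, height, colours, is_photo):
    """Toy heuristic mirroring the rules of thumb above; thresholds are invented."""
    pixels = width * height
    if pixels <= 48 * 48 and colours <= 256:
        return "gif-or-png8"   # tiny images: GIF or PNG8
    if not is_photo:
        return "png"           # small graphics, logos, UI chrome
    return "jpeg"              # larger photographic images

print(pick_format(16, 16, 12, False))
print(pick_format(1024, 768, 100000, True))
```

A real build step would also measure the candidate encodings and keep the smallest acceptable one.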
These image formats are very old.
Algorithms like DSSIM can measure how similar a lossily compressed image is to the original. By that measure, WebP and JPEG XR are both better than JPEG. Beware: quality scales are not comparable across formats.
WebP
Supports lossless and lossy compression. Very well documented. May have patent issues. Lossless WebP can be 26-42% smaller than PNG, and does better still once alpha is taken into account. Lossy WebP is also 25-37% smaller than JPEG. Has animation support. Currently only supported by Blink-based browsers.
JPEG XR (extended range)
A bit bigger than WebP. Patented, but covered by Microsoft's Open Specification Promise. Supported in Microsoft-friendly browsers, really only IE10+.
JPEG 2000
Uses DWT (wavelets) instead of the DCT; about 30% smaller than JPEG. Patented but free to use. Supported in Safari (iOS and OS X).
Mozilla thinks this is the better option, but is still undecided.
Serving the new formats
Detect browser support and conditionally load the best format; the request's Accept header can help. Serve a different format per URL, or different responses from the same URL, but in the latter case be aware of caching (send Vary).
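The same-URL variant might look something like this on the server. A minimal sketch, assuming an invented `negotiate_image` helper and only a webp-vs-jpeg choice; real Accept parsing should also honour q-values:

```python
def negotiate_image(accept_header):
    """Pick a response format from the request's Accept header (sketch).

    Always returns a Vary: Accept header so shared caches keep the
    per-format variants separate.
    """
    # Crude parse: media types only, q-values ignored for brevity.
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "image/webp" in accepted:
        return "image/webp", {"Vary": "Accept"}
    return "image/jpeg", {"Vary": "Accept"}

content_type, headers = negotiate_image("image/webp,image/*,*/*;q=0.8")
print(content_type, headers)
```

Chrome advertises `image/webp` in its Accept header for image requests, which is what makes this negotiation possible.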
Image memory usage
When browsers render an image they have to decompress it, so an image can take up a lot of memory even if its compressed file size is small. [AFAIK this goes for SVG as well.]
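The decoded footprint depends only on the pixel dimensions, not the file size. A quick back-of-the-envelope helper (the function name is invented; 4 bytes per pixel assumes an RGBA bitmap):

```python
def decoded_size_bytes(width, height, bytes_per_pixel=4):
    """Memory for the decompressed bitmap (RGBA = 4 bytes per pixel),
    regardless of how small the compressed file was."""
    return width * height * bytes_per_pixel

# A heavily compressed 4000x3000 photo still decodes to 48 million bytes.
print(decoded_size_bytes(4000, 3000))
```

This is why shipping a huge image and scaling it down in CSS hurts even when the transfer size looks fine.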