
Mozilla Advances JPEG Encoding with mozjpeg 2.0

We’re pleased to announce the release of mozjpeg 2.0. Early this year, we explained that we started this project to provide a production-quality JPEG encoder that improves compression while maintaining compatibility with the vast majority of deployed decoders. The end goal is to reduce page load times and ultimately create an enhanced user experience for sites hosting images.

With today’s release, mozjpeg 2.0 can reduce file sizes for both baseline and progressive JPEGs by 5% on average compared to those produced by libjpeg-turbo, the standard JPEG library upon which mozjpeg is based [1]. Many images will see further reductions.

Facebook announced today that it is testing mozjpeg 2.0 to improve the compression of images on facebook.com. It has also donated $60,000 to support the ongoing development of the technology, including the next iteration, mozjpeg 3.0.

“Facebook supports the work Mozilla has done in building a JPEG encoder that can create smaller JPEGs without compromising the visual quality of photos,” said Stacy Kerkela, software engineering manager at Facebook. “We look forward to seeing the potential benefits mozjpeg 2.0 might bring in optimizing images and creating an improved experience for people to share and connect on Facebook.”

The major feature in this release is trellis quantization, which improves compression for both baseline and progressive JPEGs without sacrificing anything in terms of compatibility. Previous versions of mozjpeg only improved compression for progressive JPEGs.
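To give a rough feel for the idea, here is a hypothetical C sketch (not mozjpeg source code) of the rate–distortion trade-off that trellis quantization optimizes: for each AC coefficient, keep the rounded quantized value or force it to zero, whichever costs less when coding bits are weighed against reconstruction error. The real encoder runs a dynamic-programming search over coefficient runs and end-of-block decisions, which this simplified per-coefficient version deliberately ignores; the bit estimate and lambda weighting below are illustrative assumptions.

```c
/*
 * Hypothetical sketch (not mozjpeg source): the rate-distortion idea behind
 * trellis quantization, reduced to a per-coefficient keep-or-zero decision
 * on one 8x8 block of DCT coefficients. Real trellis quantization searches
 * over run-length/EOB coding states as well.
 */
#include <math.h>

#define DCTSIZE2 64

/* Rough rate estimate: sign bit plus the magnitude's size category. */
static double estimate_bits(int level)
{
    if (level == 0)
        return 0.0;
    return 1.0 + floor(log2(fabs((double)level))) + 1.0;
}

void rd_quantize_block(const double dct[DCTSIZE2], const int qtable[DCTSIZE2],
                       int out[DCTSIZE2], double lambda)
{
    /* DC coefficient: plain rounding, as in baseline quantization. */
    out[0] = (int)lround(dct[0] / qtable[0]);

    for (int i = 1; i < DCTSIZE2; i++) {
        int level = (int)lround(dct[i] / qtable[i]);

        /* Cost of keeping the rounded level: squared error + weighted bits. */
        double err_keep  = dct[i] - level * qtable[i];
        double cost_keep = err_keep * err_keep + lambda * estimate_bits(level);

        /* Cost of forcing the coefficient to zero (no bits to code). */
        double cost_zero = dct[i] * dct[i];

        out[i] = (cost_zero <= cost_keep) ? 0 : level;
    }
}
```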

Other improvements include:

  • The cjpeg utility now supports JPEG input in order to simplify re-compression workflows.
  • We’ve added options to specifically tune for PSNR, PSNR-HVS-M, SSIM, and MS-SSIM metrics.
  • We now generate a single DC scan by default in order to be compatible with decoders that can’t handle arbitrary DC scans (see the scan-script sketch after this list).
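As an illustration of the single-DC-scan point, here is a minimal sketch using the standard libjpeg jpeg_scan_info structure, which mozjpeg implements: the first scan codes the DC coefficients of all three components together, followed by one AC scan per component. This is an assumption-laden example for clarity only; mozjpeg’s actual default scan script is chosen by the encoder and its AC scans differ from this abbreviated version.

```c
/*
 * Hypothetical illustration (not mozjpeg's actual default script): a
 * progressive scan script whose first scan is a single DC scan covering
 * Y, Cb, and Cr, matching the "single DC scan" default described above.
 * Uses the standard libjpeg jpeg_scan_info layout.
 */
#include "jpeglib.h"

static const jpeg_scan_info single_dc_scan_script[] = {
    /* comps_in_scan, component_index[], Ss, Se, Ah, Al */
    { 3, { 0, 1, 2 }, 0,  0, 0, 0 },  /* one DC scan for Y, Cb, Cr */
    { 1, { 0 },       1, 63, 0, 0 },  /* AC scan for luma          */
    { 1, { 1 },       1, 63, 0, 0 },  /* AC scan for Cb            */
    { 1, { 2 },       1, 63, 0, 0 },  /* AC scan for Cr            */
};

/* Attach the script to a compression object before jpeg_start_compress(). */
void use_single_dc_scan(j_compress_ptr cinfo)
{
    cinfo->scan_info = single_dc_scan_script;
    cinfo->num_scans = (int)(sizeof(single_dc_scan_script) /
                             sizeof(single_dc_scan_script[0]));
}
```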

New Lossy Compressed Image Research

Last October, we published research that found HEVC-MSP performed significantly better than JPEG, while WebP and JPEG XR performed better than JPEG according to some quality scoring algorithms, but similarly or worse according to others. We have since updated the study to offer a more complete picture of performance for mozjpeg and potential JPEG alternatives.

The study compared compression performance for four formats: JPEG, WebP, JPEG XR, and HEVC-MSP. The following is a list of significant changes since the last study:

  • We use newer versions of the WebP, JPEG, JPEG XR, and HEVC-MSP encoders.
  • We include data for mozjpeg.
  • We changed our graphing to bits per pixel vs. dB (quality) on a log/log scale. This is a more typical presentation format, and it doesn’t require interpolation.
  • We removed an RGB conversion step from quality comparison. We now compare the Y’CbCr input and output directly. This should increase accuracy of the metrics.
  • We include results for more quality values.
  • We added sections discussing encoder tuning for metrics and measurement with luma-only metrics.

We’ve also made changes to our test suite to make it easier to reproduce our results. All metric code is now written in C, which means it runs faster and MATLAB/octave is no longer required. We’ve also added a script to automatically generate graphs from the test data files.
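For readers unfamiliar with luma-only metrics, the following is a hypothetical C sketch of the simplest one, PSNR computed on the Y' plane alone. It is not the actual test-suite code; it merely shows the kind of computation that runs directly on the Y'CbCr data once the RGB conversion step is removed, assuming 8-bit samples and equal image dimensions.

```c
/*
 * Hypothetical sketch (not the actual test-suite code): PSNR over the
 * luma (Y') plane only, for two 8-bit planes of identical dimensions.
 */
#include <math.h>
#include <stddef.h>
#include <stdint.h>

double luma_psnr(const uint8_t *ref, const uint8_t *test,
                 size_t width, size_t height)
{
    double sse = 0.0;
    size_t n = width * height;

    for (size_t i = 0; i < n; i++) {
        double d = (double)ref[i] - (double)test[i];
        sse += d * d;
    }

    if (sse == 0.0)
        return INFINITY;  /* identical planes */

    double mse = sse / (double)n;
    return 10.0 * log10((255.0 * 255.0) / mse);  /* 255 = 8-bit peak value */
}
```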

We consider this study to be inconclusive when it comes to the question of whether WebP and/or JPEG XR outperform JPEG by any significant margin. On the basis of these results alone, we are not ruling out support for any of the formats in the study. We will continue to evaluate the formats by other means and will take any feedback we receive from these results into account.

In addition to compression ratios, we are considering run-time performance (e.g. decoding time), feature set (e.g. alpha, EXIF), time to market, and licensing. However, we’re primarily interested in the impact that smaller file sizes would have on page load times, which means we need to be confident about significant improvement by that metric, first and foremost.

Feedback Welcome

We’d like to hear any constructive feedback you might have. In particular, please let us know if you have questions or comments about our code, our methodology, or further testing we might conduct.

Also, the four image quality scoring algorithms used in this study (Y-SSIM, RGB-SSIM, MS-SSIM, and PSNR-HVS-M) should probably not be given equal weight, as each has its own pros and cons. For example, some have received more thorough peer review than others, and only one takes color into account. If you have input on which should be weighted more heavily, please let us know.

We’ve set up a thread on Google Groups for discussion.

1. We’re fans of libjpeg-turbo: it powers JPEG decoding in Firefox because its focus is on being fast, and that isn’t going to change any time soon. The mozjpeg project focuses solely on encoding, trading some CPU cycles for smaller file sizes. We recommend libjpeg-turbo as a general-purpose JPEG library and for any decoding tasks; use mozjpeg when creating JPEGs for the Web.
