Another Big Milestone for Servo—Acid2

Jack Moffitt

Servo, the next-generation browser engine being developed by Mozilla Research, has reached an important milestone by passing the Acid2 test. While Servo is not yet fully web compatible, passing Acid2 demonstrates how far it has already come.

Servo’s Acid2 Test Result

Acid2 tests common HTML and CSS features such as tables, fixed and absolute positioning, generated content, paint order, data URIs, and backgrounds. Just as an acid test is used to judge whether some metal is gold, the web compatibility acid tests were created to expose flaws in browser rendering caused by non-conformance to web standards. Servo passed the Acid1 test in August of 2013 and has rapidly progressed to pass Acid2 as of March 2014.

Servo’s goal is to create a new browser engine suited to modern computer architectures and security threat models. It is written in Rust, a new programming language also developed by Mozilla Research and designed to be both safe and fast. Rust programs should be free from buffer overflows, use-after-free errors, and similar problems common in C and C++ code. On top of this added safety, Servo is designed to exploit the parallelism of modern computers, making use of all available processor cores, GPUs, and vector units.
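
As a minimal illustration (this is not code from Servo itself), Rust's ownership rules turn the use-after-free bugs mentioned above into compile-time errors rather than runtime vulnerabilities:

```rust
// Sketch of Rust ownership: a heap buffer has exactly one owner,
// so "use after free" cannot compile.
fn consume(data: Vec<u8>) -> u32 {
    data.iter().map(|&b| b as u32).sum()
} // `data` is freed exactly once, here, when it goes out of scope

fn main() {
    let buffer = vec![1u8, 2, 3];

    // Ownership of `buffer` moves into `consume`...
    let sum = consume(buffer);
    println!("sum = {}", sum);

    // ...so touching `buffer` afterwards is rejected at compile time:
    // println!("{:?}", buffer); // error[E0382]: borrow of moved value
}
```

In C++ the commented-out line would compile and read freed memory; in Rust the program simply never builds.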

The early results are encouraging. Many kinds of browser security bugs, such as vulnerabilities similar to Heartbleed, are prevented automatically by the Rust compiler. Performance is promising too: many portions of the Web Platform that we have implemented run substantially faster than in traditional browsers even in single-threaded mode, and multi-threaded performance is faster still.

Servo has a growing community of developers and is a great project for anyone looking to play with browsers and programming languages. Please visit us at the Servo project page to learn more.

Introducing the ‘mozjpeg’ Project

Josh Aas

Today I’d like to announce a new Mozilla project called ‘mozjpeg’. The goal is to provide a production-quality JPEG encoder that improves compression while maintaining compatibility with the vast majority of deployed decoders.

Why are we doing this?

JPEG has been in use since around 1992. It’s the most popular lossy compressed image format on the Web, and has been for a long time. Nearly every photograph on the Web is served up as a JPEG. It’s the only lossy compressed image format which has achieved nearly universal compatibility, not just with Web browsers but all software that can display images.

The number of photos displayed by the average Web site has grown over the years, as has the size of those photos. HTML, JS, and CSS files are relatively small in comparison, which means photos can easily make up the bulk of the network traffic for a page load. Reducing the size of these files is an obvious goal for optimization.

Production JPEG encoders have largely been stagnant in terms of compression efficiency, so replacing JPEG with something better has been a frequent topic of discussion. The major downside to moving away from JPEG is that it would require going through a multi-year period of relatively poor compatibility with the world’s deployed software. We (at Mozilla) don’t doubt that algorithmic improvements will make this worthwhile at some point, possibly soon. Even after a transition begins in earnest, though, JPEG will continue to be used widely.

Given this situation, we wondered if JPEG encoders have really reached their full compression potential after 20+ years. We talked to a number of engineers, and concluded that the answer is “no,” even within the constraints of strong compatibility requirements. With feedback on promising avenues for exploration in hand, we started the ‘mozjpeg’ project.

What we’re releasing today, as version 1.0, is a fork of libjpeg-turbo with ‘jpgcrush’ functionality added. We noticed that people have been reducing JPEG file sizes using a Perl script called ‘jpgcrush’, written by Loren Merritt, references to which can be found on various forums around the Web. It losslessly reduces file sizes, typically by 2-6% for PNGs encoded to JPEG by IJG libjpeg, and 10% on average for a sample of 1,500 JPEG files from Wikimedia. It does this by figuring out which progressive coding configuration uses the fewest bits. So far as we know, no production encoder has this functionality built in, so we added it as the first feature in ‘mozjpeg’.
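
The core idea is simple to sketch: encode the same image under several candidate progressive scan configurations and keep whichever output is smallest. The sketch below is hypothetical, not mozjpeg code; the real tool drives a JPEG encoder, so a stand-in "encoder" here just returns byte strings of different lengths to make the selection logic runnable on its own:

```python
# Hedged sketch of the jpgcrush idea: try each scan script, keep the
# encoding that uses the fewest bytes. `encode` is a stand-in for a
# real JPEG encoder parameterized by a progressive scan script.
def smallest_encoding(image, scan_scripts, encode):
    """Encode `image` once per candidate scan script; return the smallest."""
    candidates = (encode(image, script) for script in scan_scripts)
    return min(candidates, key=len)

# Stand-in encoder: pretend each scan script yields a different file size.
fake_sizes = {"baseline": 1000, "progressive-a": 940, "progressive-b": 965}
encode = lambda image, script: b"\xff" * fake_sizes[script]

best = smallest_encoding(b"raw-pixels", list(fake_sizes), encode)
print(len(best))  # 940: the cheapest progressive configuration wins
```

Because every candidate decodes to the identical image, the search is lossless; only the entropy-coding layout changes.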

Our next goal is to improve encoding by making use of trellis quantization. If you want to help out or just learn more about our plans, the following resources are available:

* github
* mailing list

Studying Lossy Image Compression Efficiency

Josh Aas

JPEG has been the only widely supported lossy compressed image format on the Web for many years. It was introduced in 1992, and since then a number of proposals have aimed to improve on it. A primary goal for many proposals is to reduce file sizes at equivalent qualities.

We’d like to share a study that compares three frequently discussed alternatives (HEVC-MSP, WebP, and JPEG XR) to JPEG in terms of compression efficiency.

The data shows HEVC-MSP performing significantly better than JPEG and the other formats we tested. WebP and JPEG XR perform better than JPEG according to some quality scoring algorithms, but similarly or worse according to others.

We consider this study to be inconclusive on the question of whether WebP and/or JPEG XR outperform JPEG by any significant margin. We are not ruling out support for any of the formats in this study on the basis of its results. We will continue to evaluate the formats by other means and will take into account any feedback we receive on these results.

In addition to compression ratios, we are considering run-time performance (e.g. decoding time), feature set (e.g. alpha, EXIF), time to market, and licensing. However, we’re primarily interested in the impact that smaller file sizes would have on page load times, which means we need to be confident about significant improvement by that metric, first and foremost.

We’d like to hear any constructive feedback you might have. In particular, please let us know if you have questions or comments about our code, our methodology, or further testing we might conduct.

Also, the four image quality scoring algorithms used in this study (Y-SSIM, RGB-SSIM, IW-SSIM, and PSNR-HVS-M) should probably not be given equal weight, as each has its own pros and cons. For example, some have received more thorough peer review than others, and only one takes color into account. If you have input on which to weight more heavily, please let us know.
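
To make the metric family concrete, here is a minimal sketch of plain PSNR. This is not one of the study's exact scorers (PSNR-HVS-M additionally models human contrast sensitivity), just the underlying peak-signal-to-noise calculation that it builds on:

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two uint8-range images."""
    ref = np.asarray(reference, dtype=np.float64)   # cast before subtracting
    dist = np.asarray(distorted, dtype=np.float64)  # to avoid uint8 wraparound
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10((max_value ** 2) / mse)

# Example: a flat gray image vs. one with every pixel off by one level.
a = np.full((8, 8), 128, dtype=np.uint8)
b = a + 1
print(round(psnr(a, b), 2))  # MSE = 1, so PSNR = 10*log10(255^2) ≈ 48.13
```

PSNR's simplicity is also its weakness: it treats every pixel error equally, which is exactly the gap the SSIM-family metrics in the study try to close.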

We’ve set up a thread on Google Groups in order to discuss.

Mozilla Research Party—April 2, 6pm

Dave Herman

Mozilla Research is starting an informal gathering of academics and practitioners excited about expanding the foundations of the Open Web. The first event will be held on Tuesday, April 2nd at the beautiful Mozilla office at 2 Harrison St, San Francisco, CA, overlooking the magnificent Bay Bridge, starting at 6:00pm.

We will kick off the event with a keynote speech by Andreas Gal, VP of Mobile Engineering and Research, followed by lightning talks on several of the projects we’ve been working on: Parallel JavaScript, Rust, Servo, asm.js, Emscripten and Shumway.

The real goal is to socialize and exchange ideas. The talk section of this event will be streamed live on Air Mozilla.

Please RSVP through Eventbrite to help us plan refreshments. We hope to see you there!

Mozilla Research projects

The Shumway Open SWF Runtime Project

Jet Villegas

Shumway is an experimental web-native runtime implementation of the SWF file format. It is developed as a free and open source project sponsored by Mozilla Research. The project has two main goals:

1. Advance the open web platform to securely process rich media formats that were previously only available in closed and proprietary implementations.
2. Offer a runtime processor for SWF and other rich media formats on platforms for which runtime implementations are not available.

You can view live demo examples using Shumway. More adventurous users can download a Firefox beta build and install the test extension to preview SWF content on the web using Shumway. Please be aware that Shumway is very experimental, is missing features, contains many defects, and is evolving rapidly.

Mozilla’s mission is to advance the Open Web. We believe that we can offer a positive experience if we provide support for the SWF format, which is still used on many web sites, especially on mobile devices where the Adobe Flash Player is not available.

The Open Web can be further advanced by making rich media capabilities, previously only available in Flash, also available in the native web browser stack. Shumway is an exciting opportunity to do that for SWF, and we welcome support from external contributors as we advance the technology. We are reaching out to technical users who are interested in contributing to the Shumway implementation in these five areas:

1. Core. The main file format parser, the rasterizer, and the event system.
2. AVM1. JavaScript interpreter for ActionScript version 1 and version 2 bytecode.
3. AVM2. JavaScript interpreter and JIT compiler for ActionScript version 3 bytecode.
4. Browser Integration. The glue between the web browser and the Shumway runtime.
5. Testing/Demos. Good demo and test files/links for Shumway.

More information can be found at the following GitHub links:
* https://github.com/mozilla/shumway/wiki
* https://github.com/mozilla/shumway/wiki/Running-the-Examples
* https://github.com/mozilla/shumway/wiki/Building-Firefox-Extension

The Shumway team is active on the #shumway IRC channel for real-time discussion. A technical mailing list is available here. The source code is available on GitHub.