Using VAPID with WebPush

This post continues discussion about using the evolving WebPush feature.
Updated July 21st, 2016 to fix incorrect usage of “aud” in the VAPID header

One of the problems with offering a service that doesn’t require identification is that it’s hard to know who’s responsible for something. For instance, if a consumer is having problems, or not using the service correctly, it is a challenge to contact them. One option is to require strong identification to use the service, but there are plenty of reasons to not do that, notably privacy.

The answer is to have each publisher optionally identify themselves, but how do we prevent everyone from saying that they’re something popular like “CatFacts”? The Voluntary Application Server Identification for Web Push (VAPID) protocol was drafted to try and answer that question.

Making a claim

VAPID uses JSON Web Tokens (JWT) to carry identifying information. The core of the VAPID transaction is called a “claim”. A claim is a JSON object containing several common fields. It’s best to explain using an example, so let’s create a claim from a fictional CatFacts service.
"aud": "",
"exp": 1458679343,
"sub": ""

The “audience” is the destination URL of the push service.
The “expiration” date is the UTC time in seconds when the claim should expire. This should not be longer than 24 hours from the time the request is made. For instance, in JavaScript you can use: Math.floor(Date.now() * .001 + 86400).
The “subscriber” is the primary contact email for this subscription. You’ll note that for this, we’ve used a generic email alias rather than a specific person. This approach allows multiple people to be alerted, or a new person to be assigned, without having to change code.

Note: JWT allows additional fields to be provided; the above are just the “known” set. Feel free to add any additional information that you may want to include when the Push Service needs to contact you (e.g. the AMI-ID of the publishing machine, the original customer ID, etc.). Be advised that you have a limited amount of space for headers. For instance, Apache allows only 4K for all header information, so keeping items short is to your benefit.

I’ve added spaces and new lines to make things more readable. JWT objects normally strip those out.
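To make that concrete: a signed JWT is just three base64url-encoded segments joined by dots — the header, the claim, and the signature. You can inspect the first two with nothing but the standard library. This sketch uses the example token produced later in this post:

```python
import base64
import json

def b64url_decode(segment):
    # JWT segments drop base64 padding; restore it before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

token = (
    "eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9."
    "eyJhdWQiOiJodHRwczovL2NhdGZhY3RzLmV4YW1wbGUuY29tIiwiZXhwIjoxNDU4Njc5"
    "MzQzLCJzdWIiOiJtYWlsdG86d2VicHVzaF9vcHNAY2F0ZmFjdHMuZXhhbXBsZS5jb20ifQ."
    "U8MYqcQcwFcK2UkeiISahgZFvaOw56ZQvHYZc4zXC2Ed48-lk3MoYExGagKLwr4lSdbA"
    "RZEbblAprQfXlap3jw"
)

header_b64, claims_b64, _signature = token.split(".")
header = json.loads(b64url_decode(header_b64).decode("utf-8"))
claims = json.loads(b64url_decode(claims_b64).decode("utf-8"))
print(header["alg"])  # ES256
print(claims["exp"])  # 1458679343
```

The signature segment is raw binary and can’t be meaningfully inspected this way; it can only be verified against the public key.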

Signing and Sealing

A JWT object actually has three parts: a standard header, the claim (which we just built), and a signature.

The header is very simple and is standard to any VAPID JWT object.

{"typ": "JWT","alg":"ES256"}

If you’re curious, typ is the “type” of object (a “JWT”), and alg is the signing algorithm to use. In our case, we’re using Elliptic Curve Cryptography based on the NIST P-256 curve (or “ES256”).

We’ve already discussed what goes in the claim, so now, there’s just the signature. This is where things get complicated.

Here’s code to sign the claim using Python 2.7.

# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.

import base64
import time
import json

import ecdsa
from jose import jws

def make_jwt(header, claims, key):
    vk = key.get_verifying_key()
    jwt = jws.sign(
        claims,
        key,
        algorithm=header.get("alg", "ES256")).strip("=")
    # The "0x04" octet indicates that the key is in the
    # uncompressed form. This form is required by the
    # server and DOM API. Other crypto libraries
    # may prepend this prefix automatically.
    raw_public_key = "\x04" + vk.to_string()
    public_key = base64.urlsafe_b64encode(raw_public_key).strip("=")
    return (jwt, public_key)

def main():
    # This is a standard header for all VAPID objects:
    header = {"typ": "JWT", "alg": "ES256"}

    # These are our customized claims.
    claims = {"aud": "",
              "exp": int(time.time()) + 86400,
              "sub": ""}

    my_key = ecdsa.SigningKey.generate(curve=ecdsa.NIST256p)
    # You can store the private key by writing
    #   my_key.to_pem() to a file.
    # You can reload the private key by reading
    #   my_key.from_pem(file_contents)

    (jwt, public_key) = make_jwt(header, claims, my_key)

    # Return the headers we'll use.
    headers = {
        "Authorization": "Bearer %s" % jwt,
        "Crypto-Key": "p256ecdsa=%s" % public_key,
    }

    print json.dumps(headers, sort_keys=True, indent=4)

main()


There’s a little bit of cheating here in that I’m using the “python ecdsa” library and JOSE‘s jws library, but there are similar libraries for other languages. The important bit is that a key pair is created.

This key pair should be safely retained for the life of the subscription. In most cases, just the private key can be retained since the public key portion can be easily derived from it. You may want to save both private and public keys since we’re working on a dashboard that will use your public key to let you see info about your feeds.

The output of the above script looks like:

    "Authorization": "Bearer eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJodHRwczovL2NhdGZhY3RzLmV4YW1wbGUuY29tIiwiZXhwIjoxNDU4Njc5MzQzLCJzdWIiOiJtYWlsdG86d2VicHVzaF9vcHNAY2F0ZmFjdHMuZXhhbXBsZS5jb20ifQ.U8MYqcQcwFcK2UkeiISahgZFvaOw56ZQvHYZc4zXC2Ed48-lk3MoYExGagKLwr4lSdbARZEbblAprQfXlap3jw",
    "Crypto-Key": "p256ecdsa=EJwJZq_GN8jJbo1GGpyU70hmP2hbWAUpQFKDByKB81yldJ9GTklBM5xqEwuPM7VuQcyiLDhvovthPIXx-gsQRQ=="

These are the HTTP request headers that you should include in the POST request when you send a message. VAPID uses these headers to identify a subscription.

The “Crypto-Key” header may contain many sub-components, separated by a semi-colon (“;”). You can insert the “p256ecdsa” value, which contains the public key, anywhere in that list. This header is also used to relay the encryption key, if you’re sending a push message with data. The JWT is relayed in the “Authorization” header as a “Bearer” token. The server will use the public key to check the signature of the JWT and ensure that it’s correct.
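Appending the value to an existing header is plain string handling. A minimal sketch (the merge_crypto_key helper name is my own, not part of any library):

```python
def merge_crypto_key(existing, public_key):
    """Append a p256ecdsa sub-component to an existing Crypto-Key value.

    Hypothetical helper: sub-components are separated by semi-colons,
    and the p256ecdsa value may appear anywhere in the list.
    """
    parts = [part.strip() for part in existing.split(";") if part.strip()]
    parts.append("p256ecdsa=" + public_key)
    return ";".join(parts)

# Merging with the encryption sub-component of a data-bearing message:
print(merge_crypto_key("keyid=p256dh;dh=BDgpRKok2GZZDmS4", "EJwJZq"))
# keyid=p256dh;dh=BDgpRKok2GZZDmS4;p256ecdsa=EJwJZq
```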

Again, VAPID is purely optional. You don’t need it to send messages. Including VAPID information will let us contact you if we see a problem. It will also be used for upcoming features such as restricted subscriptions, which will help minimize issues if the endpoint is ever lost, and the developer dashboard, which will provide you with information about your subscription and some other benefits. We’ll discuss those more when the features become available. We’ve also published a few tools that may help you understand and use VAPID. The Web Push Data Test Page (GitHub Repo) can help library authors develop and debug their code by presenting “known good” values. The VAPID verification page (GitHub Repo) is a simpler, “stand alone” version that can test and generate values.

As always, your input is welcome.

Updated to spell out what VAPID stands for.

TTL Header Requirement Relaxed for WebPush

A few weeks ago, we rolled out a version of the WebPush server that required an HTTP TTL header to be included with each subscription update. A number of publishers reported issues with the sudden change, and we regret not properly notifying all parties.

We’re taking steps to solve that problem, including posting here and on Twitter.

To help make it easier for publishers and programmers to experiment with WebPush, we have temporarily relaxed the TTL requirement. If a recipient’s computer is actively connected, the server will accept and deliver a WebPush notification submitted without an HTTP TTL header. This is similar to the older behavior where updates would be accepted with a presumed TTL value of 0 (zero), because the server was able to immediately deliver the notification. If the recipient is not actively connected and the TTL header is missing, the server will return an HTTP status of 400 and the following body:

{"status": 400, "errno": 111, "message": "Missing TTL Header, see:"}

It’s our hope that this error is descriptive enough to help programmers understand what the error is and how to best resolve it. Of course, we’re always open to suggestions on how we can improve this.

Also, please note that we consider this a temporary fix. We prefer to stay as close to the standard as possible, and will eventually require a TTL header for all WebPush notifications. We’re keeping an eye on our error logs and once we see the number of incorrect calls fall below a certain percentage of the overall calls, we’ll announce the end of life of this temporary fix.

WebPush’s New Requirement: TTL Header

WebPush is a new service where web applications can receive notification messages from servers. WebPush is available in Firefox 45 and later and will be available in Firefox for Android soon. Since it’s a new technology, the WebPush specification continues to evolve. We’ve been rolling out the new service and we saw that many updates were not reaching their intended audience.

Each WebPush message has a TTL (Time To Live), which is the number of seconds that a message may be stored if the user is not immediately available. This value is specified as a TTL: header provided as part of the WebPush message sent to the push server. The original draft of the specification stated that if the header is missing, the default TTL is zero (0) seconds. This means if the TTL header was omitted, and the corresponding recipient was not actively connected, the message was immediately discarded. This was probably not obvious to senders since the push server would return a 201 Success status code.

Immediately discarding the message if the user is offline is probably not what many developers expect to happen. The working group decided that it was better for the sender to explicitly state the length of time that the message should live. The Push Service may still limit this TTL to its own maximum. In any case, the Push Service server would return the actual TTL in the POST response.

You can still specify a TTL of zero, but it will be you setting it explicitly, rather than the server setting it implicitly. Likewise, if you were to specify TTL: 10000000, and the Push Service only supports a maximum TTL of 5,184,000 seconds (about 60 days), then the Push Service would respond with TTL: 5184000.
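The capping behaviour is easy to model. A minimal sketch (the 5,184,000-second maximum is the example value above, not a guaranteed constant):

```python
def effective_ttl(requested_ttl, service_max=5184000):
    # The push service stores the message for at most its own maximum,
    # and reports the TTL it actually used in the POST response.
    return min(requested_ttl, service_max)

print(effective_ttl(10000000))  # capped to 5184000
print(effective_ttl(60))        # honored as-is: 60
```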

As an example,

curl -v -X POST \
-H "encryption-key: keyid=p256dh;dh=..." \
-H "encryption: keyid=p256dh;salt=..." \
-H "content-encoding: aesgcm128" \
-H "TTL: 60" \

> POST /push/LongStringOfStuff HTTP/1.1
> User-Agent: curl/7.35.0
> Host:
> Accept: */*
> encryption-key: ...
> encryption: ...
> content-encoding: aesgcm128
> TTL: 60    
> Content-Length: 36
> Content-Type: application/x-www-form-urlencoded
* upload completely sent off: 36 out of 36 bytes
< HTTP/1.1 201 Created
< Access-Control-Allow-Headers: content-encoding,encryption,...
< Access-Control-Allow-Methods: POST,PUT
< Access-Control-Allow-Origin: *
< Access-Control-Expose-Headers: location,www-authenticate
< Content-Type: text/html; charset=UTF-8
< Date: Thu, 18 Feb 2016 20:33:55 GMT
< Location:
< TTL: 60
< Content-Length: 0
< Connection: keep-alive

In this example, the message would be held for up to 60 seconds before either the recipient reconnected, or the message was discarded.

If you fail to include a TTL header, the server will respond with an HTTP status code of 400. The result will be similar to:

< HTTP/1.1 400 Bad Request
< Access-Control-Allow-Headers: content-encoding,encryption,...
< Access-Control-Allow-Methods: POST,PUT
< Access-Control-Allow-Origin: *
< Access-Control-Expose-Headers: location,www-authenticate
< Content-Type: application/json
< Date: Fri, 19 Feb 2016 00:46:43 GMT
< Content-Length: 84
< Connection: keep-alive
{"errno": 111, "message": "Missing TTL header", "code": 400, "error": "Bad Request"}

The returned error will contain a JSON block that describes what went wrong. Refer to our list of error codes for more detail.
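A sender can use the errno field in that JSON block to distinguish this failure from other 400s. A sketch (the explain_push_error helper is illustrative, not part of any client library):

```python
import json

def explain_push_error(status, body):
    # Hypothetical helper: map the documented errno to actionable advice.
    if status == 400:
        error = json.loads(body)
        if error.get("errno") == 111:
            return "Missing TTL header: add 'TTL: <seconds>' to the POST."
    return "Unexpected response: %d" % status

body = ('{"errno": 111, "message": "Missing TTL header", '
        '"code": 400, "error": "Bad Request"}')
print(explain_push_error(400, body))
```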

We understand that the change to require the TTL header may not have reached everyone, and we apologize for that. We’re going to be “softening” the requirement soon. The server will return a 400 only if the remote client is not immediately connected; otherwise we will accept the WebPush with the usual 201. Please understand that this relaxation of the spec is temporary and we will return to full specification compliance in the near future.

We’re starting up a Twitter account, @MozillaWebPush, where we’ll post announcements, status, and other important information related to our implementation of WebPush. We encourage you to follow that account.

Shutting down the legacy Sync service

In response to strong user uptake of Mozilla’s new Sync service powered by Firefox Accounts, earlier this year we announced a plan to transition users off of our legacy Sync infrastructure and onto the new product.  With this migration now well under way, it is time to settle the details of a graceful end-of-life for the old service.

We will shut down the legacy Sync service on September 30th 2015.

We encourage all users of the old service to upgrade to a Firefox Account, which offers a simplified setup process, improved availability and reliability, and the possibility of recovering your data even if you lose all of your devices.

Users on Firefox 37 or later are currently being offered a guided migration process to make the experience as seamless as possible.  Users on older versions of Firefox will see a warning notice and will be able to upgrade manually.  Users running their own Sync server, or using a Sync service hosted by someone other than Mozilla, will not be affected by this change.

Update: shutdown of the legacy Sync service has been completed.   Users who are yet to migrate off the service will be offered the guided upgrade experience until Firefox 44.  Firefox 44 and later will automatically and silently disconnect from legacy Sync.

We are committed to making this transition as smooth as possible for Firefox users.  If you have any questions, comments or concerns, don’t hesitate to reach out to us on or in #sync on Mozilla IRC.




FAQ

  • What will happen on September 30th 2015?

After September 30th, we will decommission the hardware hosting the legacy Sync service and discard all data stored therein.  The corresponding DNS names will be redirected to static error pages, to ensure that appropriate messaging is provided for users who have yet to upgrade to the new service.

  • What’s the hurry? Can’t you just leave it running in maintenance mode?

Unfortunately not.  While we want to ensure as little disruption as possible for our users, the legacy Sync service is hosted on aging hardware in a physical data-center and incurs significant operational costs.  Maintaining the service beyond September 30th would be prohibitively expensive for Mozilla.

  • What about Extended Support Release (ESR)?

Users on the ESR channel have support for Firefox Accounts and the new Sync service as of Firefox 38.  Previous ESR versions reach end-of-life in early August and we encourage all users to upgrade to the latest version.

  • Will my data be automatically migrated to the new servers?

No, the strong encryption used by both Sync systems means that we cannot automatically migrate your data on the server.  Once you complete your account upgrade, Firefox will re-upload your data to the new system (so if you have a lot of bookmarks, you may want to ensure you’re on a reliable network connection).

  • Are there security considerations when upgrading to the new system?

Both the new and old Sync systems provide industry-leading security for your data: client-side end-to-end encryption of all synced data, using a key known only to you.

In legacy Sync this was achieved by using a complicated pairing flow to transfer the encryption key between devices.  With Firefox Accounts we have replaced this with a key securely derived from your account password.  Pick a strong password and you can remain confident that your synced data will only be seen by you.

  • Does Mozilla use my synced data to market to me, or sell this data to third parties?

No.  Our handling of your data is governed by Mozilla’s privacy policy which does not allow such use.  In addition, the strong encryption provided by Sync means that we cannot use your synced data for such purposes, even if we wanted to.

  • Is the new Sync system compatible with Firefox’s master password feature?

Yes.  There was a limitation in previous versions of Firefox that prevented Sync from working when a master password was enabled, but this has since been resolved.  Sync is fully compatible with the master password feature in the latest version of Firefox.

  • What if I am running a custom or self-hosted Sync server?

This transition affects only the default Mozilla-hosted servers.  If you are using a custom or self-hosted server setup, Sync should continue to work uninterrupted and you will not be prompted to upgrade.

However, the legacy Sync protocol code inside Firefox is no longer maintained, and we plan to begin its removal in 2016.  You should consider migrating your server infrastructure to use the new protocols; see below.

  • Can I self-host the new system?

Yes, either by hosting just the storage servers or by running a full Firefox Accounts stack.  We welcome feedback and contributions on making this process easier.

  • What if I’m using a different browser (e.g. SeaMonkey, Pale Moon, …)?

Your browser vendor may already provide alternate hosting.  If not, you should consider hosting your own server to ensure uninterrupted functionality.

The changing face of software quality


QA as a function in software is changing. It is less about validating changes and writing test plans, and more about pushing quality forward through tools, automation, and process refinement. We must work to improve the QA phase (everything between “dev complete” and “ready for deployment”) and quality throughout the whole life-cycle. In doing so, QA becomes a facilitator, an ambassador, a shepherd of code, providing simple, confident, painless ways of getting things out the door. We may become reliability engineers, production engineers, tools engineers, infrastructure engineers, and/or release engineers. In smaller teams you may wear many of these hats; in bigger ones they may be distinct roles. The line between QA and Ops may blur, as might the line between Dev and QA. Manual QA still plays a large role, but as a complement to the automation, not the default. It is another tool in the toolbox.

What is QA and Software Quality really?

In my mind the purpose of QA is not merely to find bugs or validate changes before release, but to ensure that the product that our users receive is of high quality. That can mean many different things, and the number of bugs is definitely a part of it, but I think that is a simplification of the definition of software quality. QA is changing in many companies these days, and we’re making a big push this year in Mozilla’s Cloud Services team to redefine what QA and quality means.

Quality software, to me, is about happy users. It is users who love and care about your product, users who will evangelize your product, and users that contribute feedback to make your product better. If you have those users, then you have a great product.

As QA we don’t write code, we don’t run the servers, we don’t decide what to build nor how it should look, but we do make sure that all of those things are up to the standards that we all set for ourselves and our products. If our goal in software is to make our users happy then giving them a quality experience that solves a problem for them and makes their lives easier is how we achieve that goal. This is what software is all about, and is something that everyone strives for. Quality really is everyone’s concern. It should not be an afterthought and it shouldn’t fall solely on QA. It is a mindset that needs to be ingrained in engineers, product managers, designers, managers, and operations. It is a part of company culture.

Aspects of Quality

When talking about quality it helps to have a clear idea what we mean when we say software should be of high quality.

  • Performance
    • load times
    • animation/ui smoothness
    • responsiveness to interactions
  • Stability
    • consistency, is it the same experience every time I use the product
    • limited downtime
  • Functionality
    • does it do what we said it would
    • does it do so ‘correctly’ (limited bugs)
    • does it solve a problem
  • Usability
    • is it simple, attractive, unobtrusive
    • does it frustrate or annoy
    • does it make sense
    • does it do what the user wants and expects

These aspects involve everyone. You can lobby for other aspects that you find important, and please do, this list is by no means exhaustive.

Great, so we’re thinking about ways that we can deliver quality software beyond test runs and checklists, and what it means to do so, but there’s a bit of an issue.

The Grand Quality Renaissance

A lot of what we do in traditional QA is 1:1 interactions with projects when there are changes ready to be tested for release. We check functionality, we check changes, we check for regression, we check for performance and load. This is ‘fine’, by most standards. It gets the job done, the product is validated, and it goes for release. All is good… Except that it doesn’t scale, the work you do doesn’t provide ongoing value to the team, or the company. As soon as you complete your testing that work is ‘lost’ and then you start from scratch with the next release.

In Cloud Services we have upwards of 20 projects that are somewhere between maintenance mode, active, or in planning. We simply don’t have the head count (5 currently) to manage all of those projects simultaneously, and even if we did, plans change. We have to react to the ongoing changes in the team, the company, and the world (well, at least as it relates to our products). Sometimes we won’t be able to react because 1:1 interactions aren’t flexible enough. Okay, simple answer: let’s grow the team. That may work at times, but it also takes a lot of time and a lot of money, and even worse, you might put yourself in a position where you have the head count to manage 20 projects, but suddenly there are only 10 projects that we care about going forward, and then we have to let some people go. That’s not fair to anyone; we need a better solution.

What we’re talking about, really, is how can we do more with less, how can we be more effective with the people and resources that we have. We need to do work that is effective and applicable to either:

  • many projects at once (which can be hard to do for tests, unless there is a lot of crossover, but possible with things like infrastructure and build tools), or
  • can be reused by the same project each time a change comes up for release (much more realistic for testing).

Once we have that work in place, then we know that if we grow the team it is deliberate, not reactionary. Being in control is good!

That doesn’t really mean anything concrete, so let’s put some goals in place to give us something to work towards.

  • Improve 1:1 Person:Project effectiveness, scale without sacrificing quality
  • Implement quality measurement tooling and practices to track changes and improvements
  • Facilitate team scaling by producing work of ongoing value via automated tests and test infrastructure
  • Reduce deployment time/pain by automating build and release pipeline
  • Reduce communication overhead and back and forth between teams by shortening feedback loops
  • Increase confidence in releases through stability and results from our build/tools/test/infrastructure work
  • Reframe what quality means in cloud services through leadership and driving forward change

Achieving those goals means we as QA are more effective, but the side effects to the entire organization are numerous. We’ll see more transparency between teams, developers will receive feedback faster and be more confident in their work, they take control of their own builds and test runs removing back and forth communication delays, operations can one-click release to production with fewer concerns about late nights and rollbacks, more focus can be applied to upholding the aspects of quality by all rather than fighting fires, and ideally productivity, quality, and happiness go up (how do we gather some telemetry on that?).

We’re putting together the concrete plan for what this will look like at Mozilla’s Cloud Services. It will take a while to achieve these goals, and there will undoubtedly be issues along the way, but the end result is much more stable, reliable, and consistent. That last one is particularly important. If we’re consistent then we can measure ourselves, which leads to us being predictable. Being predictable, statistically speaking, means our decision making is much more informed. At a higher level, our focus shifts to making the best product we can, delivering it quickly, and iterating on feedback. We learn from our users and integrate their feedback into our products, without builds or bugs or tools or process getting in the way. We make users happy. When it comes down to it, that’s really what this is all about – that’s the utopia. Let’s build our utopia!

Combain Deal Helps Expand Mozilla Location Service

Last week, we signed an agreement with Combain AB, a Swedish company dedicated to accurate global positioning through wireless and cell tower signals.

The agreement lets Mozilla use Combain’s service as a fallback for Mozilla Location Service. Additionally, we’re exchanging data from our stumbling efforts to improve the overall quality of our databases.

We’re excited about both parts of this deal.

  • Having the ability to fall back to another provider in situations where we feel that our data set is too sparse to provide an acceptable location gives Mozilla Location Service users some extra confidence in the values provided by the service. We have also extended the terms of our deal with Google to use the Google Location Service on Firefox products, giving us excellent location tools across our product line.
  • Exchanging data helps us build out our database. Ultimately, we want this database to be available to the community, allowing companies such as Combain to focus on their algorithmic analysis, service guarantees and improvements to geolocation without having to deal with the initial barrier of gathering the data. We believe this data should be a public resource and are building it up with that goal in mind. Exchange programs such as the one we’ve engaged in with Combain help us get closer to the goal of comprehensive coverage while allowing us to do preliminary testing of our data exchange process with privacy-respecting partners.

We’ve got a long way to go to build out our map and all of Mozilla Location Service, and we hope to announce more data exchange agreements like this in the future. You can watch our progress (fair warning – you can lose a lot of time poking at that map!) and contribute to our efforts by enabling geolocation stumbling in the Firefox for Android browser. Let’s map the world!

February Show and Tells

Each week the Cloud Services team hosts a “Show and Tell” where people in the team share the interesting stuff they’ve been doing. This post wraps up the month’s show and tells so that we can share them with everyone.

Jan 28

  • Andy talks about blogging the show and tells
  • Zach showed his work putting avatars in the browser UI
  • Kit showed using Go interfaces to mock network interfaces

Jan 28th recording (29 min, 21 seconds)

Feb 11

  • Ian showed how to super power the permissions on your site via an addon
  • Sam discussed error handling results from usenix.

Feb 11th recording (12 min, 45 seconds)

Unfortunately the next two show and tells were cancelled. We’re hoping for a fuller list in March.

What’s Hawk authentication and how to use it?

At Mozilla, we recently had to implement the Hawk authentication scheme for a number of projects, and we ended up creating two libraries to ease integration into Pyramid and Node.js apps.

But maybe you don’t know Hawk?

Hawk is a relatively new technology, crafted by one of the original OAuth specification authors, that intends to replace the 2-legged OAuth authentication scheme with a simpler approach.

It is an authentication scheme for HTTP, built around HMAC digests of requests and responses.

Every authenticated client request has an Authorization header containing a MAC (Message Authentication Code) and some additional metadata. Each server response to an authenticated request contains a Server-Authorization header that authenticates the response, so the client is sure it comes from the right server.
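The core idea can be sketched in a few lines: both sides share a key, build a normalized string describing the request, and compare HMACs. This is an illustration of the concept only; the real normalized string and header format are defined by the Hawk specification:

```python
import base64
import hashlib
import hmac

def request_mac(key, ts, nonce, method, path, host, port):
    # Illustrative normalization; Hawk's actual format differs.
    normalized = "\n".join([ts, nonce, method.upper(), path, host, str(port)])
    digest = hmac.new(key, normalized.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

mac = request_mac(b"token-key", "1458679343", "j4h3g2",
                  "GET", "/resource/1", "example.com", 443)
# The client sends the MAC; the server recomputes it from the same
# inputs and accepts the request only if the two values match.
server_mac = request_mac(b"token-key", "1458679343", "j4h3g2",
                         "GET", "/resource/1", "example.com", 443)
assert mac == server_mac
```

The timestamp and nonce protect against replay: a server can reject requests whose timestamp is stale or whose nonce it has already seen.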

Exchange of the hawk id and hawk key

To sign the requests, a client needs to retrieve a token id and a token key from the server.

The excellent team behind Firefox Accounts put together a scheme to do that, if you are not interested in the details, jump directly to the next section.

When your server application needs to send you the credentials, it will return them inside a specific Hawk-Session-Token header. This token can be derived to obtain the two values (the hawk id and the hawk key) that you will use to sign your next requests.

In order to get the hawk credentials, you’ll need to:

First, do an HKDF derivation on the given session token. You’ll need to use the following parameters:

key_material = HKDF(hawk_session, "", '', 32*2)

The info string is a reference to this way of deriving the credentials, not an actual URL.

Then, the key material you’ll get out of the HKDF needs to be separated into two parts: the first 32 hex characters are the hawk id, and the next 32 are the hawk key.
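For reference, HKDF (RFC 5869) is short enough to sketch with the standard library. The info string and the exact encoding of the two halves are placeholders here; use the values from the Firefox Accounts documentation:

```python
import hashlib
import hmac

def hkdf(ikm, info, length, salt=b""):
    # RFC 5869: HKDF-Extract, then HKDF-Expand, with SHA-256.
    if not salt:
        salt = b"\x00" * hashlib.sha256().digest_size
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

session_token = bytes.fromhex("aa" * 32)  # placeholder session token
key_material = hkdf(session_token, b"<info-string>", 32 * 2)
hawk_id, hawk_key = key_material[:32], key_material[32:]
```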


To showcase APIs in the documentation, we like to use HTTPie, a curl replacement with a nicer interface, built around the Python requests library.

Luckily, HTTPie allows you to plug in different authentication schemes, so we created a wrapper around mohawk to add Hawk support to the requests lib.

Doing hawk requests in your terminal is now as simple as:

$ pip install requests-hawk httpie
$ http GET localhost:5000/registration --auth-type=hawk --auth='id:key'

In addition, it helps when crafting requests using the requests library:

Alternatively, if you don’t have the token id and key, you can pass the hawk session token presented earlier and the lib will take care of the derivation for you.

Integrate with python pyramid apps

If you’re writing Pyramid applications, you’ll be happy to learn that Ryan Kelly put together a library that makes Hawk work as an authentication provider for them. I’m amazed at how simple it is to use.

Here is a demo of how we implemented it for Daybed, a form validation and data storage API:

The get_hawk_id function takes a request and a token id and returns a (token_id, token_key) tuple.

How you want to store the tokens and retrieve them is up to you. The default implementation (e.g. if you don’t pass a decode_hawk_id function) decodes the key from the token itself, using a master secret on the server (so you don’t need to store anything).
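That “derive instead of store” trick can be sketched as follows. This is an illustration of the idea, not the library’s actual implementation or token format:

```python
import hashlib
import hmac

# Hypothetical master secret; in practice load it from configuration
# and keep it out of version control.
MASTER_SECRET = b"server-side-master-secret"

def derive_hawk_key(token_id):
    # The server never stores per-token keys: the key is an HMAC of
    # the public id under the master secret, so it can be recomputed
    # on every request.
    return hmac.new(MASTER_SECRET, token_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

print(derive_hawk_key("3c69aa"))  # the same id always yields the same key
```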

Integrate with node.js Express apps

We had to implement Hawk authentication for two Node.js projects and finally factored everything out into a library for Express, named express-hawkauth.

In order to plug it in your application, you’ll need to use it as a middleware:

If you pass the createSession parameter, all non-authenticated requests will create a new hawk session and return it with the response, in the Hawk-Session-Token header.

If you want to only check a valid hawk session exists (without creating a new one), just create a middleware which doesn’t have any createSession parameter defined.

Some reference implementations

As a reference, here is how we’re using the libraries I’m talking about, in case it helps you integrate them into your projects.

Transitioning Legacy Sync users to Firefox Accounts

It has been almost a year since the release of Firefox Accounts and the new Firefox Sync service, and the response from users has been very positive.  The simplified setup process has made it easier to get started with the system, to connect new devices, and to recover data if a device is lost — all of which has led to the new system quickly gathering more active daily users than its predecessor.

During this time we have kept the legacy Sync infrastructure in place and working as usual, so that users who had set up Sync on older versions of Firefox would not be disrupted.  As we begin 2015 with renewed focus on delivering cloud-based services that support the Mozilla mission, it’s time to help transition these users to the new Sync system and to Firefox Accounts.

Users on legacy Sync will be prompted to upgrade to a Firefox Account beginning with Firefox 37, scheduled for release in early April.  There is a guided UI to make the experience as seamless as possible, and once complete the upgrade will be automatically and securely propagated to all connected devices.  Users on older versions of Firefox will see a warning notice and will be able to upgrade manually.

The legacy Sync servers will remain available during this time to help ensure a smooth transition.  We will monitor their ongoing use and decide on a timeline for decommissioning the hardware based on the success of our transition strategy.

We’re looking forward to introducing more users to the improved Sync system, and to rolling out more services for your Firefox Account in 2015.  Don’t hesitate to reach us in #sync on IRC with any questions or comments.




FAQ

  •  When will the legacy Sync servers be switched off?

We expect to decommission this infrastructure before the end of 2015, but no firm date has been set.  This decision will be based on ongoing monitoring of its use and the success of our transition strategy.

  •  What about Extended Support Release (ESR)?

Users on the ESR channel will start seeing the upgrade prompts in Firefox 38, which is scheduled for release in early May.  We are committed to maintaining the legacy sync infrastructure until previous ESR versions reach end-of-life on August 4.

  • Will my data be automatically migrated to the new servers?

No, the strong encryption used by both Sync systems means that we cannot automatically migrate your data on the server.  Once the account upgrade process is complete, Firefox will re-upload your data to the new system (so if you have a lot of bookmarks, you may want to ensure you’re on a reliable network connection).

  •  Are there security considerations when upgrading to the new system?

Both the new and old Sync systems provide industry-leading security for your data: end-to-end encryption of all synced data, using a key known only to you.

In legacy Sync this was achieved by using a complicated pairing flow to transfer the encryption key between devices.  With Firefox Accounts we have replaced this with a key securely derived from your account password.  Pick a strong password and you can remain confident that your synced data will only be seen by you.
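As a generic illustration of the idea — deliberately not the actual Firefox Accounts protocol, which involves further derivation steps — turning a password into a stable encryption key on the client looks something like:

```python
import hashlib

def derive_key(password, salt, iterations=100_000):
    # Generic PBKDF2-HMAC-SHA256 key derivation: deliberately slow, so
    # that guessing passwords offline is expensive. The derived key is
    # computed on the client and never needs to be sent anywhere.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations)

# Illustrative values; a real system uses a random per-account salt.
key = derive_key("correct horse battery staple", b"per-account-salt")
```

Since the key is only as unpredictable as the password it was derived from, a strong password is what keeps the encrypted data safe — hence the advice above.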

  •  Is the new Sync system compatible with Firefox’s master password feature?

Yes.  There was a limitation in previous versions of Firefox that prevented Sync from working when a master password was enabled, but this has since been resolved.  Sync is fully compatible with the master password feature in the latest version of Firefox.

  •  What if I am running a custom or self-hosted Sync server?

This transition affects only the default Mozilla-hosted servers.  If you are using a custom or self-hosted server setup, Sync should continue to work uninterrupted and you will not be prompted to upgrade.

Although we have no plans or timeline for doing so, it’s likely that support for the legacy Sync protocol will be entirely removed from Firefox at some point in the future.  You should consider migrating your server infrastructure to use the new protocols; see below.

  •  Can I self-host the new system?

Yes, either by hosting just the storage servers or by running a full Firefox Accounts stack.  We welcome feedback and contributions on making this process easier.

  •  What if I’m using a different browser (e.g. SeaMonkey, Pale Moon, …)?

You should consider hosting your own server to ensure uninterrupted functionality.

Marketplace migration to Firefox Accounts

In the last year the Firefox Marketplace migrated from Persona to Firefox Accounts. This was done for a couple of reasons, perhaps the biggest being that Firefox OS was switching to Firefox Accounts. On a Firefox OS phone, you log in once, to the phone, and are then automatically logged into the Marketplace.

The migration for the Marketplace had a few unique challenges that some other projects do not have:

  • the Marketplace app supports Firefox OS all the way back to 1.1
  • the Marketplace site supports Android and Desktop clients
  • in the past unverified emails were allowed from Persona
  • Marketplace provides Payments which use the Trusted UI

Firefox Accounts has two different ways to log in: a native flow in Firefox OS, and an OAuth-based flow for Desktop and Android clients. The number of dependencies needed to complete all of that quickly grew out of control, so we focused on a first milestone of ensuring that an OAuth flow worked for all users on all devices.

Migration from a database point of view wasn’t too complicated at first. We store the user’s email from Persona, so all we had to do was look at the user’s email from Firefox Accounts: if a user already existed with that email, they were logged in as that account; if not, we created a new account.
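That matching logic is simple enough to sketch (a hypothetical illustration — the real Marketplace code talks to a database, not a dict):

```python
# In-memory stand-in for the accounts table, keyed by email address.
accounts_by_email = {}

def login_with_firefox_account(email):
    """Log in to the account matching this Firefox Accounts email,
    creating a new account if none exists yet."""
    account = accounts_by_email.get(email)
    if account is None:
        # No existing (Persona-era) account with this email: create one.
        account = {"email": email}
        accounts_by_email[email] = account
    return account
```

The catch, described next, is that this only works if the stored email is one the user can actually verify with Firefox Accounts.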

The unverified emails were a problem for us, because it meant that a user or developer could have an email address that wasn’t routable, accessible or in any way usable. When they migrated to Firefox Accounts, an email would be sent to that old address and they were stuck. We really couldn’t see any way around this other than manually verifying accounts as best as possible and moving them over as needed.

For users who already had a Firefox Account through Sync and wanted to re-use it on the Marketplace, the Firefox Accounts team added the ability to use a pre-verified token. This allowed a user to start the registration flow with one email address but use a different one during the Firefox Accounts sign-up. At the end of the flow the Marketplace would detect the difference in email address, but still know which account the flow came from and hook it up correctly.

All of this gave birth to a flow chart, and after lots of work we had a plan:

[Flow chart of the migration plan]

Of course the road was not that smooth, as the bug list will probably reveal. The biggest difference from any other Firefox Accounts implementation is that the OAuth flow is not the same as the Persona flow. An obvious statement, and one we thought we had covered – until we hit elements like the Trusted UI, at which point it got complicated.


Once deployed we sent out an email to everyone and waited for the users to come in. Sure enough they did and we saw a large number of transitions within 48 hours.

During this process we made sure emails went out with an address that could be replied to, so I was able to follow up personally with anyone who had issues. We also logged login attempts, and when we found a bug I was able to email the affected people while we fixed and deployed it – there were only two people who hit a problem at that step, but it felt good to be able to help them.

Firefox Accounts has now been active for over 3 months, and this week we turned off the special transition flow and deleted its code, as we felt enough users and developers had migrated.

What’s next?

  • Getting the native flow deployed for Firefox OS 2.1
  • Moving to an iframe based flow and removing the popup based flow
  • Deeper integration with upcoming Firefox Accounts features, like avatars

Big thanks to the Marketplace team who implemented this and the Firefox Accounts team who did an awful lot of work helping us out.