Stolen Passwords Used to Break into Firefox Accounts

We recently discovered a pattern of suspicious logins to Firefox Accounts. It appears that an attacker with access to passwords from data breaches at other websites has been attempting to use those passwords to log into users’ Firefox Accounts. In some cases where a user reused their Firefox Accounts password on another website and that website was breached, the attacker was able to take the password from the breach and use it to log into the user’s Firefox Account.

We’ve already taken steps to protect our users. We automatically reset the passwords of users whose accounts were broken into. We also notified these users with instructions on how to regain access to their accounts and further protect themselves going forward. This investigation is ongoing and we will notify users if we discover unauthorized activity on their account.

User security is paramount to us. It is part of our mission to help build an Internet that truly puts people first and where individuals are empowered, safe and independent. Events like this are a reminder of the importance of good password hygiene: using strong, unique passwords across websites is important for staying safe online. We’re also working on rate-limiting and other security mechanisms to provide further protection for our users.

Using VAPID with WebPush

This post continues discussion about using the evolving WebPush feature.

One of the problems with offering a service that doesn’t require identification is that it’s hard to know who’s responsible for something. For instance, if a consumer is having problems, or not using the service correctly, it is a challenge to contact them. One option is to require strong identification to use the service, but there are plenty of reasons to not do that, notably privacy.

The answer is to have each publisher optionally identify themselves, but how do we prevent everyone from saying that they’re something popular like “CatFacts”? The Voluntary Application Server Identification for Web Push (VAPID) protocol was drafted to try and answer that question.

Making a claim

VAPID uses JSON Web Tokens (JWT) to carry identifying information. The core of the VAPID transaction is called a “claim”. A claim is a JSON object containing several common fields. It’s best to explain using an example, so let’s create a claim from a fictional CatFacts service.
{
    "aud": "https://catfacts.example.com",
    "exp": 1458679343,
    "sub": "mailto:webpush_ops@catfacts.example.com"
}

aud
The “audience” is the origin URL of the sender.
exp
The “expiration” date is the UTC time in seconds when the claim should expire. This should not be longer than 24 hours from the time the request is made. For instance, in JavaScript you can use: Math.floor(Date.now() * .001 + 86400).
sub
The “subscriber” is the primary contact email for this subscription. You’ll note that for this, we’ve used a generic email alias rather than a specific person. This approach allows multiple people to be alerted, or a new person to be assigned, without having to change code.

I’ve added spaces and new lines to make things more readable. JWT objects normally strip those out.

Signing and Sealing

A JWT object actually has three parts: a standard header, the claim (which we just built), and a signature.

The header is very simple and is standard to any VAPID JWT object.

{"typ": "JWT","alg":"ES256"}

If you’re curious, typ is the “type” of object (a “JWT”), and alg is the signing algorithm to use. In our case, we’re using Elliptic Curve Cryptography based on the NIST P-256 curve (or “ES256”).

We’ve already discussed what goes in the claim, so now, there’s just the signature. This is where things get complicated.

Here’s code to sign the claim using Python 2.7.


# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.

import base64
import time
import json

import ecdsa
from jose import jws


def make_jwt(header, claims, key):
    vk = key.get_verifying_key()
    jwt = jws.sign(
        claims,
        key,
        algorithm=header.get("alg", "ES256")).strip("=")
    # The "0x04" octet indicates that the key is in the
    # uncompressed form. This form is required by the
    # server and DOM API. Other crypto libraries
    # may prepend this prefix automatically.
    raw_public_key = "\x04" + vk.to_string()
    public_key = base64.urlsafe_b64encode(raw_public_key).strip("=")
    return (jwt, public_key)


def main():
    # This is a standard header for all VAPID objects:
    header = {"typ": "JWT", "alg": "ES256"}

    # These are our customized claims.
    claims = {"aud": "https://catfacts.example.com",
              "exp": int(time.time()) + 86400,
              "sub": "mailto:webpush_ops@catfacts.example.com"}

    my_key = ecdsa.SigningKey.generate(curve=ecdsa.NIST256p)
    # You can store the private key by writing
    #   my_key.to_pem() to a file.
    # You can reload the private key by reading
    #   my_key.from_pem(file_contents)

    (jwt, public_key) = make_jwt(header, claims, my_key)

    # Return the headers we'll use.
    headers = {
        "Authorization": "Bearer %s" % jwt,
        "Crypto-Key": "p256ecdsa=%s" % public_key,
    }

    print json.dumps(headers, sort_keys=True, indent=4)


main()

There’s a little bit of cheating here in that I’m using the “python ecdsa” library and JOSE’s jws library, but there are similar libraries for other languages. The important bit is that a key pair is created.

This key pair should be safely retained for the life of the subscription. In most cases, just the private key can be retained since the public key portion can be easily derived from it. You may want to save both private and public keys since we’re working on a dashboard that will use your public key to let you see info about your feeds.
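As a minimal sketch (Python 2.7, matching the script above, where my_key is the signing key generated there and the file name is chosen purely for illustration), persisting and later restoring the key pair could look like this:

import base64

import ecdsa

# Persist only the private key; the public half can be re-derived from it.
with open("vapid_private_key.pem", "w") as key_file:
    key_file.write(my_key.to_pem())

# Later: restore the signing key and recompute the uncompressed public key.
restored_key = ecdsa.SigningKey.from_pem(open("vapid_private_key.pem").read())
verifying_key = restored_key.get_verifying_key()
public_key = base64.urlsafe_b64encode("\x04" + verifying_key.to_string()).strip("=")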

The output of the above script looks like:

{
    "Authorization": "Bearer eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJodHRwczovL2NhdGZhY3RzLmV4YW1wbGUuY29tIiwiZXhwIjoxNDU4Njc5MzQzLCJzdWIiOiJtYWlsdG86d2VicHVzaF9vcHNAY2F0ZmFjdHMuZXhhbXBsZS5jb20ifQ.U8MYqcQcwFcK2UkeiISahgZFvaOw56ZQvHYZc4zXC2Ed48-lk3MoYExGagKLwr4lSdbARZEbblAprQfXlap3jw",
    "Crypto-Key": "p256ecdsa=EJwJZq_GN8jJbo1GGpyU70hmP2hbWAUpQFKDByKB81yldJ9GTklBM5xqEwuPM7VuQcyiLDhvovthPIXx-gsQRQ=="
}

These are the HTTP request headers that you should include in the POST request when you send a message. VAPID uses these headers to identify a subscription.

The “Crypto-Key” header may contain many sub-components, separated by a semi-colon (“;”). You can insert the “p256ecdsa” value, which contains the public key, anywhere in that list. This header is also used to relay the encryption key, if you’re sending a push message with data. The JWT is relayed in the “Authorization” header as a “Bearer” token. The server will use the public key to check the signature of the JWT and ensure that it’s correct.
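For example, a data-bearing push request might carry both values in a single header along these lines (placeholder values, using the semi-colon separated format described above):

Crypto-Key: dh=<encryption-public-key>;p256ecdsa=<vapid-public-key>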

Again, VAPID is purely optional. You don’t need it in order to send messages. Including VAPID information will let us contact you if we see a problem. It will also be used for upcoming features such as restricted subscriptions, which will help minimize issues if the endpoint is ever lost, and the developer dashboard, which will provide you with information about your subscription and some other benefits. We’ll discuss those more when the features become available.

We’ve also published a few tools that may help you understand and use VAPID. The Web Push Data Test Page (GitHub Repo) can help library authors develop and debug their code by presenting “known good” values. The VAPID verification page (GitHub Repo) is a simpler, “stand alone” version that can test and generate values.

As always, your input is welcome.

Updated to spell out what VAPID stands for.

TTL Header Requirement Relaxed for WebPush

A few weeks ago, we rolled out a version of the WebPush server that required an HTTP TTL header to be included with each subscription update. A number of publishers reported issues with the sudden change, and we regret not properly notifying all parties.

We’re taking steps to solve that problem, including posting here and on Twitter.

To help make it easier for publishers and programmers to experiment with WebPush, we have temporarily relaxed the TTL requirement. If a recipient’s computer is actively connected, the server will accept and deliver a WebPush notification submitted without an HTTP TTL header. This is similar to the older behavior where updates would be accepted with a presumed TTL value of 0 (zero), because the server was able to immediately deliver the notification. If the recipient is not actively connected and the TTL header is missing, the server will return an HTTP status of 400 and the following body:

{"status": 400, "errno": 111, "message": "Missing TTL Header, see: https://webpush-wg.github.io/webpush-protocol/#rfc.section.6.2"}

It’s our hope that this error is descriptive enough to help programmers understand what the error is and how to best resolve it. Of course, we’re always open to suggestions on how we can improve this.

Also, please note that we consider this a temporary fix. We prefer to stay as close to the standard as possible, and will eventually require a TTL header for all WebPush notifications. We’re keeping an eye on our error logs and once we see the number of incorrect calls fall below a certain percentage of the overall calls, we’ll announce the end of life of this temporary fix.

WebPush’s New Requirement: TTL Header

WebPush is a new service where web applications can receive notification messages from servers. WebPush is available in Firefox 45 and later and will be available in Firefox for Android soon. Since it’s a new technology, the WebPush specification continues to evolve. As we’ve been rolling out the new service, we saw that many updates were not reaching their intended audience.

Each WebPush message has a TTL (Time To Live), which is the number of seconds that a message may be stored if the user is not immediately available. This value is specified as a TTL: header provided as part of the WebPush message sent to the push server. The original draft of the specification stated that if the header is missing, the default TTL is zero (0) seconds. This means that if the TTL header was omitted and the corresponding recipient was not actively connected, the message was immediately discarded. This was probably not obvious to senders, since the push server would still return a 201 Success status code.

Immediately discarding the message if the user is offline is probably not what many developers expect to happen. The working group decided that it was better for the sender to explicitly state the length of time that the message should live. The Push Service may still limit this TTL to its own maximum. In any case, the Push Service server would return the actual TTL in the POST response.

You can still specify a TTL of zero, but it will be you setting it explicitly, rather than the server setting it implicitly. Likewise, if you were to specify TTL: 10000000, and the Push Service only supports a maximum TTL of 5,184,000 seconds (about one month), then the Push Service would respond with TTL: 5184000.

As an example,


curl -v -X POST https://updates.push.services.mozilla.com/push/LongStringOfStuff \
-H "encryption-key: keyid=p256dh;dh=..." \
-H "encryption: keyid=p256dh;salt=..." \
-H "content-encoding: aesgcm128" \
-H "TTL: 60" \
--data-binary @encrypted.data

> POST /push/LongStringOfStuff HTTP/1.1
> User-Agent: curl/7.35.0
> Host: updates.push.services.mozilla.com
> Accept: */*
> encryption-key: ...
> encryption: ...
> content-encoding: aesgcm128
> TTL: 60    
> Content-Length: 36
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 36 out of 36 bytes
< HTTP/1.1 201 Created
< Access-Control-Allow-Headers: content-encoding,encryption,...
< Access-Control-Allow-Methods: POST,PUT
< Access-Control-Allow-Origin: *
< Access-Control-Expose-Headers: location,www-authenticate
< Content-Type: text/html; charset=UTF-8
< Date: Thu, 18 Feb 2016 20:33:55 GMT
< Location: https://updates.push.services.mozilla.com...
< TTL: 60
< Content-Length: 0
< Connection: keep-alive

In this example, the message would be held for up to 60 seconds before either the recipient reconnected, or the message was discarded.

If you fail to include a TTL header, the server will respond with an HTTP status code of 400. The result will be similar to:


< HTTP/1.1 400 Bad Request
< Access-Control-Allow-Headers: content-encoding,encryption,...
< Access-Control-Allow-Methods: POST,PUT
< Access-Control-Allow-Origin: *
< Access-Control-Expose-Headers: location,www-authenticate
< Content-Type: application/json
< Date: Fri, 19 Feb 2016 00:46:43 GMT
< Content-Length: 84
< Connection: keep-alive
<
{"errno": 111, "message": "Missing TTL header", "code": 400, "error": "Bad Request"}

The returned error will contain a JSON block that describes what went wrong. Refer to our list of error codes for more detail.

We understand that the change to require the TTL header may not have reached everyone, and we apologize for that. We’re going to be “softening” the requirement soon. The server will return a 400 only if the remote client is not immediately connected; otherwise we will accept the WebPush with the usual 201. Please understand that this relaxation of the spec is temporary and we will return to full specification compliance in the near future.

We’re starting up a Twitter account, @MozillaWebPush, where we’ll post announcements, status, and other important information related to our implementation of WebPush. We encourage you to follow that account.

Shutting down the legacy Sync service

In response to strong user uptake of Mozilla’s new Sync service powered by Firefox Accounts, earlier this year we announced a plan to transition users off of our legacy Sync infrastructure and onto the new product.  With this migration now well under way, it is time to settle the details of a graceful end-of-life for the old service.

We will shut down the legacy Sync service on September 30th 2015.

We encourage all users of the old service to upgrade to a Firefox Account, which offers a simplified setup process, improved availability and reliability, and the possibility of recovering your data even if you lose all of your devices.

Users on Firefox 37 or later are currently being offered a guided migration process to make the experience as seamless as possible.  Users on older versions of Firefox will see a warning notice and will be able to upgrade manually.  Users running their own Sync server, or using a Sync service hosted by someone other than Mozilla, will not be affected by this change.

Update: shutdown of the legacy Sync service has been completed.   Users who are yet to migrate off the service will be offered the guided upgrade experience until Firefox 44.  Firefox 44 and later will automatically and silently disconnect from legacy Sync.

We are committed to making this transition as smooth as possible for Firefox users.  If you have any questions, comments or concerns, don’t hesitate to reach out to us on sync-dev@mozilla.org or in #sync on Mozilla IRC.


FAQ


  • What will happen on September 30th 2015?

After September 30th, we will decommission the hardware hosting the legacy Sync service and discard all data stored therein.  The corresponding DNS names will be redirected to static error pages, to ensure that appropriate messaging is provided for users who have yet to upgrade to the new service.

  • What’s the hurry? Can’t you just leave it running in maintenance mode?

Unfortunately not.  While we want to ensure as little disruption as possible for our users, the legacy Sync service is hosted on aging hardware in a physical data-center and incurs significant operational costs.  Maintaining the service beyond September 30th would be prohibitively expensive for Mozilla.

  • What about Extended Support Release (ESR)?

Users on the ESR channel have support for Firefox Accounts and the new Sync service as of Firefox 38.  Previous ESR versions reach end-of-life in early August and we encourage all users to upgrade to the latest version.

  • Will my data be automatically migrated to the new servers?

No, the strong encryption used by both Sync systems means that we cannot automatically migrate your data on the server.  Once you complete your account upgrade, Firefox will re-upload your data to the new system (so if you have a lot of bookmarks, you may want to ensure you’re on a reliable network connection).

  • Are there security considerations when upgrading to the new system?

Both the new and old Sync systems provide industry-leading security for your data: client-side end-to-end encryption of all synced data, using a key known only to you.

In legacy Sync this was achieved by using a complicated pairing flow to transfer the encryption key between devices.  With Firefox Accounts we have replaced this with a key securely derived from your account password.  Pick a strong password and you can remain confident that your synced data will only be seen by you.

  • Does Mozilla use my synced data to market to me, or sell this data to third parties?

No.  Our handling of your data is governed by Mozilla’s privacy policy which does not allow such use.  In addition, the strong encryption provided by Sync means that we cannot use your synced data for such purposes, even if we wanted to.

  • Is the new Sync system compatible with Firefox’s master password feature?

Yes.  There was a limitation in previous versions of Firefox that prevented Sync from working when a master password was enabled, but this has since been resolved.  Sync is fully compatible with the master password feature in the latest version of Firefox.

  • What if I am running a custom or self-hosted Sync server?

This transition affects only the default Mozilla-hosted servers.  If you are using a custom or self-hosted server setup, Sync should continue to work uninterrupted and you will not be prompted to upgrade.

However, the legacy Sync protocol code inside Firefox is no longer maintained, and we plan to begin its removal in 2016.  You should consider migrating your server infrastructure to use the new protocols; see below.

  • Can I self-host the new system?

Yes, either by hosting just the storage servers or by running a full Firefox Accounts stack.  We welcome feedback and contributions on making this process easier.

  • What if I’m using a different browser (e.g. SeaMonkey, Pale Moon, …)?

Your browser vendor may already provide alternate hosting.  If not, you should consider hosting your own server to ensure uninterrupted functionality.

The changing face of software quality

TL;DR

QA as a function in software is and has been changing. It is less about validating changes and writing test plans and more about pushing quality forward through tools, automation, and process refinement. We must work to improve the QA phase (everything between “dev complete” and “ready for deployment”), and quality throughout the whole life-cycle. In doing so QA becomes a facilitator, an ambassador, a shepherd of code, providing simple, confident, painless ways of getting things out the door. We may become reliability engineers, production engineers, tools engineers, infrastructure engineers, and/or release engineers. In smaller teams you may be many of these things, in bigger ones they may be distinct roles. The line between QA and Ops may blur, as might the line between Dev and QA. Manual QA still plays a large role, but as a complement to these practices rather than the default; it is another tool in the toolbox.

What is QA and Software Quality really?

In my mind the purpose of QA is not merely to find bugs or validate changes before release, but to ensure that the product that our users receive is of high quality. That can mean many different things, and the number of bugs is definitely a part of it, but I think that is a simplification of the definition of software quality. QA is changing in many companies these days, and we’re making a big push this year in Mozilla’s Cloud Services team to redefine what QA and quality means.

Quality software, to me, is about happy users. It is users who love and care about your product, users who will evangelize your product, and users that contribute feedback to make your product better. If you have those users, then you have a great product.

As QA we don’t write code, we don’t run the servers, we don’t decide what to build nor how it should look, but we do make sure that all of those things are up to the standards that we all set for ourselves and our products. If our goal in software is to make our users happy then giving them a quality experience that solves a problem for them and makes their lives easier is how we achieve that goal. This is what software is all about, and is something that everyone strives for. Quality really is everyone’s concern. It should not be an afterthought and it shouldn’t fall solely on QA. It is a mindset that needs to be ingrained in engineers, product managers, designers, managers, and operations. It is a part of company culture.

Aspects of Quality

When talking about quality it helps to have a clear idea what we mean when we say software should be of high quality.

  • Performance
    • load times
    • animation/ui smoothness
    • responsiveness to interactions
  • Stability
    • consistency, is it the same experience every time I use the product
    • limited downtime
  • Functionality
    • does it do what we said it would
    • does it do so ‘correctly’ (limited bugs)
    • does it solve a problem
  • Usability
    • is it simple, attractive, unobtrusive
    • does it frustrate or annoy
    • does it make sense
    • does it do what the user wants and expects

These aspects involve everyone. You can lobby for other aspects that you find important, and please do; this list is by no means exhaustive.

Great, so we’re thinking about ways that we can deliver quality software beyond test runs and checklists, and what it means to do so, but there’s a bit of an issue.

The Grand Quality Renaissance

A lot of what we do in traditional QA is 1:1 interactions with projects when there are changes ready to be tested for release. We check functionality, we check changes, we check for regressions, we check for performance and load. This is ‘fine’, by most standards. It gets the job done, the product is validated, and it goes out for release. All is good… except that it doesn’t scale, and the work you do doesn’t provide ongoing value to the team or the company. As soon as you complete your testing, that work is ‘lost’ and you start from scratch with the next release.

In Cloud Services we have upwards of 20 projects that are somewhere between maintenance mode, active, or in planning. We simply don’t have the head count (5 currently) to manage all of those projects simultaneously, and even if we did, plans change. We have to react to the ongoing changes in the team, the company, and the world (well, at least as it relates to our products). Sometimes we won’t be able to react because 1:1 interactions aren’t flexible enough. Okay, simple answer: let’s grow the team. That may work at times, but it also takes a lot of time and a lot of money, and, even worse, you might put yourself in a position where you have the head count to manage 20 projects but suddenly there are only 10 projects that we care about going forward, and then we have to let some people go. That’s not fair to anyone – we need a better solution.

What we’re talking about, really, is how can we do more with less, how can we be more effective with the people and resources that we have. We need to do work that is effective and applicable to either:

  • many projects at once (which can be hard to do for tests, unless there is a lot of crossover, but possible with things like infrastructure and build tools), or
  • can be reused by the same project each time a change comes up for release (much more realistic for testing).

Once we have that work in place, then we know that if we grow the team it is deliberate, not reactionary. Being in control is good!

That doesn’t really mean anything concrete, so let’s put some goals in place to give us something to work towards.

  • Improve 1:1 Person:Project effectiveness, scale without sacrificing quality
  • Implement quality measurement tooling and practices to track changes and improvements
  • Facilitate team scaling by producing work of ongoing value via automated tests and test infrastructure
  • Reduce deployment time/pain by automating build and release pipeline
  • Reduce communication overhead and back and forth between teams by shortening feedback loops
  • Increase confidence in releases through stability and results from our build/tools/test/infrastructure work
  • Reframe what quality means in cloud services through leadership and driving forward change

Achieving those goals makes us as QA more effective, and the side effects for the entire organization are numerous. We’ll see more transparency between teams; developers will receive feedback faster and be more confident in their work; they will take control of their own builds and test runs, removing back-and-forth communication delays; operations can one-click release to production with fewer concerns about late nights and rollbacks; and more focus can be applied by everyone to upholding the aspects of quality rather than fighting fires. Ideally, productivity, quality, and happiness all go up (how do we gather some telemetry on that?).

We’re putting together the concrete plan for what this will look like at Mozilla’s Cloud Services. It will take a while to achieve these goals, and there will undoubtedly be issues along the way, but the end result is much more stable, reliable, and consistent. That last one is particularly important. If we’re consistent then we can measure ourselves, which leads to us being predictable. Being predictable, statistically speaking, means our decision making is much more informed. At a higher level, our focus shifts to making the best product we can, delivering it quickly, and iterating on feedback. We learn from our users and integrate their feedback into our products, without builds or bugs or tools or process getting in the way. We make users happy. When it comes down to it, that’s really what this is all about – that’s the utopia. Let’s build our utopia!

Combain Deal Helps Expand Mozilla Location Service

Last week, we signed an agreement with Combain AB, a Swedish company dedicated to accurate global positioning through wireless and cell tower signals.

The agreement lets Mozilla use Combain’s service as a fallback for Mozilla Location Service. Additionally, we’re exchanging data from our stumbling efforts to improve the overall quality of our databases.

We’re excited about both parts of this deal.

  • Having the ability to fall back to another provider in situations where we feel that our data set is too sparse to provide an acceptable location gives Mozilla Location Service users some extra confidence in the values provided by the service. We have also extended the terms of our deal with Google to use the Google Location Service on Firefox products, giving us excellent location tools across our product line.
  • Exchanging data helps us build out our database. Ultimately, we want this database to be available to the community, allowing companies such as Combain to focus on their algorithmic analysis, service guarantees and improvements to geolocation without having to deal with the initial barrier of gathering the data. We believe this data should be a public resource and are building it up with that goal in mind. Exchange programs such as the one we’ve engaged in with Combain help us get closer to the goal of comprehensive coverage while allowing us to do preliminary testing of our data exchange process with privacy-respecting partners.

We’ve got a long way to go to build out our map and all of Mozilla Location Service, and we hope to announce more data exchange agreements like this in the future. You can watch our progress (fair warning – you can lose a lot of time poking at that map!) and contribute to our efforts by enabling geolocation stumbling in the Firefox for Android browser. Let’s map the world!

February Show and Tells

Each week the Cloud Services team hosts a “Show and Tell” where people in the team share the interesting stuff they’ve been doing. This post wraps up the past month’s show and tells so that we can share them with everyone.

Jan 28

  • Andy talks about blogging the show and tells
  • Zach showed his work putting avatars in the browser UI
  • Kit showed using Go interfaces to mock network interfaces

Jan 28th recording (29 min, 21 seconds)

Feb 11

  • Ian showed how to super power the permissions on your site via an addon
  • Sam discussed error handling results from USENIX.

Feb 11th recording (12 min, 45 seconds)

Unfortunately the next two show and tells were cancelled. We’re hoping for a fuller list in March.

What’s Hawk authentication and how to use it?

At Mozilla, we recently had to implement the Hawk authentication scheme for a number of projects, and we ended up creating two libraries to ease integration into Pyramid and Node.js apps.

But maybe you don’t know Hawk?

Hawk is a relatively new technology, crafted by one of the original OAuth specification authors, that intends to replace the 2-legged OAuth authentication scheme using a simpler approach.

It is an authentication scheme for HTTP, built around HMAC digests of requests and responses.

Every authenticated client request carries an Authorization header containing a MAC (Message Authentication Code) and some additional metadata. Each server response to an authenticated request then contains a Server-Authorization header that authenticates the response, so the client can be sure it comes from the right server.
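To make this concrete, a signed exchange looks roughly like the following (the id, nonce, and MAC values are illustrative only):

Authorization: Hawk id="dh37fgj492je", ts="1353832234", nonce="j4h3g2", mac="6R4rV5iE+NPoym+WwjeHzjAGXUtLNIxmo1vpMofpLAE="

Server-Authorization: Hawk mac="XIJRsMl/4oL+nn+vKoeVZPdCHXB4yJkNnBbTbHFZUYE="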

Exchange of the hawk id and hawk key

To sign the requests, a client needs to retrieve a token id and a token key from the server.

The excellent team behind Firefox Accounts put together a scheme to do that; if you are not interested in the details, jump directly to the next section.

When the server application needs to send you the credentials, it returns them inside a specific Hawk-Session-Token header. This token can then be derived into the two values (the hawk id and the hawk key) that you will use to sign your next requests.

In order to get the hawk credentials, you’ll need to:

First, do an HKDF derivation on the given session token. You’ll need to use the following parameters:

key_material = HKDF(hawk_session, "", 'identity.mozilla.com/picl/v1/sessionToken', 32*2)

The identity.mozilla.com/picl/v1/sessionToken is a reference to this way of deriving the credentials, not an actual URL.

Then, the key material you get out of the HKDF needs to be split into two parts: the first 32 bytes are the hawk id, and the next 32 bytes are the hawk key.
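Here is a minimal sketch of that derivation in Python. It assumes the session token arrives as a hex string, that the HKDF uses SHA-256 (as the Firefox Accounts protocol does), and a recent version of the cryptography package; the helper name is just for illustration.

import binascii

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def derive_hawk_credentials(hawk_session_token):
    # The Hawk-Session-Token header carries a hex string; HKDF operates on bytes.
    token = binascii.unhexlify(hawk_session_token)
    key_material = HKDF(
        algorithm=hashes.SHA256(),
        length=32 * 2,
        salt=b"",
        info=b"identity.mozilla.com/picl/v1/sessionToken",
    ).derive(token)
    # The first half is the hawk id and the second half is the hawk key;
    # hex-encode them here so they can be handled as plain strings.
    hawk_id = binascii.hexlify(key_material[:32])
    hawk_key = binascii.hexlify(key_material[32:])
    return hawk_id, hawk_key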

HTTPie

To showcase APIs in the documentation, we like to use HTTPie, a curl replacement with a nicer API, built around the Python requests library.

Luckily, HTTPie allows you to plug in different authentication schemes, so we created a wrapper around mohawk to add Hawk support to the requests library.

Doing hawk requests in your terminal is now as simple as:

$ pip install requests-hawk httpie
$ http GET localhost:5000/registration --auth-type=hawk --auth='id:key'

In addition, it helps with crafting requests using the requests library directly.
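A minimal sketch, assuming you already hold a token id and key (the HawkAuth keyword arguments shown here should be double-checked against the requests-hawk README):

import requests
from requests_hawk import HawkAuth

# Sign an outgoing request with an existing hawk id and key.
hawk_auth = HawkAuth(id="your-hawk-id", key="your-hawk-key")
response = requests.get("http://localhost:5000/registration", auth=hawk_auth)
print(response.status_code)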

Alternatively, if you don’t have the token id and key, you can pass the hawk session token presented earlier and the library will take care of the derivation for you.
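For example (again a sketch; the hawk_session keyword argument is an assumption to verify against the library’s documentation):

from requests_hawk import HawkAuth

# hawk_session_token is the hex string taken from the Hawk-Session-Token header;
# the library performs the HKDF derivation described earlier for you.
hawk_auth = HawkAuth(hawk_session=hawk_session_token)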

Integrate with Python Pyramid apps

If you’re writing Pyramid applications, you’ll be happy to learn that Ryan Kelly put together a library that makes Hawk work as an authentication provider for them. I’m shocked at how simple it is to use.

Here is a demo of how we implemented it for Daybed, a form validation and data storage API.
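Roughly, the wiring looks like the sketch below. It assumes pyramid_hawkauth exposes a HawkAuthenticationPolicy that accepts the decode_hawk_id hook described next; treat it as an outline rather than the exact Daybed code.

from pyramid.authorization import ACLAuthorizationPolicy
from pyramid.config import Configurator
from pyramid_hawkauth import HawkAuthenticationPolicy


def get_hawk_id(request, tokenid):
    # Look up the key for this token id however you store your tokens;
    # a dict in the registry settings is used here purely for illustration.
    token_key = request.registry.settings["hawk_tokens"][tokenid]
    return tokenid, token_key


def main(global_config, **settings):
    config = Configurator(settings=settings)
    config.set_authentication_policy(
        HawkAuthenticationPolicy(decode_hawk_id=get_hawk_id))
    config.set_authorization_policy(ACLAuthorizationPolicy())
    return config.make_wsgi_app()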

The get_hawk_id function takes a request and a token id and returns a tuple of (token_id, token_key).

How you want to store the tokens and retrieve them is up to you. The default implementation (e.g. if you don’t pass a decode_hawk_id function) decodes the key from the token itself, using a master secret on the server (so you don’t need to store anything).

Integrate with Node.js Express apps

We had to implement Hawk authentication for two Node.js projects and ended up factoring everything out into a library for Express, named express-hawkauth.

In order to plug it into your application, you’ll need to use it as a middleware.

If you pass the createSession parameter, all non-authenticated requests will create a new hawk session and return it with the response, in the Hawk-Session-Token header.

If you want to only check a valid hawk session exists (without creating a new one), just create a middleware which doesn’t have any createSession parameter defined.

Some reference implementations

As a reference, here is how we’re using the libraries I’ve been talking about, in case that helps you integrate them into your projects.

Transitioning Legacy Sync users to Firefox Accounts

It has been almost a year since the release of Firefox Accounts and the new Firefox Sync service, and the response from users has been very positive.  The simplified setup process has made it easier to get started with the system, to connect new devices, and to recover data if a device is lost — all of which has led to the new system quickly gathering more active daily users than its predecessor.

During this time we have kept the legacy Sync infrastructure in place and working as usual, so that users who had set up Sync on older versions of Firefox would not be disrupted.  As we begin 2015 with renewed focus on delivering cloud-based services that support the Mozilla mission, it’s time to help transition these users to the new Sync system and to Firefox Accounts.

Users on legacy Sync will be prompted to upgrade to a Firefox Account beginning with Firefox 37, scheduled for release in early April.  There is a guided UI to make the experience as seamless as possible, and once complete the upgrade will be automatically and securely propagated to all connected devices.  Users on older versions of Firefox will see a warning notice and will be able to upgrade manually.

The legacy Sync servers will remain available during this time to help ensure a smooth transition.  We will monitor their ongoing use and decide on a timeline for decommissioning the hardware based on the success of our transition strategy.

We’re looking forward to introducing more users to the improved Sync system, and to rolling out more services for your Firefox Account in 2015.  Don’t hesitate to reach us on sync-dev@mozilla.org or in #sync on IRC with any questions or comments.


FAQ


  •  When will the legacy Sync servers be switched off?

We expect to decommission this infrastructure before the end of 2015, but no firm date has been set.  This decision will be based on ongoing monitoring of its use and the success of our transition strategy.

  •  What about Extended Support Release (ESR)?

Users on the ESR channel will start seeing the upgrade prompts in Firefox 38, which is scheduled for release in early May.  We are committed to maintaining the legacy sync infrastructure until previous ESR versions reach end-of-life on August 4.

  • Will my data be automatically migrated to the new servers?

No, the strong encryption used by both Sync systems means that we cannot automatically migrate your data on the server.  Once the account upgrade process is complete, Firefox will re-upload your data to the new system (so if you have a lot of bookmarks, you may want to ensure you’re on a reliable network connection).

  •  Are there security considerations when upgrading to the new system?

Both the new and old Sync systems provide industry-leading security for your data: end-to-end encryption of all synced data, using a key known only to you.

In legacy Sync this was achieved by using a complicated pairing flow to transfer the encryption key between devices.  With Firefox Accounts we have replaced this with a key securely derived from your account password.  Pick a strong password and you can remain confident that your synced data will only be seen by you.

  •  Is the new Sync system compatible with Firefox’s master password feature?

Yes.  There was a limitation in previous versions of Firefox that prevented Sync from working when a master password was enabled, but this has since been resolved.  Sync is fully compatible with the master password feature in the latest version of Firefox.

  •  What if I am running a custom or self-hosted Sync server?

This transition affects only the default Mozilla-hosted servers.  If you are using a custom or self-hosted server setup, Sync should continue to work uninterrupted and you will not be prompted to upgrade.

Although we have no plans or timeline for doing so, it’s likely that support for the legacy Sync protocol will be entirely removed from Firefox at some point in the future.  You should consider migrating your server infrastructure to use the new protocols; see below.

  •  Can I self-host the new system?

Yes, either by hosting just the storage servers or by running a full Firefox Accounts stack.  We welcome feedback and contributions on making this process easier.

  •  What if I’m using a different browser (e.g. SeaMonkey, Pale Moon, …)?

You should consider hosting your own server to ensure uninterrupted functionality.