Sending VAPID identified WebPush Notifications via Mozilla’s Push Service

Introduction

The Web Push API provides the ability to deliver real-time events (including data) from application servers (app servers) to their client-side counterparts (applications), without any interaction from the user. In other parts of our Push documentation we provide a general reference for the application API and a basic client usage tutorial. This document addresses the app server portion in detail: how to integrate VAPID and Push message encryption into your server effectively, and how to avoid common issues.

Update: There have been several updates since this document was originally published. Where appropriate, I’ve corrected things to reflect state as of April 25th, 2017.
Note: Much of this document presumes you're familiar with programming and have done some light work in cryptography. Unfortunately, since this is new technology, there aren't many libraries available that make sending messages painless and easy. As new libraries come out, we'll add pointers to them, but for now, we're going to spend time talking about how to do the encryption so that folks who need it, or want to build those libraries, can understand enough to be productive.

Bear in mind that Push is not meant to replace richer messaging technologies like Google Cloud Messaging (GCM), the Apple Push Notification service (APNs), or Microsoft's Windows Notification System (WNS). Each has its benefits and costs, and it's up to you as developers or architects to determine which system solves your particular set of problems. Push is simply a low-cost, easy means to send data to your application.

Push Summary


The Push system looks like:
[Diagram of the push process flow]

Application: The user-facing part of the program that interacts with the browser in order to request a Push Subscription, and receive Subscription Updates.
Application Server: The back-end service that generates Subscription Updates for delivery across the Push Server.
Push: The system responsible for delivery of events from the Application Server to the Application.
Push Server: The server that handles the events and delivers them to the correct Subscriber. Each browser vendor has its own Push Server to handle subscription management. For instance, Mozilla uses autopush.
Subscription: A user request for timely information about a given topic or interest, which involves the creation of an Endpoint to deliver Subscription Updates to. Sometimes also referred to as a "channel".
Endpoint: A specific URL that can be used to send a Push Message to a specific Subscriber.
Subscriber: The Application that subscribes to Push in order to receive updates, or the user who instructs the Application to subscribe to Push, e.g. by clicking a "Subscribe" button.
Subscription Update: An event sent to Push that results in a Push Message being received from the Push Server.
Push Message: A message sent from the Application Server to the Application, via a Push Server. This message can contain a data payload.

The main parts that are important to Push from a server-side perspective are as follows; we'll cover all of these points in detail below.

Identifying Yourself


Mozilla goes to great lengths to respect privacy, but sometimes, identifying your feed can be useful.

Mozilla offers the ability for you to identify your feed content, which is done using the Voluntary Application Server Identification for Web Push (VAPID) specification. This is a set of header values you pass with every subscription update. One value is a VAPID key that validates your VAPID claim, and the other is the VAPID claim itself: a set of metadata describing and defining the current subscription and where it originated.

VAPID is only useful between your servers and our push servers. If we notice something unusual about your feed, VAPID gives us a way to contact you so that things can go back to running smoothly. In the future, VAPID may also offer additional benefits like reports about your feeds, automated debugging help, or other features.

In short, VAPID is a bit of JSON that contains an email address to contact you, an optional URL that's meaningful about the subscription, and a timestamp. I'll talk about the timestamp later, but really, think of VAPID as the stuff you'd want us to have to help you figure out what went wrong.

It may be that you only send one feed, and just need a way for us to tell you if there’s a problem. It may be that you have several feeds you’re handling for customers of your own, and it’d be useful to know if maybe there’s a problem with one of them.

Generating your VAPID key

The easiest way to do this is to use an existing library for your language. VAPID is a new specification, so not all languages may have existing libraries.
Currently, we’ve collected several libraries under https://github.com/web-push-libs/vapid and are very happy to learn about more.

Fortunately, the method to generate a key is fairly easy, so you could implement your own library without too much trouble.

The first requirement is an Elliptic Curve Diffie Hellman (ECDH) library capable of working with Prime 256v1 (also known as "p256" or similar) keys. For many systems, the OpenSSL package provides this feature; check that your version supports ECDH and Prime 256v1. If not, you may need to download, compile and link the library yourself.

At this point you should generate an EC key for your VAPID identification. Please remember that you should NEVER reuse the VAPID key as the data encryption key you'll need later. To generate an EC key using openssl, enter the following command in your Terminal:

openssl ecparam -name prime256v1 -genkey -noout -out vapid_private.pem

This will create an EC private key and write it into vapid_private.pem. It is important to safeguard this private key. While you can always generate a replacement key that will work, Push (or any other service that uses VAPID) will recognize the different key as a completely different user.

You'll need to send the public key as one of the headers. This can be extracted from the private key with the following terminal command:

openssl ec -in vapid_private.pem -pubout -out vapid_public.pem
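
If you'd rather stay in Python than shell out to openssl, here is a minimal sketch of the same key generation using the "ecdsa" library (the same one used in the signing example later in this document). The file names simply mirror the openssl examples above:

   import ecdsa

   # Generate a new P-256 key for VAPID and write both halves out as PEM,
   # mirroring the two openssl commands above.
   my_key = ecdsa.SigningKey.generate(curve=ecdsa.NIST256p)
   with open("vapid_private.pem", "wb") as f:
       f.write(my_key.to_pem())
   with open("vapid_public.pem", "wb") as f:
       f.write(my_key.get_verifying_key().to_pem())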

Creating your VAPID claim

VAPID uses JWT to contain a set of information (or "claims") that describe the sender of the data. JWTs (JSON Web Tokens) are a pair of JSON objects, turned into base64 strings, and signed with the private key you just made. A JWT element contains three parts separated by ".", and may look like:

eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0.uyVNHws2F3k5jamdpsH2RTfhI3M3OncskHnTHnmdo0hr1ZHZFn3dOnA-42YTZ-u8_KYHOOQm8tUm-1qKi39ppA

  1. The first element is a "header" describing the JWT object. This JWT header is always the same: the static string {"typ":"JWT","alg":"ES256"}, which is URL-safe base64 encoded to eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9. For VAPID, this string should always be the same value.
  2. The second element is a JSON dictionary containing a set of claims. For our example, we’ll use the following claims:
    {
     "sub": "mailto:admin@example.com",
     "aud": "https://push.services.mozilla.com",
     "exp": "1463001340"
    }

    The required claims are as follows:

    1. sub: The "Subscriber", a mailto link for the administrative contact for this feed. It's best if this email is not a personal email address, but rather a group email, so that if a person leaves an organization, is unavailable for an extended period, or otherwise can't respond, someone else on the list can. Mozilla will only use this if we notice a problem with your feed and need to contact you.
    2. aud: The "Audience" is a JWT construct that indicates the recipient scheme and host (e.g. for an endpoint like https://updates.push.services.mozilla.com/wpush/v1/gAAAAABY..., the "aud" would be https://updates.push.services.mozilla.com). Some push services will require this field. Please consult your documentation.
    3. exp: "Expires", an integer giving the date and time that this VAPID header should remain valid until. It doesn't reflect how long your VAPID signature key should be valid, just this specific update. Normally this value is fairly short, usually the current UTC time plus no more than 24 hours. A long-lived VAPID header introduces a potential "replay" attack risk, since the VAPID headers could be reused for a different subscription update with potentially different content.

     

    Feel free to add additional items to your claims. This info really should be the sort of thing you'd want to get at 3AM when your server starts acting funny. For instance, you may run many AWS EC2 instances, and one might be acting up. It might be a good idea to include the AMI-ID of that instance (e.g. "aws_id":"i-5caba953"). You might be acting as a proxy for some other customer, so adding a customer ID could be handy. Just remember that you should respect privacy and use an ID like "abcd-12345" rather than "Mr. Johnson's Embarrassing Bodily Function Assistance Service". Also remember to keep the data fairly short so that there aren't problems with intermediate services rejecting it because the headers are too big.

Once you've composed your claims, you need to convert them to a JSON formatted string, ideally with no padding space between elements[1], for example:

{"sub":"mailto:admin@example.com","aud":"https://push.services.mozilla.com","exp":"1463087677"}

Then convert this string to a URL-safe base64-encoded string, with the padding ‘=’ removed. For example, if we were to use python:


   import base64
   import json

   # These are the claims
   claims = {"sub": "mailto:admin@example.com",
             "aud": "https://push.services.mozilla.com",
             "exp": "1463001340"}
   # Convert the claims to compact JSON, encode as URL-safe base64,
   # and strip the "=" padding.
   body = base64.urlsafe_b64encode(
       json.dumps(claims, separators=(",", ":"))).strip("=")
   print body

would give us

eyJhdWQiOiJodHRwczovL3B1c2guc2VydmljZXMubW96aWxsYS5jb20iLCJzdWIiOiJtYWlsdG86YWRtaW5AZXhhbXBsZS5jb20iLCJleHAiOiIxNDYzMDAxMzQwIn0

This is the “body” of the JWT base string.

The header and the body are separated with a ‘.’ making the JWT base string.

eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJhdWQiOiJodHRwczovL3B1c2guc2VydmljZXMubW96aWxsYS5jb20iLCJzdWIiOiJtYWlsdG86YWRtaW5AZXhhbXBsZS5jb20iLCJleHAiOiIxNDYzMDAxMzQwIn0

  • The final element is the signature. This is an ECDSA signature of the JWT base string created using your VAPID private key. This signature is URL-safe base64 encoded, "=" padding removed, and again joined to the base string with a "." delimiter.

    Generating the signature depends on your language and library, but is done by the ecdsa algorithm using your private key. If you’re interested in how it’s done in Python or Javascript, you can look at the code in https://github.com/web-push-libs/vapid.

    Since your private key will not match the one we’ve generated, the signature you see in the last part of the following example will be different.

    eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJhdWQiOiJodHRwczovL3B1c2guc2VydmljZXMubW96aWxsYS5jb20iLCJzdWIiOiJtYWlsdG86YWRtaW5AZXhhbXBsZS5jb20iLCJleHAiOiIxNDYzMDAxMzQwIn0.y_dvPoTLBo60WwtocJmaTWaNet81_jTTJuyYt2CkxykLqop69pirSWLLRy80no9oTL8SDLXgTaYF1OrTIEkDow
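
    If you're working in Python with the "ecdsa" library, a rough sketch of producing this third element might look like the following. Here jwt_base_string is assumed to hold the "header.body" string shown above:

    import base64
    import hashlib

    import ecdsa
    from ecdsa.util import sigencode_string

    signing_key = ecdsa.SigningKey.from_pem(open("vapid_private.pem").read())
    # JWS ES256 signatures are the raw 64-octet r||s pair, so we use
    # sigencode_string rather than the default DER encoding.
    signature = signing_key.sign(
        jwt_base_string,
        hashfunc=hashlib.sha256,
        sigencode=sigencode_string)
    token = jwt_base_string + "." + base64.urlsafe_b64encode(signature).strip("=")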

  • Forming your headers


    The VAPID claim you assembled in the previous section needs to be sent along with your Subscription Update as an Authorization header vapid token. This is a compound value consisting of the JWT token you just created (designated by t=) and the VAPID public key (designated by k=), formatted as a URL-safe, base64 encoded DER string of the raw key.
    If you like, you can cheat here and use the content of "vapid_public.pem". You'll need to remove the "-----BEGIN PUBLIC KEY-----" and "-----END PUBLIC KEY-----" lines, remove the newline characters, and convert all "+" to "-" and "/" to "_" (a sketch of this clean-up follows the example below).
    The complete token should look like so:

    Authorization: vapid t=eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJhdWQiOiJodHRwczovL3B1c2guc2VydmljZXMubW96aWxsYS5jb20iLCJzdWIiOiJtYWlsdG86YWRtaW5AZXhhbXBsZS5jb20iLCJleHAiOiIxNDYzMDAxMzQwIn0.y_dvPoTLBo60WwtocJmaTWaNet81_jTTJuyYt2CkxykLqop69pirSWLLRy80no9oTL8SDLXgTaYF1OrTIEkDow,k=BAS7pgV_RFQx5yAwSePfrmjvNm1sDXyMpyDSCL1IXRU32cdtopiAmSysWTCrL_aZg2GE1B_D9v7weQVXC3zDmnQ

    Note: the header should not contain line breaks. Those have been added here to aid in readability.
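
    A rough sketch of that clean-up in Python:

    # Drop the BEGIN/END marker lines, join the base64 body, and convert
    # it to the URL-safe alphabet with the "=" padding removed.
    with open("vapid_public.pem") as f:
        lines = f.read().strip().splitlines()
    k = "".join(lines[1:-1]).replace("+", "-").replace("/", "_").rstrip("=")
    print("k=" + k)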

    You can validate your work against the VAPID test page, which will tell you if your headers are properly encoded. In addition, the VAPID repo contains libraries for JavaScript and Python to handle this process for you.

    We’re happy to consider PRs to add libraries covering additional languages.

    Receiving Subscription Information


    Your Application will receive an endpoint and key retrieval functions that contain all the info you’ll need to successfully send a Push message. See Using the Push API for details about this. Your application should send this information, along with whatever additional information is required, securely to the Application Server as a JSON object.

    Such a post back to the Application Server might look like this:

    
    {
        "customerid": "123456",
        "subscription": {
            "endpoint": "https://updates.push.services.mozilla.com/push/v1/gAAAA…",
            "keys": {
                "p256dh": "BOrnIslXrUow2VAzKCUAE4sIbK00daEZCswOcf8m3TF8V…",
                "auth": "k8JV6sjdbhAi1n3_LDBLvA"
             }
        },
        "favoritedrink": "warm milk"
    }
    

    In this example, the “subscription” field contains the elements returned from a fulfilled PushSubscription. The other elements represent additional data you may wish to exchange.

    How you decide to exchange this information is completely up to your organization. You are strongly advised to protect this information. If an unauthorized party gained this information, they could send messages pretending to be you. This can be made more difficult by using a “Restricted Subscription”, where your application passes along your VAPID public key as part of the subscription request. A restricted subscription can only be used if the subscription carries your VAPID information signed with the corresponding VAPID private key. (See the previous section for how to generate VAPID signatures.)

    Subscription information is subject to change and should be considered “opaque”. You should consider the data to be a “whole” value and associate it with your user. For instance, attempting to retain only a portion of the endpoint URL may lead to future problems if the endpoint URL structure changes. Key data is also subject to change. The app may receive an update that changes the endpoint URL or key data. This update will need to be reflected back to your server, and your server should use the new subscription information as soon as possible.

    Sending a Subscription Update Without Data


    Subscription Updates come in two varieties: data free and data bearing. We’ll look at these separately, as they have differing requirements.

    Data Free Subscription Updates

    Data free updates require no additional App Server processing, however your Application will have to do additional work in order to act on them. Your application will simply get a "push" message containing no further information, and it may have to connect back to your server to find out what the update is. It is useful to think of Data Free updates like a doorbell: "Something wants your attention."

    To send a Data Free Subscription Update, you POST to the subscription endpoint. In the following example, we'll include the VAPID header information. Values have been truncated for presentation readability.

    
    curl -v -X POST\
      -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJhdWQiOiJodHR…"\
      -H "Crypto-Key: p256ecdsa=BA5vkyMXVfaKuehJuecNh30-NiC7mT9gM97Op5d8LiAKzfIezLzC…"\
      -H "TTL: 0"\
      "https://updates.push.services.mozilla.com/push/v1/gAAAAABXAuZmKfEEyPbYfLXqtPW…"
    

    This should result in an Application getting a “push” message, with no data associated with it.
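
    The same data-free update could be sent from Python with the "requests" library. In this sketch, endpoint, jwt_token, and vapid_public_key are stand-ins for the values shown truncated in the curl example:

    import requests

    resp = requests.post(
        endpoint,  # the subscription endpoint URL
        headers={
            "Authorization": "Bearer " + jwt_token,         # the signed VAPID JWT
            "Crypto-Key": "p256ecdsa=" + vapid_public_key,  # your VAPID public key
            "TTL": "0",
        })
    print(resp.status_code)  # 201 on success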

    To see how to store the Push Subscription data and send a Push Message using a simple server, see our Using the Push API article.

    Data Bearing Subscription Updates


    Data Bearing updates are a lot more interesting, but do require significantly more work. This is because we treat our own servers as potentially hostile and require "end-to-end" encryption of the data: a message you send across the Mozilla Push Server cannot be read by the server. To ensure privacy, however, your application will not receive data that cannot be decoded by the User Agent.

    There are libraries available for several languages at https://github.com/web-push-libs, and we’re happy to accept or link to more.

    The encryption method used by Push is Elliptic Curve Diffie Hellman (ECDH) encryption, which uses a key derived from two pairs of EC keys. If you’re not familiar with encryption, or the brain twisting math that can be involved in this sort of thing, it may be best to wait for an encryption library to become available. Encryption is often complicated to understand, but it can be interesting to see how things work.

    If you’re familiar with Python, you may want to just read the code for the http_ece package. If you’d rather read the original specification, that is available. While the code is not commented, it’s reasonably simple to follow.

    Data encryption summary

    Update: This section describes the ECE Draft-03 "aesgcm" encryption encoding. This encoding is currently the most widely supported, however it is out of date. The core principles remain the same across versions; only the final formatting differs. There is some effort being made to have the UA report back the forms of encrypted content it supports, however that may not be available at time of publication. You're strongly encouraged to use an ECE library when possible.
    • Octet: An 8-bit byte of data (between \x00 and \xFF).
    • Subscription data: The subscription data to encode and deliver to the Application.
    • Endpoint: The Push service endpoint URL, received as part of the Subscription data.
    • Receiver key: The p256dh key received as part of the Subscription data.
    • Auth key: The auth key received as part of the Subscription data.
    • Payload: The data to encrypt, which can be any streamable content between 2 and 4096 octets.
    • Salt: A 16-octet array of random octets, unique per subscription update.
    • Sender key: A new ECDH key pair, unique per subscription update.

    Web Push limits the size of the data you can send to between 2 and 4096 octets. You can send larger data as multiple segments, however that can be very complicated. It’s better to keep segments smaller. Data, whatever the original content may be, is also turned into octets for processing.

    Each subscription update requires two unique items: a salt and a sender key. The salt is a 16 octet array of random octets. The sender key is an ECDH key pair generated for this subscription update. It's important that neither the salt nor sender key be reused for future encrypted data payloads. This does mean that each Push message needs to be uniquely encrypted.

    The receiver key is the public key from the client’s ECDH pair. It is base64, URL safe encoded and will need to be converted back into an octet array before it can be used. The auth key is a shared “nonce”, or bit of random data like the salt.
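
    Both of those values arrive with their base64 "=" padding stripped, so the padding must be restored before decoding. A small helper, which is the "exercise for the reader" referenced in the code further below:

    import base64

    def repad(b64_string):
        # base64 strings must be a multiple of 4 characters long.
        return b64_string + "=" * ((4 - len(b64_string) % 4) % 4)

    receiver_raw = base64.urlsafe_b64decode(repad(subscription['keys']['p256dh']))
    auth = base64.urlsafe_b64decode(repad(subscription['keys']['auth']))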

    Emoji Based Diagram

    Subscription Data:
    🎯 Endpoint
    🔒 Receiver key ('p256dh')
    💩 Auth key ('auth')

    Per Update Info:
    🔑 Private sender key (the code below calls this server_key)
    🗝 Public sender key
    📎 Salt

    Update:
    📄 Payload

    Derived values (🏭 = build using / derive):
    🔐 Raw ECDH shared secret
    🙊 Common secret
    🔓 Message encryption key
    🎲 Message nonce

    Encryption uses a fabricated key and nonce. We’ll discuss how the actual encryption is done later, but for now, let’s just create these items.

    Creating the Encryption Key and Nonce

    The encryption relies heavily on HKDF (HMAC-based Key Derivation Function) using a SHA-256 hash.

    Creating the secret

    The first HKDF function you'll need generates the common secret (🙊), a 32 octet value derived by using the auth key (💩) as the salt and the string "Content-Encoding: auth\x00" as the info parameter, run over the raw ECDH shared secret (🔐).
    So, in emoji =>
    🔐 = 🔑(🔒);
    🙊 = HKDF(💩, "Content-Encoding: auth\x00").🏭(🔐)

    An example function in Python could look like so:

    
    import base64

    import pyelliptic
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # receiver_key must have "=" padding added back before it can be
    # decoded (see the repad helper above).
    # 🔒
    receiver_key = subscription['keys']['p256dh']
    # 🔑
    server_key = pyelliptic.ECC(curve="prime256v1")
    # 🔐
    sender_key = server_key.get_ecdh_key(base64.urlsafe_b64decode(receiver_key))

    # 🙊
    secret = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=auth,
        info="Content-Encoding: auth\0").derive(sender_key)
    

    The encryption key and encryption nonce

    The next items you’ll need to create are the encryption key and encryption nonce.

    An important component of these is the context, which is:

    • A string comprised of ‘P-256’
    • Followed by a NULL (“\x00”)
    • Followed by a network ordered, two octet integer of the length of the decoded receiver key
    • Followed by the decoded receiver key
    • Followed by a network ordered, two octet integer of the length of the public half of the sender key
    • Followed by the public half of the sender key.

    As an example, if we have an example, decoded, completely invalid public receiver key of ‘RECEIVER’ and a sample sender key public key example value of ‘sender’, then the context would look like:

    # ⚓ (because it's the base and because there are only so many emoji)
    root = "P-256\x00\x00\x08RECEIVER\x00\x06sender"

    The “\x00\x08” is the length of the bogus “RECEIVER” key, likewise the “\x00\x06” is the length of the stand-in “sender” key. For real, 32 octet keys, these values will most likely be “\x00\x20” (32), but it’s always a good idea to measure the actual key rather than use a static value.
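
    As a sketch, the context could be assembled like so. Here receiver_raw is the decoded receiver key from earlier, and sender_public is assumed to hold the raw, uncompressed ("\x04"-prefixed) public half of the sender key; how you extract that depends on your ECDH library:

    import struct

    context = ("P-256\x00" +
               struct.pack("!H", len(receiver_raw)) + receiver_raw +
               struct.pack("!H", len(sender_public)) + sender_public)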

    The context string is used as the base for two more HKDF derived values, one for the encryption key, and one for the encryption nonce. In python, these functions could look like so:

    In emoji:
    🔓 = HKDF(💩, "Content-Encoding: aesgcm\x00" + ⚓).🏭(🙊)
    🎲 = HKDF(💩, "Content-Encoding: nonce\x00" + ⚓).🏭(🙊)

    
    #🔓
    encryption_key = HKDF(
        hashes.SHA256(),
        length=16,
        salt=salt,
        info="Content-Encoding: aesgcm\x00" + context).derive(secret)
    
    # 🎲
    encryption_nonce = HKDF(
        hashes.SHA256(),
        length=12,
        salt=salt,
        info="Content-Encoding: nonce\x00" + context).derive(secret)
    

    Note that the encryption_key is 16 octets and the encryption_nonce is 12 octets. Also note the null (\x00) character between the “Content-Encoding” string and the context.

    At this point, you can start working your way through encrypting the data 📄, using your secret 🙊, encryption_key 🔓, and encryption_nonce 🎲.

    Encrypting the Data

    The function that does the encryption (encryptor) uses the encryption_key 🔓 to initialize the Advanced Encryption Standard (AES) function, and derives the Galois/Counter Mode (GCM) Initialization Vector (IV) from the encryption_nonce 🎲 plus the data chunk counter. (If you didn't follow that, don't worry. There's a code snippet below that shows how to do it in Python.) For simplicity, we'll presume your data is less than 4096 octets (4K bytes) and can fit within one chunk.
    The IV takes the encryption_nonce and XOR’s the chunk counter against the final 8 octets.

    
    import struct

    def generate_iv(nonce, counter):
        (mask,) = struct.unpack("!Q", nonce[4:])  # get the final 8 octets of the nonce
        iv = nonce[:4] + struct.pack("!Q", counter ^ mask)  # the first 4 octets of nonce,
                                                            # plus the XOR'd counter
        return iv
    

    The encryptor prefixes a "\x00\x00" to the data chunk, processes it completely, and then concatenates its encryption tag to the end of the completed chunk. The encryption tag is a 16-octet verification value that GCM produces for each encrypted block. See your language's documentation for AES-GCM encryption for further information.

    
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_chunk(chunk, counter, encryption_nonce, encryption_key):
        encryptor = Cipher(algorithms.AES(encryption_key),
                           modes.GCM(generate_iv(encryption_nonce,
                                                 counter))).encryptor()
        return (encryptor.update(b"\x00\x00" + chunk) +
                encryptor.finalize() +
                encryptor.tag)

    def encrypt(payload, encryption_nonce, encryption_key):
        result = b""
        counter = 0
        # "+ 2" accounts for the two octet padding prefix added to each chunk.
        for i in range(0, len(payload) + 2, 4096):
            result += encrypt_chunk(
                payload[i:i + 4096],
                counter,
                encryption_nonce,
                encryption_key)
            counter += 1
        return result
    

    Sending the Data

    Encrypted payloads need several headers in order to be accepted.

    The Crypto-Key header is a composite field, meaning that several items can be stored in it. There are some rules about how things should be stored, but we can simplify and just separate each item with a semicolon ";". In our case, we're going to store three things: a "keyid", "p256ecdsa" and "dh".

    “keyid” is the string “p256dh”. Normally, “keyid” is used to link keys in the Crypto-Key header with the Encryption header. It’s not strictly required, but some push servers may expect it and reject subscription updates that do not include it. The value of “keyid” isn’t important, but it must match between the headers. Again, there are complex rules about these that we’re safely ignoring, so if you want or need to do something complex, you may have to dig into the Encrypted Content Encoding specification a bit.

    “p256ecdsa” is the public key used to sign the VAPID header (See Forming your Headers). If you don’t want to include the optional VAPID header, you can skip this.

    The "dh" value is the public half of the sender key we used to encrypt the data. It's the same value contained in the context string, so we'll use the same fake, stand-in value of "sender", which has been encoded as a base64, URL-safe value. For our example, the base64 encoded version of the string 'sender' is 'c2VuZGVy'.

    Crypto-Key: p256ecdsa=BA5v…;dh=c2VuZGVy;keyid=p256dh

    The Encryption Header contains the salt value we used for encryption, which is a random 16 byte array converted into a base64, URL safe value.

    Encryption: keyid=p256dh;salt=cm5kIDE2IGJ5dGUgc2FsdA
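
    The salt in this header must be the same one fed into the HKDF calls earlier. Producing it and its header value might look like:

    import base64
    import os

    salt = os.urandom(16)  # 16 random octets, generated fresh per update
    encryption_header = "keyid=p256dh;salt=" + base64.urlsafe_b64encode(salt).strip("=")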

    The TTL Header is the number of seconds the notification should stay in storage if the remote user agent isn't actively connected. "0" (zero) means that the notification is discarded immediately if the remote user agent is not connected; this is the default. This header must be specified, even if the value is "0".

    TTL: 0

    Finally, the Content-Encoding Header specifies that this content is encoded to the aesgcm standard.

    Content-Encoding: aesgcm

    The encrypted data is set as the Body of the POST request to the endpoint contained in the subscription info. If you have requested that this be a restricted subscription and passed your VAPID public key as part of the request, you must include your VAPID information in the POST.

    As an example, in python:

    import requests

    headers = {
        'crypto-key': 'p256ecdsa=BA5v…;dh=c2VuZGVy;keyid=p256dh',
        'content-encoding': 'aesgcm',
        'encryption': 'keyid=p256dh;salt=cm5kIDE2IGJ5dGUgc2FsdA',
        'ttl': '0',  # header values should be strings
    }
    requests.post(subscription_info['subscription']['endpoint'],
                  data=encrypted_data,
                  headers=headers)
    

    A successful POST will return a response of 201; however, if the User Agent cannot decrypt the message, your application will not get a "push" message. This is because the Push Server cannot decrypt the message, so it has no idea if it is properly encoded. You can see if this is the case by:

    • Going to about:config in Firefox
    • Setting the dom.push.loglevel pref to debug
    • Opening the Browser Console (located under the Tools > Web Developer > Browser Console menu).

    When your message fails to decrypt, you'll see a message similar to the following:
    [Screenshot: the Browser Console displaying "The service worker for scope 'https://mozilla-services.github.io/WebPushDataTestPage/' encountered an error decrypting the push message", with details and a pointer to more info]

    You can use values displayed in the Web Push Data Encryption Page to audit the values you’re generating to see if they’re similar. You can also send messages to that test page and see if you get a proper notification pop-up, since all the key values are displayed for your use.

    You can find out what errors and error responses we return, and their meanings by consulting our server documentation.

    Subscription Updates


    Nothing (other than entropy) lasts forever. There may come a point where, for various reasons, you will need to update your user’s subscription endpoint. There are any number of reasons for this, but your code should be prepared to handle them.

    Your application's service worker will get an onpushsubscriptionchange event. At this point, the previous endpoint for your user is now invalid and a new endpoint will need to be requested. Basically, you will need to re-invoke the method for requesting a subscription endpoint. The user should not be alerted of this, and a new endpoint will be returned to your app.

    Again, how your app identifies the customer, joins the new endpoint to the customer ID, and securely transmits this change request to your server is left as an exercise for the reader. It's worth noting that the Push server may return an error of 410 with an errno of 103 when the push subscription expires or is otherwise made invalid. (If a push subscription expired several months ago, the server may return a different errno value.)

    Conclusion

    Push Data Encryption can be very challenging, but worthwhile. Stronger encryption means that it is more difficult for someone to impersonate you, or for your data to be read by unintended parties. Eventually, we hope that much of this pain will be buried in libraries that allow you to simply call a function, and as this specification is more widely adopted, it's fair to expect multiple libraries to become available for every language.

    See also:

    1. WebPush Libraries: A set of libraries to help encrypt and send push messages.
    2. VAPID lib for python or javascript can help you understand how to encode VAPID header data.

    Footnotes:

    1. Technically, you don’t need to strip the whitespace from JWS tokens. In some cases, JWS libraries take care of that for you. If you’re not using a JWS library, it’s still a very good idea to make headers and header lines as compact as possible. Not only does it save bandwidth, but some systems will reject overly lengthy header lines. For instance, the library that autopush uses limits header line length to 16,384 bytes. Minus things like the header, signature, base64 conversion and JSON overhead, you’ve got about 10K to work with. While that seems like a lot, it’s easy to run out if you do things like add lots of extra fields to your VAPID claim set.

    Stolen Passwords Used to Break into Firefox Accounts

    We recently discovered a pattern of suspicious logins to Firefox Accounts. It appears that an attacker with access to passwords from data breaches at other websites has been attempting to use those passwords to log into users’ Firefox Accounts. In some cases where a user reused their Firefox Accounts password on another website and that website was breached, the attacker was able to take the password from the breach and use it to log into the user’s Firefox Account.

    We’ve already taken steps to protect our users. We automatically reset the passwords of users whose accounts were broken into. We also notified these users with instructions on how to regain access to their accounts and further protect themselves going forward. This investigation is ongoing and we will notify users if we discover unauthorized activity on their account.

    User security is paramount to us. It is part of our mission to help build an Internet that truly puts people first and where individuals are empowered, safe and independent. Events like this are a good reminder of the importance of good password hygiene. Using strong and different passwords across websites is important for staying safe online. We’re also working on additional rate-limiting and other security mechanisms to provide additional protection for our users.

    Using VAPID with WebPush

    This post continues discussion about using the evolving WebPush feature.
    Updated Jul, 21st 2016 to fix incorrect usage of “aud” in VAPID header
    Updated Oct, 6th 2020 to correct push server URL

    One of the problems with offering a service that doesn’t require identification is that it’s hard to know who’s responsible for something. For instance, if a consumer is having problems, or not using the service correctly, it is a challenge to contact them. One option is to require strong identification to use the service, but there are plenty of reasons to not do that, notably privacy.

    The answer is to have each publisher optionally identify themselves, but how do we prevent everyone from saying that they’re something popular like “CatFacts”? The Voluntary Application Server Identification for Web Push (VAPID) protocol was drafted to try and answer that question.

    Making a claim

    VAPID uses JSON Web Tokens (JWT) to carry identifying information. The core of the VAPID transaction is called a “claim”. A claim is a JSON object containing several common fields. It’s best to explain using an example, so let’s create a claim from a fictional CatFacts service.
    {
        "aud": "https://updates.push.services.mozilla.com",
        "exp": 1458679343,
        "sub": "mailto:webpush_ops@catfacts.example.com"
    }

    aud
    The “audience” is the destination URL of the push service.
    exp
    The “expiration” date is the UTC time in seconds when the claim should expire. This should not be longer than 24 hours from the time the request is made. For instance, in Javascript you can use: Math.floor(Date.now() * .001 + 86400).
    sub
    The “subscriber” is the primary contact email for this subscription. You’ll note that for this, we’ve used a generic email alias rather than a specific person. This approach allows multiple people to be alerted, or assigning a new person without having to change code.

    Note: JWT allows for additional fields to be provided; the above are just the "known" set. Feel free to add any additional information that you may want to include when the Push Service needs to contact you (e.g. AMI-ID of the publishing machine, original customer ID, etc.). Be advised that you have a limited amount of information you can put in your headers. For instance, Apache allows only 4K for all header information. Keeping items short is to your benefit.

    I’ve added spaces and new lines to make things more readable. JWT objects normally strip those out.

    Signing and Sealing

    A JWT object actually has three parts: a standard header, the claim (which we just built), and a signature.

    The header is very simple and is standard to any VAPID JWT object.

    {"typ": "JWT","alg":"ES256"}

    If you’re curious, typ is the “type” of object (a “JWT”), and alg is the signing algorithm to use. In our case, we’re using Elliptic Curve Cryptography based on the NIST P-256 curve (or “ES256”).

    We’ve already discussed what goes in the claim, so now, there’s just the signature. This is where things get complicated.

    Here’s code to sign the claim using Python 2.7.

    
    # This Source Code Form is subject to the terms of the Mozilla Public
    # License, v. 2.0. If a copy of the MPL was not distributed with this
    # file, You can obtain one at http://mozilla.org/MPL/2.0/.
    
    import base64
    import time
    import json
    
    import ecdsa
    from jose import jws
    
    
    def make_jwt(header, claims, key):
        vk = key.get_verifying_key()
        jwt = jws.sign(
            claims,
            key,
            algorithm=header.get("alg", "ES256")).strip("=")
        # The "0x04" octet indicates that the key is in the
        # uncompressed form. This form is required by the
        # server and DOM API. Other crypto libraries
        # may prepend this prefix automatically.
        raw_public_key = "\x04" + vk.to_string()
        public_key = base64.urlsafe_b64encode(raw_public_key).strip("=")
        return (jwt, public_key)
    
    
    def main():
        # This is a standard header for all VAPID objects:
        header = {"typ": "JWT", "alg": "ES256"}
    
        # These are our customized claims.
        claims = {"aud": "https://updates.push.services.mozilla.com",
                  "exp": int(time.time()) + 86400,
                  "sub": "mailto:webpush_ops@catfacts.example.com"}
    
        my_key = ecdsa.SigningKey.generate(curve=ecdsa.NIST256p)
        # You can store the private key by writing
        #   my_key.to_pem() to a file.
        # You can reload the private key by reading
        #   my_key.from_pem(file_contents)
    
        (jwt, public_key) = make_jwt(header, claims, my_key)
    
        # Return the headers we'll use.
        headers = {
            "Authorization": "Bearer %s" % jwt,
            "Crypto-Key": "p256ecdsa=%s" % public_key,
        }
    
        print json.dumps(headers, sort_keys=True, indent=4)
    
    
    main()
    

    There's a little bit of cheating here in that I'm using the "python ecdsa" library and JOSE's jws library, but there are similar libraries for other languages. The important bit is that a key pair is created.

    This key pair should be safely retained for the life of the subscription. In most cases, just the private key can be retained since the public key portion can be easily derived from it. You may want to save both private and public keys since we’re working on a dashboard that will use your public key to let you see info about your feeds.
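
    For instance, with the "python ecdsa" library used above, the public half can be re-derived from a stored private key. The file name here is just an example:

    import ecdsa

    # Reload the stored private key and re-derive its public half.
    my_key = ecdsa.SigningKey.from_pem(open("vapid_key.pem").read())
    vk = my_key.get_verifying_key()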

    The output of the above script looks like:

    {
        "Authorization": "Bearer eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJodHRwczovL2NhdGZhY3RzLmV4YW1wbGUuY29tIiwiZXhwIjoxNDU4Njc5MzQzLCJzdWIiOiJtYWlsdG86d2VicHVzaF9vcHNAY2F0ZmFjdHMuZXhhbXBsZS5jb20ifQ.U8MYqcQcwFcK2UkeiISahgZFvaOw56ZQvHYZc4zXC2Ed48-lk3MoYExGagKLwr4lSdbARZEbblAprQfXlap3jw",
        "Crypto-Key": "p256ecdsa=EJwJZq_GN8jJbo1GGpyU70hmP2hbWAUpQFKDByKB81yldJ9GTklBM5xqEwuPM7VuQcyiLDhvovthPIXx-gsQRQ=="
    }
    

    These are the HTTP request headers that you should include in the POST request when you send a message. VAPID uses these headers to identify a subscription.

    The "Crypto-Key" header may contain many sub-components, separated by a semi-colon (";"). You can insert the "p256ecdsa" value, which contains the public key, anywhere in that list. This header is also used to relay the encryption key, if you're sending a push message with data. The JWT is relayed in the "Authorization" header as a "Bearer" token. The server will use the public key to check the signature of the JWT and ensure that it's correct.

    Again, VAPID is purely optional. You don't need it if you want to send messages. Including VAPID information will let us contact you if we see a problem. It will also be used for upcoming features such as restricted subscriptions, which will help minimize issues if the endpoint is ever lost, and the developer dashboard, which will provide you with information about your subscription and some other benefits. We'll discuss those more when the features become available. We've also published a few tools that may help you understand and use VAPID. The Web Push Data Test Page (GitHub Repo) can help library authors develop and debug their code by presenting "known good" values. The VAPID verification page (GitHub Repo) is a simpler, "stand alone" version that can test and generate values.

    As always, your input is welcome.

    Updated to spell out what VAPID stands for.

    TTL Header Requirement Relaxed for WebPush

    A few weeks ago, we rolled out a version of the WebPush server that required a HTTP TTL header to be included with each subscription update. A number of publishers reported issues with the sudden change, and we regret not properly notifying all parties.

    We’re taking steps to solve that problem, including posting here and on Twitter.

    To help make it easier for publishers and programmers to experiment with WebPush, we have temporarily relaxed the TTL requirement. If a recipient’s computer is actively connected, the server will accept and deliver a WebPush notification submitted without a HTTP TTL header. This is similar to the older behavior where updates would be accepted with a presumed TTL value of 0 (zero), because the server was able to immediately deliver the notification. If the recipient is not actively connected and the TTL header is missing, the server will return a HTTP status of 400 and the following body:

    {"status": 400, "errno": 111, "message": "Missing TTL Header, see: https://webpush-wg.github.io/webpush-protocol/#rfc.section.6.2"}

    It’s our hope that this error is descriptive enough to help programmers understand what the error is and how to best resolve it. Of course, we’re always open to suggestions on how we can improve this.

    Also, please note that we consider this a temporary fix. We prefer to stay as close to the standard as possible, and will eventually require a TTL header for all WebPush notifications. We’re keeping an eye on our error logs and once we see the number of incorrect calls fall below a certain percentage of the overall calls, we’ll announce the end of life of this temporary fix.

    WebPush’s New Requirement: TTL Header

    WebPush is a new service where web applications can receive notification messages from servers. WebPush is available in Firefox 45 and later and will be available in Firefox for Android soon. Since it’s a new technology, the WebPush specification continues to evolve. We’ve been rolling out the new service and we saw that many updates were not reaching their intended audience.

    Each WebPush message has a TTL (Time To Live), which is the number of seconds that a message may be stored if the user is not immediately available. This value is specified as a TTL: header provided as part of the WebPush message sent to the push server. The original draft of the specification stated that if the header is missing, the default TTL is zero (0) seconds. This means that if the TTL header was omitted, and the corresponding recipient was not actively connected, the message was immediately discarded. This was probably not obvious to senders, since the push server would still return a 201 Success status code.

    Immediately discarding the message if the user is offline is probably not what many developers expect to happen. The working group decided that it was better for the sender to explicitly state the length of time that the message should live. The Push Service may still limit this TTL to its own maximum. In any case, the Push Service server would return the actual TTL in the POST response.

    You can still specify a TTL of zero, but it will be you setting it explicitly, rather than the server setting it implicitly. Likewise, if you were to specify TTL: 10000000 and the Push Service only supports a maximum TTL of 5,184,000 seconds (about one month), then the Push Service would respond with a TTL: 5184000 header.

    As an example,

    
    curl -v -X POST https://updates.push.services.mozilla.com/push/LongStringOfStuff \
    -H "encryption-key: keyid=p256dh;dh=..." \
    -H "encryption: keyid=p256dh;salt=..." \
    -H "content-encoding: aesgcm128" \
    -H "TTL: 60" \
    --data-binary @encrypted.data
    
    > POST /push/LongStringOfStuff HTTP/1.1
    > User-Agent: curl/7.35.0
    > Host: updates.push.services.mozilla.com
    > Accept: */*
    > encryption-key: ...
    > encryption: ...
    > content-encoding: aesgcm128
    > TTL: 60    
    > Content-Length: 36
    > Content-Type: application/x-www-form-urlencoded
    >
    * upload completely sent off: 36 out of 36 bytes
    < HTTP/1.1 201 Created
    < Access-Control-Allow-Headers: content-encoding,encryption,...
    < Access-Control-Allow-Methods: POST,PUT
    < Access-Control-Allow-Origin: *
    < Access-Control-Expose-Headers: location,www-authenticate
    < Content-Type: text/html; charset=UTF-8
    < Date: Thu, 18 Feb 2016 20:33:55 GMT
    < Location: https://updates.push.services.mozilla.com...
    < TTL: 60
    < Content-Length: 0
    < Connection: keep-alive
    
    

    In this example, the message would be held for up to 60 seconds before either the recipient reconnected, or the message was discarded.
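
    Since the service echoes the applied TTL back in the response (the "< TTL: 60" line above), you can also check for clamping in code. A rough sketch using the "requests" library, with endpoint, headers, and encrypted_data as stand-ins:

    import requests

    resp = requests.post(endpoint, data=encrypted_data, headers=headers)
    # The service reports the TTL it actually applied, which may be lower
    # than the value requested.
    print(resp.headers.get("TTL"))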

    If you fail to include a TTL header, the server will respond with an HTTP status code of 400. The result will be similar to:

    
    < HTTP/1.1 400 Bad Request
    < Access-Control-Allow-Headers: content-encoding,encryption,...
    < Access-Control-Allow-Methods: POST,PUT
    < Access-Control-Allow-Origin: *
    < Access-Control-Expose-Headers: location,www-authenticate
    < Content-Type: application/json
    < Date: Fri, 19 Feb 2016 00:46:43 GMT
    < Content-Length: 84
    < Connection: keep-alive
    <
    {"errno": 111, "message": "Missing TTL header", "code": 400, "error": "Bad Request"}
    

    The returned error will contain a JSON block that describes what went wrong. Refer to our list of error codes for more detail.

    We understand that the change to require the TTL header may have not reached everyone, and we apologize about that. We’re going to be “softening” the requirement soon. The server will return a 400 only if the remote client is not immediately connected, otherwise we will accept the WebPush with the usual 201. Please understand that this relaxation of the spec is temporary and we will return to full specification compliance in the near future.

    We’re starting up a Twitter account, @MozillaWebPush, where we’ll post announcements, status, and other important information related to our implementation of WebPush. We encourage you to follow that account.

    Shutting down the legacy Sync service

    In response to strong user uptake of Mozilla’s new Sync service powered by Firefox Accounts, earlier this year we announced a plan to transition users off of our legacy Sync infrastructure and onto the new product.  With this migration now well under way, it is time to settle the details of a graceful end-of-life for the old service.

    We will shut down the legacy Sync service on September 30th 2015.

    We encourage all users of the old service to upgrade to a Firefox Account, which offers a simplified setup process, improved availability and reliability, and the possibility of recovering your data even if you lose all of your devices.

    Users on Firefox 37 or later are currently being offered a guided migration process to make the experience as seamless as possible.  Users on older versions of Firefox will see a warning notice and will be able to upgrade manually.  Users running their own Sync server, or using a Sync service hosted by someone other than Mozilla, will not be affected by this change.

    Update: shutdown of the legacy Sync service has been completed.   Users who are yet to migrate off the service will be offered the guided upgrade experience until Firefox 44.  Firefox 44 and later will automatically and silently disconnect from legacy Sync.

    We are committed to making this transition as smooth as possible for Firefox users.  If you have any questions, comments or concerns, don’t hesitate to reach out to us on sync-dev@mozilla.org or in #sync on Mozilla IRC.

     

    FAQ

     

    • What will happen on September 30th 2015?

    After September 30th, we will decommission the hardware hosting the legacy Sync service and discard all data stored therein.  The corresponding DNS names will be redirected to static error pages, to ensure that appropriate messaging is provided for users who have yet to upgrade to the new service.

    • What’s the hurry? Can’t you just leave it running in maintenance mode?

    Unfortunately not.  While we want to ensure as little disruption as possible for our users, the legacy Sync service is hosted on aging hardware in a physical data-center and incurs significant operational costs.  Maintaining the service beyond September 30th would be prohibitively expensive for Mozilla.

    • What about Extended Support Release (ESR)?

    Users on the ESR channel have support for Firefox Accounts and the new Sync service as of Firefox 38.  Previous ESR versions reach end-of-life in early August and we encourage all users to upgrade to the latest version.

    • Will my data be automatically migrated to the new servers?

    No, the strong encryption used by both Sync systems means that we cannot automatically migrate your data on the server.  Once you complete your account upgrade, Firefox will re-upload your data to the new system (so if you have a lot of bookmarks, you may want to ensure you’re on a reliable network connection).

    • Are there security considerations when upgrading to the new system?

    Both the new and old Sync systems provide industry-leading security for your data: client-side end-to-end encryption of all synced data, using a key known only to you.

    In legacy Sync this was achieved by using a complicated pairing flow to transfer the encryption key between devices.  With Firefox Accounts we have replaced this with a key securely derived from your account password.  Pick a strong password and you can remain confident that your synced data will only be seen by you.

    • Does Mozilla use my synced data to market to me, or sell this data to third parties?

    No.  Our handling of your data is governed by Mozilla’s privacy policy which does not allow such use.  In addition, the strong encryption provided by Sync means that we cannot use your synced data for such purposes, even if we wanted to.

    • Is the new Sync system compatible with Firefox’s master password feature?

    Yes.  There was a limitation in previous versions of Firefox that prevented Sync from working when a master password was enabled, but this has since been resolved.  Sync is fully compatible with the master password feature in the latest version of Firefox.

    • What if I am running a custom or self-hosted Sync server?

    This transition affects only the default Mozilla-hosted servers.  If you are using a custom or self-hosted server setup, Sync should continue to work uninterrupted and you will not be prompted to upgrade.

    However, the legacy Sync protocol code inside Firefox is no longer maintained, and we plan to begin its removal in 2016.  You should consider migrating your server infrastructure to use the new protocols; see below.

    • Can I self-host the new system?

    Yes, either by hosting just the storage servers or by running a full Firefox Accounts stack.  We welcome feedback and contributions on making this process easier.

    • What if I’m using a different browser (e.g. SeaMonkey, Pale Moon, …)?

    Your browser vendor may already provide alternate hosting.  If not, you should consider hosting your own server to ensure uninterrupted functionality.

    The changing face of software quality

    TL;DR

    QA as a function in software is and has been changing. It is less about validating changes and writing test plans and more about pushing quality forward through tools, automation, and process refinement. We must work to improve the QA phase (everything between “dev complete” and “ready for deployment”), and quality throughout the whole life-cycle. In doing so QA becomes a facilitator, an ambassador, a shepherd of code, providing simple, confident, painless ways of getting things out the door. We may become reliability engineers, production engineers, tools engineers, infrastructure engineers, and/or release engineers. In smaller teams you may be many of these things, in bigger ones they may be distinct roles. The line between QA and Ops may blur, as might the line between Dev and QA. Manual QA still plays a large role, but it is in addition to, not the standard. It is another tool in the toolbox.

    What is QA and Software Quality really?

    In my mind the purpose of QA is not merely to find bugs or validate changes before release, but to ensure that the product that our users receive is of high quality. That can mean many different things, and the number of bugs is definitely a part of it, but I think that is a simplification of the definition of software quality. QA is changing in many companies these days, and we’re making a big push this year in Mozilla’s Cloud Services team to redefine what QA and quality means.

    Quality software, to me, is about happy users. It is users who love and care about your product, users who will evangelize your product, and users that contribute feedback to make your product better. If you have those users, then you have a great product.

    As QA we don’t write code, we don’t run the servers, we don’t decide what to build nor how it should look, but we do make sure that all of those things are up to the standards that we all set for ourselves and our products. If our goal in software is to make our users happy then giving them a quality experience that solves a problem for them and makes their lives easier is how we achieve that goal. This is what software is all about, and is something that everyone strives for. Quality really is everyone’s concern. It should not be an afterthought and it shouldn’t fall solely on QA. It is a mindset that needs to be ingrained in engineers, product managers, designers, managers, and operations. It is a part of company culture.

    Aspects of Quality

    When talking about quality it helps to have a clear idea what we mean when we say software should be of high quality.

    • Performance
      • load times
      • animation/ui smoothness
      • responsiveness to interactions
    • Stability
      • consistency, is it the same experience every time I use the product
      • limited downtime
    • Functionality
      • does it do what we said it would
      • does it do so ‘correctly’ (limited bugs)
      • does it solve a problem
    • Usability
      • is it simple, attractive, unobtrusive
      • does it frustrate or annoy
      • does it make sense
      • does it do what the user wants and expects

    These aspects involve everyone. You can lobby for other aspects that you find important, and please do, this list is by no means exhaustive.

    Great, so we’re thinking about ways that we can deliver quality software beyond test runs and checklists, and what it means to do so, but there’s a bit of an issue.

    The Grand Quality Renaissance

    A lot of what we do in traditional QA is 1:1 interactions with projects when there are changes ready to be tested for release. We check functionality, we check changes, we check for regression, we check for performance and load. This is ‘fine’, by most standards. It gets the job done, the product is validated, and it goes for release. All is good… Except that it doesn’t scale, the work you do doesn’t provide ongoing value to the team, or the company. As soon as you complete your testing that work is ‘lost’ and then you start from scratch with the next release.

    In Cloud Services we have upwards of 20 projects that are somewhere between maintenance mode, active, or in planning. We simply don't have the head count (5 currently) to manage all of those projects simultaneously, and even if we did, plans change. We have to react to the ongoing changes in the team, the company, and the world (well, at least as it relates to our products). Sometimes we won't be able to react because 1:1 interactions aren't flexible enough. Okay, simple answer: let's grow the team. That may work at times, but it also takes a lot of time, a lot of money, and even worse you might put yourself in a position where you have the head count to manage 20 projects, but suddenly there's only 10 projects that we care about going forward and then we have to let some people go. That's not fair to anyone – we need a better solution.

    What we’re talking about, really, is how we can do more with less: how we can be more effective with the people and resources that we have. We need to do work that is effective and applicable to either:

    • many projects at once (which can be hard to do for tests, unless there is a lot of crossover, but possible with things like infrastructure and build tools), or
    • can be reused by the same project each time a change comes up for release (much more realistic for testing).

    Once we have that work in place, then we know that if we grow the team it is deliberate, not reactionary. Being in control is good!

    That doesn’t really mean anything concrete, so let’s put some goals in place to give us something to work towards.

    • Improve 1:1 Person:Project effectiveness, scale without sacrificing quality
    • Implement quality measurement tooling and practices to track changes and improvements
    • Facilitate team scaling by producing work of ongoing value via automated tests and test infrastructure
    • Reduce deployment time/pain by automating build and release pipeline
    • Reduce communication overhead and back and forth between teams by shortening feedback loops
    • Increase confidence in releases through stability and results from our build/tools/test/infrastructure work
    • Reframe what quality means in cloud services through leadership and driving forward change

    Achieving those goals means we as QA are more effective, but the side effects for the entire organization are numerous. We’ll see more transparency between teams; developers will receive feedback faster and be more confident in their work; they’ll take control of their own builds and test runs, removing back-and-forth communication delays; operations can one-click release to production with fewer concerns about late nights and rollbacks; more focus can be applied by everyone to upholding the aspects of quality rather than fighting fires; and, ideally, productivity, quality, and happiness all go up (how do we gather some telemetry on that?).

    We’re putting together the concrete plan for what this will look like at Mozilla’s Cloud Services. It will take a while to achieve these goals, and there will undoubtedly be issues along the way, but the end result is a much more stable, reliable, and consistent process. That last one is particularly important. If we’re consistent then we can measure ourselves, which leads to us being predictable. Being predictable, statistically speaking, means our decision making is much better informed. At a higher level, our focus shifts to making the best product we can, delivering it quickly, and iterating on feedback. We learn from our users and integrate their feedback into our products, without builds or bugs or tools or process getting in the way. We make users happy. When it comes down to it, that’s really what this is all about – that’s the utopia. Let’s build our utopia!

    Combain Deal Helps Expand Mozilla Location Service

    Last week, we signed an agreement with Combain AB, a Swedish company dedicated to accurate global positioning through wireless and cell tower signals.

    The agreement lets Mozilla use Combain’s service as a fallback for Mozilla Location Service. Additionally, we’re exchanging data from our stumbling efforts to improve the overall quality of our databases.

    We’re excited about both parts of this deal.

    • Having the ability to fall back to another provider in situations where we feel that our data set is too sparse to provide an acceptable location gives Mozilla Location Service users some extra confidence in the values provided by the service. We have also extended the terms of our deal with Google to use the Google Location Service on Firefox products, giving us excellent location tools across our product line.
    • Exchanging data helps us build out our database. Ultimately, we want this database to be available to the community, allowing companies such as Combain to focus on their algorithmic analysis, service guarantees and improvements to geolocation without having to deal with the initial barrier of gathering the data. We believe this data should be a public resource and are building it up with that goal in mind. Exchange programs such as the one we’ve engaged in with Combain help us get closer to the goal of comprehensive coverage while allowing us to do preliminary testing of our data exchange process with privacy-respecting partners.

    We’ve got a long way to go to build out our map and all of Mozilla Location Service, and we hope to announce more data exchange agreements like this in the future. You can watch our progress (fair warning – you can lose a lot of time poking at that map!) and contribute to our efforts by enabling geolocation stumbling in the Firefox for Android browser. Let’s map the world!

    February Show and Tells

    Each week the Cloud Services team hosts a “Show and Tell” where people on the team share the interesting stuff they’ve been doing. This post wraps up the month’s show and tells so that we can share them with everyone.

    Jan 28

    • Andy talks about blogging the show and tells
    • Zach showed his work putting avatars in the browser UI
    • Kit showed using Go interfaces to mock network interfaces

    Jan 28th recording (29 min, 21 seconds)

    Feb 11

    • Ian showed how to super power the permissions on your site via an addon
    • Sam discussed error-handling results from USENIX.

    Feb 11th recording (12 min, 45 seconds)

    Unfortunately the next two show and tells were cancelled. We’re hoping for a fuller list in March.

    What’s Hawk authentication and how to use it?

    At Mozilla, we recently had to implement the Hawk authentication scheme for a number of projects, and we ended up creating two libraries to ease integration into Pyramid and Node.js apps.

    But maybe you don’t know Hawk?

    Hawk is a relatively new technology, crafted by one of the original OAuth specification authors, which aims to replace the 2-legged OAuth authentication scheme with a simpler approach.

    It is an authentication scheme for HTTP, built around HMAC digests of requests and responses.

    Every authenticated client request has an Authorization header containing a MAC (Message Authentication Code) and some additional metadata. Each server response to an authenticated request then contains a Server-Authorization header that authenticates the response, so the client is sure it comes from the right server.
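
    To give an idea of what this looks like on the wire, here is a hypothetical exchange (the header layout follows the Hawk specification, but every value below is made up for illustration):

    GET /resource/1?b=1&a=2 HTTP/1.1
    Host: example.com:8000
    Authorization: Hawk id="dh37fgj492je", ts="1353832234", nonce="j4h3g2", ext="some-app-data", mac="6R4rV5iE+NPoym+WwjeHzjAGXUtLNIxmo1vpMofpLAE="

    HTTP/1.1 200 OK
    Server-Authorization: Hawk mac="XIJRsMl/4oL+nn+vKoeVZPdCHXB4yJkNnBbTbHFZUYE="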

    Exchange of the hawk id and hawk key

    To sign the requests, a client needs to retrieve a token id and a token key from the server.

    The excellent team behind Firefox Accounts put together a scheme to do that. If you are not interested in the details, jump directly to the next section.

    When the server needs to send you the credentials, it returns them inside a specific Hawk-Session-Token header. Two values (the hawk id and the hawk key) can then be derived from this token, and you will use them to sign your next requests.

    In order to get the hawk credentials, you’ll need to:

    First, do an HKDF derivation on the given session token. You’ll need to use the following parameters:

    key_material = HKDF(hawk_session, "", 'identity.mozilla.com/picl/v1/sessionToken', 32*2)
    

    The identity.mozilla.com/picl/v1/sessionToken is a reference to this way of deriving the credentials, not an actual URL.

    Then, the key material you get out of the HKDF needs to be separated into two parts: the first 32 bytes are the hawk id, and the next 32 bytes are the hawk key (both are typically handled as hex strings).
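
    Put together, the derivation looks something like this in Python (a minimal sketch using the cryptography package’s HKDF; it assumes the Hawk-Session-Token value is hex-encoded, and any RFC 5869 HKDF-SHA256 implementation would do):

    import binascii

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_hawk_credentials(hawk_session_token):
        """Return (hawk_id, hawk_key) derived from a hex Hawk-Session-Token."""
        key_material = HKDF(
            algorithm=hashes.SHA256(),
            length=32 * 2,  # two 32-byte values
            salt=None,      # empty salt, as in the parameters above
            info=b"identity.mozilla.com/picl/v1/sessionToken",
        ).derive(binascii.unhexlify(hawk_session_token))
        hawk_id = binascii.hexlify(key_material[:32]).decode()
        hawk_key = binascii.hexlify(key_material[32:]).decode()
        return hawk_id, hawk_key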

    HTTPie

    To showcase APIs in the documentation, we like to use HTTPie, a curl replacement with a nicer interface, built around the python requests library.

    Luckily, HTTPie allows you to plug in different authentication schemes, so we created a wrapper around mohawk to add hawk support to the requests lib.

    Doing hawk requests in your terminal is now as simple as:

    $ pip install requests-hawk httpie
    $ http GET localhost:5000/registration --auth-type=hawk --auth='id:key'

    In addition, it helps with crafting requests using the requests library directly:
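
    For instance (a sketch following the requests-hawk README; the id and key values are placeholders, and the exact keyword names may differ between versions):

    import requests
    from requests_hawk import HawkAuth

    # Sign the request with an existing hawk id / key pair.
    hawk_auth = HawkAuth(id="your-hawk-id", key="your-hawk-secret-key")
    response = requests.get("http://localhost:5000/registration", auth=hawk_auth)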

    Alternatively, if you don’t have the token id and key, you can pass the hawk session token presented earlier and the lib will take care of the derivation for you.
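
    That variant would look something like this (again a sketch; the hawk_session and server_url parameters are the ones documented by requests-hawk):

    import requests
    from requests_hawk import HawkAuth

    # Create a session first; the server returns the token in a header.
    response = requests.post("http://localhost:5000/registration")

    # Let the library derive the hawk id/key from the session token itself.
    hawk_auth = HawkAuth(
        hawk_session=response.headers["hawk-session-token"],
        server_url="http://localhost:5000",
    )
    requests.get("http://localhost:5000/registration", auth=hawk_auth)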

    Integrate with python pyramid apps

    If you’re writing pyramid applications, you’ll be happy to learn that Ryan Kelly put together a library that makes Hawk work as an authentication provider for them. I’m amazed at how simple it is to use.

    Here is a demo of the kind of setup we used for Daybed, a form validation and data storage API:
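
    The sketch below shows the essential wiring (the in-memory TOKENS dict is a stand-in for whatever storage you actually use):

    from pyramid.config import Configurator
    from pyramid_hawkauth import HawkAuthenticationPolicy

    # Stand-in token store; in a real app, use persistent storage.
    TOKENS = {"some-token-id": "some-token-key"}

    def get_hawk_id(request, tokenid):
        # Resolve the incoming token id to a (token_id, token_key) pair.
        return tokenid, TOKENS[tokenid]

    config = Configurator()
    config.set_authentication_policy(
        HawkAuthenticationPolicy(decode_hawk_id=get_hawk_id))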

    The get_hawk_id function takes a request and a token id, and returns a (token_id, token_key) tuple.

    How you want to store and retrieve the tokens is up to you. The default implementation (i.e. if you don’t pass a decode_hawk_id function) decodes the key from the token itself, using a master secret on the server, so you don’t need to store anything.

    Integrate with node.js Express apps

    We had to implement Hawk authentication for two node.js projects, and ended up factoring everything out into a library for Express, named express-hawkauth.

    In order to plug it into your application, you’ll need to use it as a middleware:
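
    Something along these lines should work (a sketch in the spirit of the express-hawkauth README; the getMiddleware call and its getSession/createSession/setUser options are assumptions based on that README, so check the current one before copying):

    var express = require("express");
    var hawk = require("express-hawkauth");

    var app = express();

    var hawkMiddleware = hawk.getMiddleware({
      hawkOptions: {},
      // Look up the key for a given hawk id (a hard-coded stand-in here).
      getSession: function(tokenId, cb) {
        cb(null, {key: "secret-key", algorithm: "sha256"});
      },
      // Persist the id/key pair of a newly created session.
      createSession: function(id, key, cb) {
        cb(null);
      },
      // Attach whatever user information you need to the request.
      setUser: function(req, res, credentials, cb) {
        req.user = credentials.id;
        cb();
      }
    });

    app.get("/hawk-enabled-endpoint", hawkMiddleware, function(req, res) {
      res.json({user: req.user});
    });

    app.listen(5000);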

    If you pass the createSession parameter, all non-authenticated requests will create a new hawk session and return it with the response, in the Hawk-Session-Token header.

    If you only want to check that a valid hawk session exists (without creating a new one), just use the middleware without defining a createSession parameter.

    Some reference implementations

    As a reference, here is how we’re using the libraries I’ve been talking about, in case it helps you integrate them into your own projects.