PiCL Crypto Review

I’ve been working on the security design for the next version of Firefox Sync, which is the bit that keeps your bookmarks/history/saved-passwords/etc synchronized between Firefoxes on all your various devices. The working title is “PiCL”, which stands for “Profile In the CLoud”. In the coming year, this will be deployed to roughly 500 million Firefox users.

I’m looking for feedback on our design. It involves key-stretching (PBKDF2 and scrypt), secure handling of password-derived keys, SRP, and a healthy distrust of SSL. If you’re interested, read on!

Unlike the previous Sync design, in which a full-strength random key was transferred between paired devices with a J-PAKE-based protocol, PiCL is obligated to use a traditional email+password as account credentials. My task has been to provide as much cryptographic protection of the user’s private data as possible, given that constraint. Our goal is to enable the user to have some data that is not visible to Mozilla (we run the servers) or their email-provider (through which a forgotten password can be reset).

I’m focusing on the portion of the system that gets from email+password to a pair of master keys “kA” and “kB” (described below) and a session token. There are a lot of parts you can ignore for now, like how we use kA/kB to encrypt the user’s data later (we’ll use HKDF to get per-datatype AES/HMAC keys), how BrowserID certificates authorize communication with the storage-server, and how the data actually gets synchronized (efficient delta transfer, resolving merge conflicts, etc).

The protocol is mainly described here:

with some additional background here:

The protocol page might be missing some context, so here’s a summary:

  • PICL accounts are defined by an email address and a password
  • PICL will encrypt all user data, with per-datatype keys derived from either a “class-A” or a “class-B” master key (kA and kB).
  • data in class-A is recoverable, even if you forget your password: you can ask us nicely (via the usual email-based account-reset process) and we’ll tell you kA.
  • data in class-B is not recoverable if you forget the password
  • we run a “keyserver”, which is used during signin. It knows kA and a wrapped form of kB.
  • encrypted data is stored on a “storage server”, and to control it, clients must present a BrowserID assertion, whose certificate is signed by the keyserver. The Session Token gets you these signatures.

My goal is for the confidentiality of the class-B data to be no weaker than the account password, and for this password never to be revealed outside the user’s browser. If your password can’t be brute-forced, your data should remain safe, even if our keyserver is compromised. Class-A data should be no weaker than the typical “service-provider sees everything” approach. We also do a lot of key-stretching on the password, to make a brute-force attack as expensive as possible.

It’s supposed to resist everything except the following attacks:

  • a TLS failure during initial account creation allows the MitM to offline brute-force the user’s password
  • anybody can try an online brute-force attack, rate-limited of course
  • the keyserver can do an offline brute-force attack
  • the IdP (who sees the forgot-password email) can reset the account themselves and learn kA, but not kB
  • compromising the browser itself reveals everything, of course

In particular, the storage servers shouldn’t be any better off than a random stranger, to keep the security perimeter small.

Feel free to send comments to me, or drop by the #picl IRC channel on irc.mozilla.org (I’m “warner” on #picl), or send them to the sync-dev mailing list (https://mail.mozilla.org/listinfo/sync-dev). And please forward this to anyone else you think might be interested.

thanks!
-Brian

13 comments

  1. The docs include the statement “The PiCL client uses the SRP protocol (http://srp.stanford.edu/) to prove that it knows the (stretched) account password without revealing the actual password (or information enabling a brute-force attack) to the server or any eavesdroppers. ”

    But I’ve inferred from docs elsewhere that the server-side data store associated with SRP is (in the general case) brute-forceable (if its confidentiality is compromised). Have I understood correctly?

    • Yup, you understand it correctly. The SRP “Verifier” is derived from the SRP password, so an attacker who steals it can use it as a test to see if you’ve guessed the password correctly (you derive a new verifier for each potential password, and compare it to the one you stole from the keyserver). In this way, it behaves a lot like a hashed password.
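
      To make the “behaves like a hashed password” point concrete, here is a toy sketch of the offline test. The group parameters and hash here are illustrative stand-ins, not PiCL’s actual SRP construction:

```python
import hashlib

# Toy parameters for illustration only; real SRP uses a vetted 2048-bit group
N = 2**255 - 19   # a known prime, standing in for the SRP modulus
g = 2

def make_verifier(salt: bytes, password: bytes) -> int:
    # Simplified SRP: x = H(salt || password), v = g^x mod N
    x = int.from_bytes(hashlib.sha256(salt + password).digest(), "big")
    return pow(g, x, N)

# An attacker who steals (salt, v) from the keyserver can test guesses offline:
salt = b"\x01" * 16
stolen_v = make_verifier(salt, b"hunter2")       # what the server stored
cracked = None
for guess in (b"password", b"letmein", b"hunter2"):
    if make_verifier(salt, guess) == stolen_v:   # guess confirmed
        cracked = guess
```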

      The keyserver is (hopefully) the only party who learns the SRP verifier, so they’re the only one who gets this brute-force attack. The verifier is protected only by TLS during account creation and reset-forgot-password: that’s the weakest step of the whole protocol. During password-change it’s also protected by a session key (which comes out of an earlier SRP exchange that proved knowledge of the old password).

      We salt+stretch the real password before feeding it into SRP, to make this brute-force attack as expensive as possible (both for us and for someone who breaks into our keyserver). http://keywrapping.appspot.com/ has some sliders to play around with stretching parameters and attack costs.
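
      As a rough sketch of that stretch-then-fold pipeline (the iteration counts, scrypt parameters, and the “picl/srpPassword” info label are illustrative guesses on my part, not the spec’s values):

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # Minimal HKDF (RFC 5869) for length <= 32: extract, then one expand block
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

def stretch(email: bytes, password: bytes, main_salt: bytes) -> bytes:
    # Stage 1: PBKDF2, using the email as a per-user quasi-salt, so the
    # expensive work can start before the server-held salt arrives
    k1 = hashlib.pbkdf2_hmac("sha256", password, email, 20_000)
    # Stage 2: scrypt for memory-hardness (parameters illustrative, not PiCL's)
    k2 = hashlib.scrypt(k1, salt=email, n=2**14, r=8, p=1, dklen=32)
    # Stage 3: fold the real random salt in only at the end, via HKDF
    return hkdf_sha256(k2, main_salt, b"picl/srpPassword")
```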

  2. Hmm, if I am extra-paranoid, can I upgrade data from class A to class B? (Of course, the server will be able to recover existing data if it has backups, but new data will be unavailable, hopefully, even with the known plaintext.) It would be nice if we can opt-in to Weave-levels of security. I’m assuming some sets of data are class-B only and the client will refuse to downgrade.

    Actually, as far as I can tell, the actual collection-specific decryption isn’t spec’ed yet; though I’m not sure it needs to be at this point. Might be useful to sketch up possible class-A and class-B flows to make sure. Or I’m just dumb and missed it… the document was getting long and I started skimming :p

    • Yeah, the idea is that users get to choose which class their data goes into, and you can switch at any time, with the old-data-still-readable issue you noted. We’re still trying to figure out the defaults, but I suspect it’ll be something like a two-position switch, with position one being “passwords are class-B, everything else is class-A”, and position two being “everything is class-B”. Maybe position one will put history and saved form data (which are somewhat more ephemeral than, say, bookmarks) into class-B too.

      And yeah, we haven’t defined the per-collection crypto yet. General idea is that we derive separate keys for each datatype (one for A, another for B), then do AES+HMAC-SHA256 on the individual records. Some of the encoding schemes we’re looking at provide collection-level integrity (so the server can’t selectively omit or delete records), but most only provide record-level integrity. It doesn’t appear easy to design an efficient re-synchronizable protocol that maintains client-validated full-collection integrity.
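
      A sketch of what that per-datatype derivation might look like (the info labels and key sizes are assumptions on my part, since this part isn’t specified yet; the AES step itself is elided):

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    # HKDF (RFC 5869): extract, then expand in 32-byte blocks
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block = b"", b""
    for i in range((length + 31) // 32):
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

def datatype_keys(master_key: bytes, datatype: str):
    # 64 bytes of output: first half an AES-256 key, second half an HMAC key
    okm = hkdf_sha256(master_key, b"", b"picl/" + datatype.encode(), 64)
    return okm[:32], okm[32:]

kA = b"\x42" * 32                    # stand-in for the class-A master key
aes_key, mac_key = datatype_keys(kA, "bookmarks")
record_ciphertext = b"..."           # AES output, elided here
# encrypt-then-MAC: the tag covers the ciphertext, giving record-level integrity
tag = hmac.new(mac_key, record_ciphertext, hashlib.sha256).digest()
```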

  3. Minimal input here, but would it maybe be better to use Mozilla’s Persona instead of an entirely new account? You could incorporate most of those features into it and it’ll be a breeze if someone has used other Mozilla services in the past.

    • This is actually part of the Persona effort :). We had a huge debate about just that, and concluded that we needed more than basic Persona because your IdP (email provider) knows your password, which means they could learn your class-B data, and we’re trying to keep that data secret from the IdP. We could (and should, but currently don’t) use Persona/BrowserID to manage account-reset and email-verification, but we need some other kind of password for class-B encryption.

  4. Slightly off-topic:

    Thank you for allowing us to host our own Firefox Sync server. Keep up the good work!

    • Will we be able to still run our own sync servers?

      • Yes, that’s the plan. We’re also hoping to make it easier to run your own server (fewer moving pieces). You’ll probably have to make some advanced-preferences (“about:config”) change to switch to your own keyserver and storage server URLs.

  5. Morten Andersen

    Hi Brian,

    Sorry for the late comments. I hope they are still useful.

    I primarily focused on the SRP part.

    In general it is an impressive specification. I only have a few comments, given below. I apologize in advance if they come out more harshly than intended; overall, as I said, I am impressed.

    Initial disclaimer: I am not a cryptographer, just a practitioner programmer with a general knowledge of and interest in cryptography.

    1) Client side “parallel” key stretching

    Adding a password stretching part to SRP is a great add-on to improve the security if the password file should ever get compromised.

    But I am not really fond of the salt part only being part of the final HKDF. I of course understand your arguments about being able to run the password stretching in parallel, and you do note that it has a “tiny cost in security”. But since it is pretty unorthodox I think it is important to get a “card-carrying cryptographer” to evaluate if “tiny is tiny enough” :-)

    My arguments against it are:

    a) You use the email as a “small” per-user pseudo-salt, which will of course partially protect against somebody generating a set of “almost”-there rainbow tables of “stretchedPW” values. But the entropy of email addresses is really low, I would assume (especially since the email market is dominated by a few extremely large domains – I assume you have a large portion of GMail, Hotmail and Yahoo mail addresses), so I do not think it lives up to the normal randomness requirements for a good salt.

    b) You combine 3 different KDF algorithms (PBKDF2, scrypt and HKDF). It always raises suspicion when something is combined like this. I cannot see any obvious problems, but it makes me a little bit worried since it is “off the beaten cryptographical path”.

    Because of this I would suggest reconsidering the key stretching in favor of a solution like the one below:

    2) “Optimistic” client side key stretching

    I suggest staying on the beaten path and having the mainSalt as input into the initial KDF. This could be optimized in most cases by using the statefulness of the browser to store the salt in a client-side cookie. If the client has the cookie it can do the key stretching optimistically while waiting for the /auth/start reply. If it does not have the cookie with the salt (or has an invalid salt) it will of course have to wait for the /auth/start reply. But I would assume that most returning clients will have the correct salt, so for those the parallel/optimistic client-side stretching will succeed.

    3) Client side optimization (minor)

    Instead of the /auth/start key-stretching parallelism criticized above, the client could do a parallel calculation of A, if I am right. This is the only part of SRP that can safely be done before receiving the initial response from the server.

    4) Two salts

    You use two different salts (mainSalt and srpSalt). If I understand your arguments correctly, srpSalt is really not necessary from a security point of view, and is only used to “follow the SRP specification” to avoid confusion.

    But with the number of users you estimate for the system I assume that it makes quite some difference to store, handle, transmit to the client and process an additional unnecessary blob of salt data for each user. So I would say the arguments for having it are not very strong compared to the cost.

    5) Good question you raise on how the “a” and “b” should be generated in the SRP Notes section. Unfortunately I am unable to answer it :-( (but I would like to know the answer).

    6) Missing check for minimum size of random numbers a and b

    According to the original srp.ps section 4.4.3 there are some minimum requirements that should be checked for a and b:

    a, b > log_g (N)

    I cannot see these checks in your flow diagrams.

    In RFC-5054 it is stated in “2.5.3. Server Key Exchange” and “2.5.4. Client Key Exchange”:

    “..a|b is a random number that SHOULD be at least 256 bits in length”

    I am unsure whether this is another way of stating the same minimum-size requirement as in srp.ps. And I am also unsure whether this should be understood to mean that the random numbers should be drawn from the 256-bit random space, or that they should actually be strictly larger than 2^256.

    7) SRP resistance toward small subgroup attacks

    There shall be no doubt that I really like SRP. But as far as I can tell it is not one of the most analyzed protocols (due to the discussions about the potential patents for it). So I am unsure whether it requires the same subtleties in its implementation as basic DH. Ferguson and Schneier have an in-depth treatment of this in chapter 12 of “Practical Cryptography”, for instance stating that “any received value that is supposed to be a power of g should be checked for actually being in the subgroup generated by g” (p. 215).

    I am simply not knowledgeable enough on prime groups and subgroup theory to give you any advice on whether implementation requirements like this are also necessary for SRP. But since the SRP specifications are more than 10 years old, some of the modern knowledge about problematic DH implementations might also apply to SRP (since it is a similar mechanism).

    The only paper I have been able to find on SRP attacks is Feng Hao’s: “On Small Subgroup Non-confinement Attack” (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.190.817). I have only skimmed it and as far as I can see it explicitly states that the described attack is not a problem in the practical world.

    8) Protection against discovery/harvesting of mail addresses/user accounts.

    It is not explicitly stated what response /auth/start will give if it cannot find a given user id. To avoid the possibility of somebody checking whether a given email address exists in the database, it should probably generate a “fake” valid response (with random salt and srpB parameters). I am not entirely sure it is worth the effort to actually generate a valid srpB parameter, given that it will put additional load on the server, but if you want to protect against timing attacks it is necessary to reply to an invalid account request in approximately the same time it takes to reply to a valid one.

    9) Have you considered contacting Thomas Wu (the SRP inventor) for a review of the PiCL crypto specification? I can see from, for instance, this blog post from 2012 http://bert-hubert.blogspot.dk/2012/02/on-srp-some-implementation-notes-and.html that he occasionally comments on SRP issues (see his comment at the end).

    Phew, this got longer than I expected when I set out to write it. I apologize for being so verbose and lengthy in my writing :-). I hope you can use the input, and please don’t take it as a harsh critique – that was not my intention.

    Best Regards
    Morten Andersen, Denmark

    • > Sorry for the late comments. I hope they are still useful.

      Absolutely! Thanks for looking at it.

      > a) You use the email as a “small” per-user pseudo-salt, which will of course partially protect against somebody generating a set of “almost”-there rainbow tables of “stretchedPW” values. But the entropy of email addresses is really low, I would assume (especially since the email market is dominated by a few extremely large domains – I assume you have a large portion of GMail, Hotmail and Yahoo mail addresses), so I do not think it lives up to the normal randomness requirements for a good salt.

      Hm, I think “salts must be random” is a common misconception about what salts are for. Salts make guessing attacks more expensive by denying the attacker the ability to share their work among multiple accounts. Attacking ten accounts should cost ten times as much computation as attacking one account. Without salt, they could test each guess against the server-side verifier for every account, then test the second guess against all accounts, etc.

      So the way I see it, the salt value must merely be unique. In fact, it could simply be *mostly* unique: if 90% of the accounts had unique salts, and the other 10% shared a salt with an account in the first set, then most salts would get you just one account, and a couple would get you two. The attacker’s work factor would still be pretty close to what completely unique salts would get you.

      Using a random generator to create salts is an easy stateless way to get unique values, if you make them long enough and your RNG isn’t broken. But the notion of “entropy” only applies to secrets, and the salt is necessarily revealed to anyone upon request (after all, the legitimate client must know the salt to prove to us that they’re legitimate, so it’s not as if we can hide it from an attacker).

      Also, if two similar-looking salts (and the same password) result in similar-looking keys, then the KDF is broken. So even if two salts differ in just a single bit, that still doubles the attacker’s work.
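
      A quick demonstration of that point, using PBKDF2 as the KDF (parameters arbitrary):

```python
import hashlib

password = b"correct horse battery staple"
salt_a = b"\x00" + b"\x55" * 15
salt_b = b"\x01" + b"\x55" * 15   # differs from salt_a by a single bit

key_a = hashlib.pbkdf2_hmac("sha256", password, salt_a, 1_000)
key_b = hashlib.pbkdf2_hmac("sha256", password, salt_b, 1_000)

# A sound KDF gives unrelated outputs: roughly half of the 256 bits differ,
# so work done against salt_a cannot be reused against salt_b
differing_bits = sum(bin(x ^ y).count("1") for x, y in zip(key_a, key_b))
```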

      > But I am not really fond of the salt part only being part of the final HKDF. I of course understand your arguments about being able to run the password stretching in parallel, and you do note that it has a “tiny cost in security”.

      Yeah, I wasn’t able to figure out a good way to explain what that “tiny cost” is. It shows up when you change your password, which creates a new set of salts. If you think of the old password and the new password as separate accounts, then ideally an attacker trying to guess either one should have to spend twice as much CPU time as if they were attacking just one of them. Because the salts that changed are folded in *after* the stretch, the attacker doesn’t spend twice as much: for each guess, they do the stretch, then fold in one salt and compare it to their oracle, then they rewind to the intermediate stretch output and fold in the other salt and compare.

      But I realized that it isn’t actually interesting to attack both old and new passwords at the same time, since guessing the old password doesn’t win you anything. So we don’t really need to make this expensive, and delaying the salt until after the stretch enables that parallelism speedup.
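
      Sketched out, that shared-stretch shortcut looks like this (the function names and the fixed quasi-salt are stand-ins for the real pipeline, not its actual construction):

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes) -> bytes:
    # minimal single-block HKDF (RFC 5869)
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

def slow_stretch(password: bytes) -> bytes:
    # stand-in for the expensive PBKDF2+scrypt stage (salt folded in later)
    return hashlib.pbkdf2_hmac("sha256", password, b"fixed-quasi-salt", 50_000)

old_salt, new_salt = b"\xaa" * 32, b"\xbb" * 32  # pre/post password-change salts

# The "tiny cost": one expensive stretch per guess tests BOTH salted variants,
# because the salts are folded in cheaply after the stretch
for guess in (b"password1", b"password2"):
    mid = slow_stretch(guess)                    # paid once per guess
    candidate_old = hkdf_sha256(mid, old_salt, b"picl/srpPassword")
    candidate_new = hkdf_sha256(mid, new_salt, b"picl/srpPassword")
    # ... compare each candidate against the respective stolen verifier ...
```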

      > I suggest staying on the beaten path and having the mainSalt as input into the initial KDF. This could be optimized in most cases by using the statefulness of the browser to store the salt in a client-side cookie.

      Ah, see, if we could store the salt, we could also store the derived keys and session tokens. In fact that’s what we’ll do: you sign in once, and your browser retains everything it needs to keep syncing until you explicitly revoke access.

      In other words, the “cold start” case is the only interesting one, because you only sign in once per browser (until you explicitly sign out of something, but I don’t think that will happen frequently enough to justify slowing down the initial sign-in case).

      > Instead of the /auth/start key-stretching parallelism criticized above, the client could do a parallel calculation of A, if I am right.

      I think that’s right, “A” can be computed without knowing any salts. You do have to know the group modulus (N) and the group generator (g), but those will probably be baked in. I don’t think that wins you much, though, since computing “A” should be way faster than key-stretching (it’s just math, as opposed to intentionally-slow math).
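
      In code, that early computation is small (toy modulus shown here; the real client would bake in a standard, vetted 2048-bit group):

```python
import secrets

# Toy modulus for illustration only
N = 2**255 - 19
g = 2

a = secrets.randbelow(2**256 - 1) + 1   # client ephemeral secret, in [1, 2^256-1]
A = pow(g, a, N)                        # needs only g and N, no salts, so it can
                                        # overlap the /auth/start round-trip
```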

      > You use two different salts (mainSalt and srpSalt).

      > But with the number of users you estimate for the system I assume that it makes quite some difference to store, handle, transmit to the client and process an additional unnecessary blob of salt data for each user.

      Switching from two salts to one would save us about 32 bytes per account. Since we’re already storing the 2048-bit (256-byte) SRP Verifier for each one, in addition to some other values (kA, wrapkB), I think it’d only save us about 5% (and that’s before considering all the actual bookmark/tab/etc Sync data we’ll be storing, which is closer to megabytes). My first design didn’t bother with a distinct salt here, since it wasn’t necessary, but my coworkers convinced me that it was worth spending that 5% (and the specification complexity to define it) to make people feel better about the protocol.

      > 5) Good question you raise on how the “a” and “b” should be generated in the SRP Notes section. Unfortunately I am unable to answer it :-( (but I would like to know the answer).

      I’ve had a couple people tell me that [1..2^256-1] is fine. I was assuming it should be [1..2^2048-1] (more specifically [1..N-1]), but I haven’t been able to find a reference to support that. My plan is to go with [1..2^256-1] and continue searching the literature for a concrete recommendation (and justification).

      > 6) Missing check for minimum size of random numbers a and b
      > a, b > log_g (N)
      > I cannot see these checks in your flow diagrams.

      Nope, those checks aren’t there. I haven’t yet decided what to think about their advice here. I think that same document recommends testing “u” to make sure it isn’t 0. But “u” is the output of a hash function, which means the chance that the test would be exercised is 2^-256, significantly below the hardware failure rate of the computers that will run this.

      For our values of “g” and “N”, the threshold will be about 2048. The chances of a 256-bit number being less than 2048 are about 2^-245, which is also stupendously low. If an American (http://what-if.xkcd.com/2/) signed in once per day, then for every time their log_g(N) check failed, they could expect to be hit by lightning about 2.0e66 times first.

      (some of my opinions are formed by things like page 7 of http://ed25519.cr.yp.to/ed25519-20110926.pdf , the “weak keys” section, where djb argues against checking for bad events that are actually rare).

      Adding code which never gets exercised is risky all by itself (since it’s unlikely to be tested very well), especially if the attacker gets to control if/when it gets executed.

      But yeah, good thing to keep in mind. I’ll talk to some more folks about it and see if there’d be a good/safe way to handle the error, and whether these checks should be included.

      > or whether they should actually be strictly larger than 2^256.

      It would probably mean being strictly larger than 2048, since our generator is “2”, to make sure that g^a wraps around at least once. One way to do that would be to pick “a” from the range [2049..2^256-1]. Another way would be to pick a random 256-bit number and then just set the high-order bit to one (giving you 255 bits of entropy, but always in the right range).
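
      Both options are one-liners with Python’s secrets module, for instance:

```python
import secrets

def pick_a_uniform() -> int:
    # Option 1: uniform over [2049, 2^256 - 1]
    return 2049 + secrets.randbelow(2**256 - 2049)

def pick_a_highbit() -> int:
    # Option 2: force the top bit on (255 bits of entropy, always > 2^255)
    return secrets.randbits(256) | (1 << 255)
```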

      > 7) SRP resistance toward small subgroup attacks
      > as far as I can see it explicitly states that the described attack is not a problem in the practical world.

      Yeah, the most important thing is to make sure the A and B values are actual group elements. That’s what enables these attacks. I believe it’s less interesting in the Zp field (1..p-1) where you’ve got a generator that covers the entire number space, because every integer in that range is a group element. In that case, it’s enough to just rule out 0 (or any other multiple of N).

      As an example of the importance of this check, if A or B is 0, then all the security goes out the window (we just fixed this in https://github.com/mozilla/node-srp).
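
      The fix is a small guard on every received ephemeral value, along these lines:

```python
def check_ephemeral(value: int, N: int) -> int:
    # Reject A (or B) congruent to 0 mod N: such values collapse the shared
    # secret S to 0, so anyone could pass authentication without the password
    if value % N == 0:
        raise ValueError("invalid SRP ephemeral value")
    return value
```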

      > 8) Protection against discovery/harvesting of mail addresses/user accounts.

      Good question. On one hand, we don’t want attackers spamming the keyserver with email addresses to find out which ones are in use or not. On the other, returning a random/fake answer to non-existent accounts makes the user flow a lot harder: maybe they used the “sign in” box instead of the “sign up” box, or maybe they typed their email address wrong. A fake answer prevents the client from distinguishing these cases, which means we can’t give useful feedback to the user, making them think they typed their password wrong: very frustrating.

      I suspect the best we can do is to rate-limit queries from specific IP addresses, and pay attention to the overall bad-query rate to try and detect botnet-based attacks. Maybe add some rate-limiting or proof-of-work into the protocol for use when we’re under attack.

      > 9) Have you considered contacting Thomas Wu (the SRP inventor) for a review of the PiCL crypto specification.

      Oh, that’s a great idea, thanks! I’ll reach out to him for his thoughts.

      Thanks a bunch!
      -Brian

  6. Morten Andersen

    Hi Brian,

    Good counter arguments to all the minor questions I raised. You have convinced me on all but the salt issue (sorry for pushing that even further).

    You are of course right that one reason for salts is to stop the “economy of scale” in cracking a lot of passwords from a stolen password database.

    But I believe the randomness part is also there for a good reason. Although it is clear that the salt is not secret, it still increases the randomness of the input to the hash function in a way that stops rainbow/lookup-table attacks, something a short, non-random salt will not do. The problem is described much more clearly than I am able to under “Short Salt” here: https://crackstation.net/hashing-security.htm#ineffective

    Best Regards
    Morten

  7. […] so the data on the server was no good to them. I lost the link to Brian’s full slides but his blog post is here with many details about the new SRP based design. I thought it was especially noteworthy that most […]
