sha-512 follow-up and thank you

Chris Lyon


I made a statement in my previous post, SHA-512 w/ per Users Salts, about a “significant hit rate” when it comes to dictionary attacking hashes. That hit rate is what scares us, because we feel that not many people realize how easy it is to dictionary attack hashes, even ones with a large salt. It should be known that hashes alone are not meant to secure passwords. Additional steps, such as increased iterations and salts, are necessary to raise the cost of both offline brute-force attacks and pre-computed tables (rainbow tables). As I pointed out in my last post, most applications store the salt with the hash.
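To make those two steps concrete, here is a minimal sketch of salted, iterated SHA-512 in Python. The function name, iteration count, and salt size are illustrative assumptions, not Mozilla's actual scheme:

```python
import hashlib
import os

def iterated_sha512(password: str, salt: bytes, iterations: int = 10000) -> str:
    """Salted, iterated SHA-512 (an illustrative sketch, not a vetted scheme)."""
    digest = password.encode("utf-8")
    for _ in range(iterations):
        # Mixing the salt into every round defeats generic pre-computed
        # (rainbow) tables; the iteration count raises the per-guess cost.
        digest = hashlib.sha512(salt + digest).digest()
    return digest.hex()

# A random per-user salt, stored alongside the hash as most applications do.
salt = os.urandom(16)
stored = iterated_sha512("hunter2", salt)
```

Note that once the attacker has the salt (which, as above, is usually stored right next to the hash), the only remaining cost is the iteration count itself.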

So, on my quest to prove a point, and to show how easy it is to dictionary attack hashes, I designed a system where we could perform dictionary attacks under the strictest security possible. I didn’t want to use a public cloud, nor did I want to know the passwords. My first goal with this project was to get two metrics: how fast could I dictionary attack 1 million hashes, and what would the hit rate be.

The System

My first mission was to get a few systems for testing, and since there were plenty of old desktops and MacBook Pros around the office, I grabbed a few of them and started building. The first task was to build a client/server app that got a hash from the master database and then passed it to a worker. Once that was done, the local server needed a database for metrics, to keep track of timing and hit rates. The API that I wrote between the client and server was pretty simple: auth the request, request a hash, and ack that the client got the hash. The client also needs to be multi-threaded, which is pretty simple at this point. When a worker was done with a hash, it sent back a true/false for whether the password could be dictionaried and how long it took.
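The worker side of that flow (pull a hash, attempt the dictionary, report only a true/false plus timing) might look roughly like the sketch below. Every name here is hypothetical, since the actual API was never published, and a thread-safe queue stands in for the network calls:

```python
import queue
import threading
import time

def try_dictionary(job) -> bool:
    # Placeholder for the real dictionary-attack step; details omitted.
    return False

def worker(jobs: queue.Queue, results: list, lock: threading.Lock):
    """One worker thread: pull a hash job, try the dictionary, report back."""
    while True:
        try:
            job = jobs.get_nowait()        # stands in for "request a hash"
        except queue.Empty:
            return
        start = time.monotonic()
        cracked = try_dictionary(job)
        elapsed = time.monotonic() - start
        with lock:
            # Report only success/failure and timing -- never the password.
            results.append((job["id"], cracked, elapsed))
        jobs.task_done()

jobs = queue.Queue()
for i in range(8):
    jobs.put({"id": i, "hash": "…"})       # hashes would come from the server

results, lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(jobs, results, lock))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```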

Once this was built, and I am oversimplifying the how in this post, I started testing against sample hashes to get an idea of scale. I started off with just three worker machines, all over a year and a half old. I found that I could get an answer on any given hash in under 4 seconds. The dictionary I am using is my own, something I won’t release to the public (yet), but I will say it has 400,000 entries. I do have a more complete dictionary with over 10 million entries; it takes significantly more time to process but has a much higher hit rate.
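At its core, a dictionary attack is just the hashing step run once per candidate word, so the dictionary size sets the cost directly. A minimal sketch (a single SHA-512 round for brevity; the salt and wordlist here are made up):

```python
import hashlib

def dictionary_attack(target_hex: str, salt: bytes, wordlist) -> bool:
    """Try each candidate word; report only whether one matched."""
    for word in wordlist:
        if hashlib.sha512(salt + word.encode("utf-8")).hexdigest() == target_hex:
            return True
    return False

# Build a sample target hash, then attack it with a tiny wordlist.
salt = b"example-salt"
target = hashlib.sha512(salt + b"password1").hexdigest()
print(dictionary_attack(target, salt, ["letmein", "password1", "qwerty"]))  # True
```

Note the function returns only True/False, matching the post's point that a match can be detected and counted without ever recording the password.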


When I got the system tested and working, I was able to unleash it on 1 million hashes. The results were pretty surprising: I was able to completely process 1 million hashes in under 18 hours, using just three older machines, and got a 20% password hit rate. I do want to stress that we didn’t record the passwords, just whether we got a match.

Imagine if I had more machines, or even used EC2; I could cut that time down significantly. This is the biggest reason we are moving away from SHA-512 and towards HMAC with bcrypt.

Thank You

On a personal note, I did want to give one last “Thank you” to all the people in the community who I have had a chance to work with over the past 2 years. As many of you know, Friday June 3rd will be my last day at Mozilla, as I am moving on to new challenges. The infrastructure security group wasn’t here when I started, and I’m proud to say that it is now finding its footing and establishing itself as a “security enabler” for Mozilla and the community. The team that I am leaving behind is nothing short of top notch and will continue to be security enablers.

Once again, it has been a great ride and thank you all for your support.

Chris Lyon
Director of Infrastructure Security (Until June 3rd)
twitter: @cslyon

3 responses

  1. Mardeg wrote:

    Something interesting published today about easy-to-remember password padding to defeat brute-forcing as an alternative to completely random passwords is in my name’s link.

  2. Joris wrote:

    What is the advantage of using bcrypt for passwords over a sha-512 hash that hashes itself a few thousand times (using the salt each iteration)?
    10,000 iterations take about 0.2 seconds on my machine.

  3. Michael Coates wrote:

    By design, hashing algorithms are built for speed. Although you can slow down the overall processing time by repeating the hashing algorithm, you’re putting yourself in a tough spot moving forward: as processing power continues to increase, the time required to process multiple iterations of the hash will quickly decrease.

    Bcrypt was designed with an easy configuration option that allows us to increase the number of iterations performed – essentially letting us raise the computational requirements whenever desired. Also, bcrypt uses the encryption algorithm Blowfish, which is not terribly slow in itself, but is also not a hashing algorithm designed with maximizing speed in mind.

    Multiple sha-512 iterations are better than a single sha-512 hash; however, we feel that bcrypt is a more flexible solution that better accomplishes our goal.
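The configurable work factor described in the reply above can be sketched with Python's standard library. Since bcrypt itself is not in the stdlib, PBKDF2-HMAC stands in for it here; the point is the log2-style cost parameter, where each +1 doubles the work, not the specific algorithm:

```python
import hashlib
import os

def slow_hash(password: bytes, salt: bytes, cost: int = 12) -> bytes:
    # bcrypt-style log2 cost: each +1 to `cost` doubles the iteration count,
    # so the parameter can be raised as hardware gets faster.
    return hashlib.pbkdf2_hmac("sha512", password, salt, 2 ** cost)

salt = os.urandom(16)
digest = slow_hash(b"hunter2", salt, cost=12)   # 4096 iterations
```

Raising `cost` by one later on doubles the attacker's per-guess work without any change to the surrounding code, which is exactly the flexibility being argued for.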