
Measure What Matters – The SEC Essentials

People want to know that they are safe when they browse the web. There are important differences between browsers when it comes to security, and so it’s no surprise to see a growing number of groups out there attempting to compare browsers based on their security record. That’s great news; not only does it help inform users, but it also lets browser authors know where they stand, and where they can improve.

The thing to watch when you’re measuring software security, though, is that you’re measuring the things that matter. We’ve talked about this before, but it bears repeating: if you measure the wrong things, you encourage vendors to game the system instead of actually making things better.

What Makes A Good Security Metric?

There isn’t a single statistic you can gather that will give you a complete picture of security. Any robust security metrics model will need to take into account multiple factors. Nevertheless, there are three essential elements that should underlie any well-designed model. We call them the SEC essentials:

Severity: A good measurement model will put more emphasis on severe, automatically exploitable bugs than it does on nuisance bugs or ones that require users to cooperate extensively with their attacker. Measuring severity encourages vendors to fix the right bugs first, not to pad their numbers with minor fixes while major vulnerabilities languish.

Exposure Window: It’s not very informative to count the absolute number of bugs, but it is very important to know how long each bug puts users at risk. Measuring exposure window encourages vendors to fix holes quickly, and to get those fixes out to users.

Complete Disclosure: The other measurements you compile are almost meaningless if you can’t see all the fixed bugs. Some vendors only disclose flaws found by outside sources, concealing those discovered by their internal security teams to keep their bug numbers down. Measuring only externally-discovered vulnerabilities rewards vendors who are purely reactive and, worse, it fails to credit vendors who develop strong internal security teams. Those teams often find the majority of security bugs; it’s important that any security metric recognizes and encourages that work.
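To make the three essentials concrete, here is a toy sketch of how they might combine in code. This is purely illustrative — the post deliberately proposes no formula, and the record fields, severity weights, and function names below are all assumptions for the example:

```python
from dataclasses import dataclass

# Hypothetical weights; any real model would need to justify these numbers.
SEVERITY_WEIGHT = {"critical": 10, "high": 5, "moderate": 2, "low": 1}

@dataclass
class Fix:
    severity: str           # "critical" | "high" | "moderate" | "low"
    days_exposed: int       # days users were at risk before the fix shipped
    found_internally: bool  # True if the vendor's own team found the bug

def exposure_score(fixes):
    """Severity-weighted total exposure: lower is better.

    One critical bug open for two days outweighs several low-severity bugs
    fixed the same day -- the point of weighting Severity by Exposure Window
    instead of counting raw bugs.
    """
    return sum(SEVERITY_WEIGHT[f.severity] * f.days_exposed for f in fixes)

def discloses_internal_finds(fixes):
    """Crude Complete Disclosure check: does the data set contain any
    internally found bugs at all? A vendor reporting only external finds
    is probably showing an incomplete picture."""
    return any(f.found_internally for f in fixes)
```

A data set scoring well on `exposure_score` but failing `discloses_internal_finds` would be exactly the kind of gamed number the post warns about.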

What’s The Solution?

If it were easy to find a calculation that included all of this information in a universal way, we’d be using it. When we wrote about our metrics project last year, it was with the aim of developing these ideas, and of changing the tone of the discussion.

If the work there has taught us anything, it is that this will not happen overnight. The first step, though, is being clear about what we should expect from any assessment of security. If it doesn’t focus on the three SEC essentials (Severity, Exposure Window, and Complete Disclosure), ask yourself why not. And then ask the people doing the measuring.

Johnathan Nightingale
Human Shield

13 comments on “Measure What Matters – The SEC Essentials”

  1. Josh wrote on

    Even the things you point out aren’t very cut and dried. Exposure Window, for example, is a very mercurial quantity. Is it better to rush a mitigating patch now that makes it harder, but still possible, to exploit, or better to wait for a complete fix?

    Should you focus on getting a patch for that particular problem done quickly, or should you wait until you can do a root cause analysis and fix similar flaws (it’s a well-established anecdote that if you made the mistake once, you likely made it more than once)? Sure, you can protect the customer from one specific instance of that flaw, but if it exists elsewhere the black hats now know what to look for and will likely find the other flaws very quickly. Is it better to pump out many small fixes rather than one thorough, but slower, fix? Aren’t you increasing the odds of customers being behind the patch curve?

    Does knowing that I was only exposed for two days mean I should feel better as the customer? At what point do you focus so much on time to patch that you are skimping on the software process, delivering poorer quality, less tested code?

    And all of this comes down to the problem that you are looking for a metric, a magic number, to make a comparison. The reason that no one has found a good metric is because it is a fairy tale. Having done *many* vendor evaluations, I don’t focus on their patch count and time to fix (though I do look at those numbers since some info can be inferred from them). I ask them about their secure development lifecycle, their response plans, and how they have baked in layers of mitigations so that when a flaw is found the severity is lower. I want to know how they limit vulnerabilities up front, how they limit the impact by secure design when a vulnerability is found (which does touch on your severity, but I don’t rely on patch info to demonstrate that), and how they responsibly respond to the vulnerability (which does touch on exposure window and disclosure, but I would far rather hear the words “Root Cause Analysis” rather than “fixed in 12 hours” in the discussion). I don’t look for some magic metric, I look for a mature and thought-out security posture.

  2. Andy Steingruebl wrote on


    Who is the audience for this metric and what is its purpose?

    These metrics or factors make sense for disclosed vulnerabilities, but aren’t very meaningful for defects in code that hasn’t been released.

    I understand the battle going back and forth between Mozilla and Microsoft about which browser had the shorter exposure window, etc. This of course doesn’t tell us much about non-public vulnerabilities, nor does it tell us much about the threat landscape and how likely a user with a given browser at a given patch/defense level is to get compromised.

    If you can, please share your thoughts on what you’re trying to measure.


  3. Jesse Ruderman wrote on

    This is a good framework for analyzing the past, but when you’re choosing a product, you really want to predict future exposure. Combine past experience (SEC) with:

    * “U” for updates. How quickly do we actually get users updated? (This can be impacted negatively by rushing out a busted security fix!)

    * “R” for root cause analysis. Did we fix multiple levels of badness (five whys) or just spot-fix the first wrongness we saw?

    * “I” for internal testing. Are we competent at finding bugs on our own, e.g. through fuzz-testing and defensive static analysis? Do we tweak our testing appropriately based on external finds?

    * “T” for transparency. Do security researchers understand any delays in security fixes, so they won’t feel tempted to go straight to public disclosure next time they find a flaw?

    … and you have “SECURIT”. Anyone want to suggest a “Y”? 😉

  4. Josh wrote on


    “Y”ear over Year improvement?

  5. gandalf wrote on

    Y – “You attitude” – bugs that are putting me at risk should be more important -> bugs that are only exploitable on SPARC machines or the Amiga should count for less than those for Windows XP

  6. B.J. Herbison wrote on

    I find Mozilla lacking in the disclosure area. In particular, take a look at — the last two Firefox security patch releases aren’t even mentioned. The top of the page says “we understand the importance of security”, but the lack of updates says “we don’t think security is very important”.

    (I’ve reported lags in updating that page many times over several years. It’s hard to find contacts, or at least contacts that will respond to e-mail. The page needs someone to take ownership.)

  7. Tristan wrote on

    Hey Johnathan, I’ve whipped up a translation in French of this post. It’s located here:

    You’ve written an excellent post that deserves more eyeballs!

  8. Tom wrote on

    If your security is so good, how come I can’t stop google analytics, omniture and the other ad surveys from popping up on my computer when I am in Firefox? I don’t have this happen with Safari.

  9. Pseudonymous Coward wrote on

    In the light of the recent NoScript/Adblock Plus controversy, I think Mozilla Security should focus its attention on the questionable security model of its add-ons mechanism. Suggestions:

    1. A strong Javascript sandbox.
    2. Why on earth do extensions have such raw power in Firefox? We need a strong add-ons sandbox too.

  10. Fill wrote on

    I use NoScript and FlashBlock. I stopped using Adblock Plus because the add-on kept crashing Firefox. Maybe it’s only broken for me =) But it’s a very cool add-on.

  11. Bill Mitchell wrote on

    Can I remove IE from my system? I run latest Firefox/w, XP Pro SP3. Still get MS updates? Going blind so need Zoom Text, built for IE. I got it to work with Firefox! A vet hacking my way in the dark. Still have a need for speed. Thanks for your valued time.

  12. Paul_Bags wrote on

    I’ve been getting constant trojans for the last month or so while browsing with firefox. It is the only program running at the time, and the only possible source of these instances of malicious code running on my machine. It occurs after restoring from backup hard drive images, as well as complete, clean, reinstalls.

    In my eyes both the quality and security of Firefox have been declining for a long time, and I am seriously considering ditching it for something else. However I still prefer the Firefox interface, I’m comfortable with it, and I hold out hope for a return to the brilliance, stability, and security once inherent to Firefox.

  13. Daniel Veditz wrote on

    Paul: sorry you keep getting infected, but are you sure it’s Firefox? We are aware of no “in-the-wild” exploits that affect recent versions of Firefox 3, and it’s not just our small team looking; we are also contacted by internet security firms and researchers when they come across new attacks. What we do see is a lot of attacks on old plugins, since their ubiquity makes them profitable multi-browser targets.

    Please make sure your plugins are updated to the most recent version from their respective vendors and see if that helps. And don’t forget to check external document viewers like Adobe Reader and Microsoft Word.

    If you’re still getting infected after that I would love to get a copy of your browser history if you’re willing to share it. If so contact us at