People want to know that they are safe when they browse the web. There are important differences between browsers when it comes to security, and so it’s no surprise to see a growing number of groups out there attempting to compare browsers based on their security record. That’s great news; not only does it help inform users, but it also lets browser authors know where they stand, and where they can improve.
The thing to watch when you’re measuring software security, though, is that you’re measuring the things that matter. We’ve talked about this before, but it bears repeating: if you measure the wrong things, you encourage vendors to game the system instead of actually making things better.
What Makes A Good Security Metric?
There isn’t a single statistic you can gather that will give you a complete picture of security. Any robust security metrics model needs to take multiple factors into account. Nevertheless, three essential elements should underlie any well-designed model. We call them the SEC essentials:
Severity: A good measurement model puts more emphasis on severe, automatically exploitable bugs than on nuisance bugs or on ones that require users to cooperate extensively with their attacker. Measuring severity encourages vendors to fix the right bugs first, rather than padding their numbers with minor fixes while major vulnerabilities languish.
Exposure Window: Counting the absolute number of bugs is not very informative, but knowing how long each bug put users at risk is. Measuring the exposure window encourages vendors to fix holes quickly, and to get those fixes out to users.
Complete Disclosure: The other measurements you compile are almost meaningless if you can’t see all the fixed bugs. Some vendors disclose only the flaws found by outside sources, concealing those discovered by their internal security teams to keep their bug counts down. Measuring only externally discovered vulnerabilities rewards vendors who are purely reactive and, worse, fails to credit vendors who build strong internal security teams. Those teams often find the majority of security bugs, and any security metric should recognize and encourage that work.
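To make the three essentials concrete, here is a minimal sketch of how a per-vendor score might combine them. Everything in it is a hypothetical illustration, not a metric proposed by anyone: the severity weights, the field names, and the sample data are all assumptions, and the comment about under-reporting shows why complete disclosure matters to any formula like this.

```python
# Hypothetical sketch only: score fixed vulnerabilities by severity and
# exposure window. The weights and the sample data are illustrative
# assumptions, not a real browser-security metric.

SEVERITY_WEIGHT = {"critical": 10, "high": 5, "moderate": 2, "low": 1}

def risk_score(bugs):
    """Sum severity-weighted exposure days across all disclosed bugs.

    Each bug is a dict with a 'severity' label and the number of
    'days_exposed' before a fix shipped. Note the dependence on
    complete disclosure: bugs a vendor never reports simply never
    enter this sum, so partial disclosure makes a vendor look safer.
    """
    return sum(SEVERITY_WEIGHT[b["severity"]] * b["days_exposed"]
               for b in bugs)

bugs = [
    {"severity": "critical", "days_exposed": 14},  # serious bug, fixed fast
    {"severity": "low", "days_exposed": 90},       # nuisance bug, slow fix
]
print(risk_score(bugs))  # 10*14 + 1*90 = 230
```

The point of the weighting is that one lingering critical bug dominates many slow-to-fix nuisance bugs, which is exactly the incentive the Severity and Exposure Window essentials are meant to create.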
What’s The Solution?
If it were easy to find a calculation that captured all of this information in a universal way, we’d be using it. When we wrote about our metrics project last year, it was with the aim of developing these ideas and changing the tone of the discussion.
If the work there has taught us anything, it is that this will not happen overnight. The first step, though, is being clear about what we should expect from any assessment of security. If it doesn’t focus on the three SEC essentials (Severity, Exposure Window, and Complete Disclosure), ask yourself why not. And then ask the people doing the measuring.