Evolution of Software Security – Predictions for 2020

Attackers will become increasingly efficient at discovering and exploiting vulnerabilities, even as application developers continue to try to reduce the attack surface. This has several implications:

  • Attackers will depend less on random manual testing to find vulnerabilities.  Instead, they will find new, lazy yet creative ways of discovering them: mining public crash reports, bug repositories and other public sources of information for clues to potential issues; spearphishing developers, corporations and other individuals who hold sensitive security information (to steal security bug details or gain/elevate access privileges to source code); and utilizing off-the-shelf security software to analyze potential targets.  Call it laziness; I call it efficiency.
  • Attackers will become increasingly efficient at deploying exploits, putting serious pressure on software vendors to compress release and update cycles.  The acceptable update window is already shrinking from 30 days or more down to 24 hours, and it will come under even more severe pressure.  By 2020 I expect acceptable update windows to be measured in an hour or two, and likely even in minutes for high-profile target applications.
  • Focus will shift away from bug counting as a useful metric, towards actual exposure risk.  Something like the number of open security bugs multiplied by the average window of time from bug discovery to when the fix has been deployed to 80% of the user base (just a hypothetical example; the real metrics will likely be more complicated — see the sketch after this list).  This would require vendors to agree on common metrics and severity ratings, and to become far more transparent and willing to share more information than they have been thus far, so perhaps it's not a particularly realistic prediction. :)
  • Software companies will hopefully become more effective at putting security into context with other business objectives.  While this seems like an obvious thing to do, too many companies treat security as practically an aspect of PR, rather than serious engineering work that requires tradeoffs in other areas of product development.
  • Valuable information will continue migrating up the stack; so will valuable exploits.  Much has been made of process isolation / sandboxing technologies, and they do help.  However, as more critical information is stored on the web than on local systems, exploits that execute with just “content” privileges (i.e. the context they run within has access to the network and credentials/cookies, but not the filesystem or other critical OS resources) will be considered “good enough.”  Expect to see more investment in exploit frameworks that focus on weaponizing information-stealing exploits running within limited-privilege processes.
  • Fuzzing will become an increasingly commoditized technology and skill set, so software vendors should not become complacent and assume technical superiority.
  • Software companies that rely on “checklist security” processes and talking heads rather than deep technical security competence will suffer terribly as the sophistication of attackers ramps up, and their internal processes and teams cannot keep pace.
  • Deployment of exploits will become sophisticated to the point that attackers will have a quiver of exploits they selectively deploy against specific application versions, serving them only against high-value targets.  This means software vendors need to fix issues quickly; they cannot afford to sit on bugs they know about, because the first indication that an issue has been externally discovered will likely be its use in a high-profile, targeted attack.
  • This will increase the value of zero-day exploits, as they provide first-mover advantage against sophisticated and well-defended targets. These exploits will rarely be wasted on the more common “shotgun” exploit economy out there that shoots at anything that moves (for the purposes of building botnets for fun and profit, stealing email and WoW accounts, individual bank accounts, etc.).  That latter “exploit mass market” will focus increasingly on high-volume exploitation of known issues in applications and platforms with slow update uptake rates, while niche players will focus on zero-days for international and corporate espionage.
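
To make the exposure-risk idea concrete, here is a minimal sketch (in Python) of the hypothetical metric from the third bullet: the number of open security bugs multiplied by the average window from discovery until the fix reaches 80% of the user base. The SecurityBug class, the exposure_risk function and the 80%-deployment field are illustrative assumptions of mine, not an established or vendor-agreed metric.

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class SecurityBug:
    # Hypothetical record; real trackers would carry far more detail.
    discovered: date
    fix_deployed_to_80pct: Optional[date] = None  # None -> fix not yet at 80% of users

def exposure_risk(bugs: List[SecurityBug], today: date) -> float:
    """Hypothetical exposure-risk score: open security bugs multiplied by the
    average window (in days) from discovery until the fix reached 80% of users.
    Bugs without a widely deployed fix accrue exposure up to 'today'."""
    if not bugs:
        return 0.0
    open_bugs = sum(1 for b in bugs if b.fix_deployed_to_80pct is None)
    windows = [((b.fix_deployed_to_80pct or today) - b.discovered).days for b in bugs]
    avg_window_days = sum(windows) / len(windows)
    return open_bugs * avg_window_days

# Example: two fixed bugs (30-day and 10-day windows) and one still-open bug.
bugs = [
    SecurityBug(date(2010, 1, 4), date(2010, 2, 3)),
    SecurityBug(date(2010, 3, 1), date(2010, 3, 11)),
    SecurityBug(date(2010, 4, 1)),
]
print(exposure_risk(bugs, today=date(2010, 5, 1)))  # 1 open bug * ~23.3-day average window

In practice the inputs (especially the 80%-deployment date) would come from update-uptake telemetry that vendors rarely publish, which is exactly why this prediction hinges on far greater transparency.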

The above pontifications are purely my own opinions and are likely neither representative of nor shared by others.


2 Responses to Evolution of Software Security – Predictions for 2020

  1. Lucas Adamski says:

    I don’t think bug counting has ever been a good metric, but it’s the one most of the comparative studies have unfortunately focused on. Even well-known security research companies still fall into this trap.

    There are definitely a number of vulnerability classification models out there. I’m concerned about the degree of complexity and subjectivity inherent in most of them. One issue is precision: if a set of different people rate a given vulnerability, will they all come up with the same (or at least a similar) rating? The more complex and subjective the rating model is, the lower the precision, IMHO.

    In my experience, trying to rate discoverability and exploitability is at best a discussion rathole and at worst plain wrong. In essence, you are trying to divine the motivation level and technical abilities of an unknown set of potential attackers. The Flash Player / Mark Dowd issue (CVE-2007-0071) from a few years back is the textbook example: a reasonable individual would have rated discoverability as low and exploitability as highly unlikely. It took only one smart and motivated guy to prove that wrong, and the result was one of the most widespread Flash Player exploits we have seen.

    At Mozilla we use the following bug ratings: https://wiki.mozilla.org/Security_Severity_Ratings

    The ratings are fairly simple but they have the valuable property that anyone who understands the bug can usually assign it an appropriate severity rating.

    But these are totally different issues from metrics. I think the best approach is to focus on the actual risk presented to users, which is a function of time to fix, time to update, and probably a few other inputs. As you mentioned, it’s important that metrics do not reward negative behaviors like hiding bugs, sitting on fixes, or waiting for obsolescence.

  2. “Focus away from bug counting as a useful metric, towards actual exposure risk. Something like number of open security bugs multiplied by average window of time from bug discovery to when the fix has been deployed to 80% of the user base [...]”

    Has bug counting ever been a useful metric, except in measuring the efficiency of your tools and processes? Unfortunately, none of these metrics are ever public, except in open source. You will never know the number of open security bugs, especially as more and more of them are found internally rather than by third parties, and therefore never get reported publicly before the final patch release (if even then).

    Risk exposure metrics, on the other hand, will become some of the most important metrics. Vendors are already using them to prioritize the order in which bugs get fixed, especially if they are new to fuzzing and suddenly faced with hundreds and hundreds of bugs in their product. I am not sure how useful these are externally, outside the product security teams. To me, any metric that can give an excuse not to install a security fix is a bad metric, and that is how they would be used.

    Some fuzzers (and static analyzers) already push these metrics to software vendors. Several projects, e.g. by Mitre and Cigital, are building new metrics like these. We have a small (but rather old) news item on this here:

    http://www.codenomicon.com/news/news/2009-09-16.shtml

    The latest metric was covered here in our newsletter:

    http://www.codenomicon.com/news/newsletter/archive/2009-12.shtml

    Also, there should be something new coming out on this topic rather shortly.
