
The Bleed Goes On

Some people are surprised that Heartbleed is still out there, three years on, as you can read here. What this illustrates are two important truths of security, depending on whether you see the glass as half full or half empty.

One perspective is that, once again, we know what to do, but failed to do it.  Heartbleed is well understood, and directly patchable.  Why haven’t we eradicated it by now? The problem is that the Internet is big. Calling the Internet an “organization” would be a stretch – it’s larger, more diverse, and harder to control than any one organization.  But if you’ve tried to manage vulnerabilities at any normal organization – even a global-scale one – you have a pretty good idea of how hard it is to eliminate any one thing. It’s like Zeno’s Paradox – when you set out to eradicate any one problem, you can fix half the instances in a short period of time. The trouble is that it takes about that long again to fix half of what remains, and that long again for the half after that.

Once you’ve dealt with the easy stuff – well-known machines, with well-documented purposes and a friendly owner in IT – it gets hard fast, for reasons ranging from the political to the technical.  You can reduce the prevalence of a problem very quickly, but eradicating it takes near-infinite time.  And the problem, of course, is that attackers will find whatever you miss – they can use automation to track down every remaining defect.  (That’s how researchers found there is still a lot of Heartbleed out there.)  Any one instance you miss might open up access to far more important parts of your organization.  It’s a chilling prospect, and it’s fundamental to the unfair fight in security – attackers only need one way in, while defenders need to cover every possible path.
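To put rough numbers on that Zeno-like decay, here is a small, purely illustrative Python sketch (the starting count, fix rate, and threshold are invented for the example, not real Heartbleed telemetry) showing how fast the easy half goes away and how long the tail lingers:

# Illustrative only: made-up numbers, not real Heartbleed telemetry.
# Assume each remediation cycle fixes half of whatever vulnerable instances remain.
vulnerable = 200_000   # hypothetical starting population of vulnerable hosts
cycle = 0
while vulnerable > 100:    # arbitrary "good enough" threshold
    vulnerable //= 2       # each cycle halves what is left
    cycle += 1
    print(f"after cycle {cycle}: {vulnerable:,} instances remain")

Each pass through the loop represents roughly the same remediation effort, yet the payoff keeps shrinking; that is the Zeno-like dynamic at work.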

To flip to the positive perspective, perhaps the remaining Heartbleed instances are not important – that is, it’s possible that we prioritized well as a community, and only left the unimportant instances dangling for all this time.  I know first-hand that major banks and critical infrastructure companies scrambled to stamp out Heartbleed from their critical servers as fast as they could – it was impressive.  So perhaps we fixed the most important gaps first, and left until later any assets that are too hard to reach, or better yet, have no useful access to anything else after they are compromised.  This would be great if it were true.  The question is, how would we know?

The answer is obvious – we’d need to assess each instance, in context, to understand which must be fixed now and which can be deferred until later, or perhaps until after we move on to the next fire drill, and the fire drill after that. The security game is a never-ending arms race, so we always have to be responsive and dynamic as the rules change.  How would we ever know whether the work we deferred during last quarter’s crises is more or less important than this quarter’s?  Only automated prioritization of all your defensive gaps can tell you.
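As a deliberately simplified illustration of what that kind of automated prioritization could look like, here is a hedged Python sketch; the fields, weights, and example assets are all hypothetical, not a description of any particular tool:

from dataclasses import dataclass

@dataclass
class Gap:
    host: str
    severity: float      # 0-10, e.g. a CVSS-style score for the defect itself
    asset_value: float   # 0-10, how much the business depends on this asset
    reachable: bool      # can an attacker actually reach it from untrusted networks?

def priority(gap: Gap) -> float:
    """Rank gaps by defect severity and asset value, discounting unreachable ones."""
    exposure = 1.0 if gap.reachable else 0.2   # hypothetical weighting
    return gap.severity * gap.asset_value * exposure

# Hypothetical inventory: an internet-facing payment server and a forgotten
# lab box, both carrying the same unpatched defect.
gaps = [
    Gap("payments-web-01", severity=7.5, asset_value=9.0, reachable=True),
    Gap("lab-test-17", severity=7.5, asset_value=2.0, reachable=False),
]

for gap in sorted(gaps, key=priority, reverse=True):
    print(f"{gap.host}: priority {priority(gap):.1f}")

The point is the ordering, not the numbers: the same defect scores very differently depending on where it sits, and that context is exactly what an automated assessment has to supply.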

Project Zero – A Smarter Way Forward

Google’s move to set up Project Zero is very welcome.  The infrastructure on which we run our businesses and our lives is showing its fragile nature as each new, successful attack is disclosed.  Unfortunately, we all share significant risks, not least because IT tends towards “monoculture”, with only a few major pieces of hardware and software being used most of the time.  Organizations use the common equipment because it’s cheaper, because it’s better understood by staff, and because we all tend to do what we see our neighbors doing.  These upsides come at a cost, though – a single defect can open thousands or even millions of doors, as we recently saw with Heartbleed.  This situation isn’t likely to change soon, so it’s welcome news whenever there are more eyes on the problem, trying to find and disclose defects before attackers do.

Attacks proliferate rapidly – very rapidly, in what has become a robust market for newly discovered, highly effective vulnerabilities.  As they do, it has become crystal clear that traditional passive, reactive methods of defense are insufficient. Google’s investment underscores the critical importance of proactive analysis of potential attack vectors. Any organization that is not building out its defenses, from proactive analysis through reactive response, is leaving the door open to attacks. Defenders need ways to automate – to pick up each discovery as the “good guys” find it, so they can assess their own risk and keep up with remediation. Recent incidents like Code Spaces and Target make clear that the health of enterprises and the careers of their executives are at stake; simply expecting defenses to hold, without some way to automate validation, is not tenable.  Hope is not a strategy.