Micro-Segmentation: Good or bad?

COMPUTING | 27 September 2016

Mike Lloyd, CTO at RedSeal, argues that more granular control is a good thing, but it’s easier said than done.

There’s a lot going on in virtual data centres. In security, we’re hearing many variations of the term “micro-segmentation”. (It originated from VMware, but has been adopted by other players, some of them adding spin.)

Yahoo-Verizon Deal May Be Complicated By Historic Hack

CNBC | September 22, 2016

Yahoo faces fallout from lawmakers, users and even Verizon following what could be the biggest data breach in history.

Micro-Segmentation: Good or Bad?

There’s a lot going on in virtual data centers. In security, we’re hearing many variations of the term “micro-segmentation.” (It originated from VMware, but has been adopted by other players, some of them adding top-spin or over-spin.)

We know what segmentation is. Every enterprise network practices segmentation between outside and inside, at least. Most aim to have a degree of internal segmentation, but I see a lot more planning than doing — unless an audit is on the line. Many networks have a degree of segmentation around the assets that auditors pay attention to, such as patient records and credit cards. There are organizations further up the security sophistication curve who have a solid zone-based division of their business, can articulate what each zone does and what should go on between them, and have a degree – at least some degree – of enforcement of inter-zone access. But these tend to be large, complex companies, so each zone tends to be quite large. It’s simple math – if you try to track N zones, you have to think about N² different relationships. That number goes up fast. Even well-staffed teams struggle to keep up with just a dozen major zones in a single network. That may not sound like a lot, but the typical access open between any two zones can easily exceed half a million communicating pairs. Auditing even one of those in full depth is a super-human feat.
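To make the scale concrete, here is a back-of-the-envelope sketch in Python. The zone and host counts are illustrative assumptions, not figures from any particular network.

```python
# Illustrative arithmetic only -- zone and host counts are assumptions.
zones = 12                                   # "just a dozen major zones"
zone_relationships = zones ** 2              # every zone paired with every zone
print(zone_relationships)                    # 144 relationships to reason about

hosts_per_zone = 800                         # assumed size of a typical zone
pairs_between_two_zones = hosts_per_zone ** 2
print(pairs_between_two_zones)               # 640,000 communicating pairs between
                                             # just one pair of zones
```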

Now along come the two horses pulling today’s IT chariot: the virtual data center and the software defined network. These offer more segmentation, with finer control, all the way down to the workload (or even lower, depending on which marketing teams you believe). This sounds great – who wouldn’t want super-fine controls? Nobody believes the perimeter-only model is working out any more, so more control must be better, right? But in practice, if you just throw this technology onto the existing stack without a plan for scaling, it’s not going to work out.

If you start with a complex challenge that is already hard to manage, and you respond by breaking it into ever smaller pieces, spread out in more places, you can rapidly end up like Mickey Mouse in The Sorcerer’s Apprentice, madly splitting brooms until he’s overrun.

Is it hopeless? Certainly not. The issue is scale. More segmentation, in faster-moving infrastructure, takes a problem that was already tough for human teams and makes it harder. But this happens to be the kind of problem that computers are very good at. The trick is to realize that you need to separate the objective – what you want to allow in your network – from the implementation, whether that’s a legacy firewall or a fancy new GUI for managing policy for virtual workloads. (In the real world, that’s not an either/or – it’s a both, since you have to coordinate your virtual workload protections with your wider network, which stubbornly refuses to go away just because it’s not software defined.)

That is, if you can describe what you want your network to do, you can get a big win. Just separate your goals from the specific implementation – record the intention in general terms, for example, in the zone-to-zone relationships of the units of your business. Then you can use automation software to check that this is actually what the network is set up to do. Computers don’t get tired – they just don’t know enough about your business or your adversaries to write the rules for you. (I wouldn’t trust software to figure out how an organism like a business works, and I certainly wouldn’t expect it to out-fox an adversary. If we can’t even make software that can pass a Turing test, how could an algorithm understand social engineering – still a mainstay of modern malware?)
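As a minimal sketch of what that separation can look like, the following Python fragment checks discovered zone-to-zone access against a stated intent. The zone names, rule format, and data are invented for illustration; they are not any vendor's actual policy model.

```python
# Intended zone-to-zone policy, stated in business terms.
# Anything not explicitly allowed is treated as a violation.
INTENDED_ALLOWED = {
    ("web", "app"),
    ("app", "db"),
}

# Zone-to-zone access actually discovered in the infrastructure, e.g. by
# parsing firewall rules, security groups, and virtual-workload policy.
DISCOVERED = [
    ("web", "app"),
    ("app", "db"),
    ("web", "db"),   # a path nobody intended
]

def violations(intended, discovered):
    """Return discovered zone-to-zone paths that the intent does not allow."""
    return [pair for pair in discovered if pair not in intended]

print(violations(INTENDED_ALLOWED, DISCOVERED))   # [('web', 'db')]
```

The point of the sketch is the shape, not the ten lines of code: the intent lives in one place, the discovered reality in another, and a machine does the tireless comparison.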

So I’m not saying micro-segmentation is a bad thing. That’s a bit like asking whether water is a bad thing – used correctly, it’s great, but it’s important not to drown. Here, learning to swim isn’t about the latest silver bullet feature of a competitive security offering – it’s about figuring out how all your infrastructure works together, and whether it’s giving the business what’s needed without exposing too much attack surface.

Financial Sector Offers Model for Cybersecurity Sharing

SIGNAL | September 15, 2016

By J. Wayne Lloyd

When it comes to cybersecurity, I have heard many people express consternation and wonderment as to why the government cannot protect the Internet. It boils down to two things: no authorization, and the fact that officials have visibility only into the scant number of networks under their own control.

To Maintain Democracy, Digital Networks Must Be Improved

ThirdCertainty | September 13, 2016

Automation, segmentation and continuous oversight of voting systems will strengthen trust in government

By Ray Rothrock, RedSeal CEO

As the presidential election enters its home stretch, the Democratic National Convention cyber hack and issues with local voting machines have made cybersecurity part of the election story. After the election, I fully expect an accusation from the loser about electronic voter fraud, which will cast doubt on the most important element in any election: Trust.

Can Cybersecurity Save the November Elections?

CSO | September 6, 2016

The Federal Bureau of Investigation’s disclosure earlier this month that foreign hackers had infiltrated voter registration systems in Illinois and Arizona came as no surprise to some cybersecurity experts.

Hol(e)y Routers, Batman!

Most people think about network infrastructure about as much as they think about plumbing – which is to say, not at all, until something really unfortunate happens. That’s what puts the “infra” in the infrastructure – we want it out of sight, out of mind, and ideally mostly below ground. We pay more attention to our computing machinery, because we use it directly to do business, to be sociable, or to be entertained. All of these uses depend critically on the network, but that doesn’t mean most of us want to think about the network itself.

That’s why SEC Consult’s research into exploitable routers probably won’t get the attention it deserves. That’s a pity – it’s a rich and worthwhile piece of work. It’s also the shape of things to come, as we move into the Internet of Things. (I had a great conversation a little while ago with some fire suppression engineers who are increasingly aware of cyber issues – we were amused by the concept of The Internet of Things That Are on Fire.)

In a nutshell, the good folks at SEC Consult searched the Internet for objects with a particular kind of broken cryptography – specifically, with known private keys. This is equivalent to having nice, shiny locks visible on all your doors, but all of them lacking deadbolts. It sure looks like you’re secure, but there’s nothing stopping someone from simply opening the doors. (At a minimum, the flaw they looked for makes it really easy to snoop on encrypted traffic, but depending on context, it can also allow masquerading and logging in to control the device.)
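Checking for one symptom of this problem is straightforward, because devices that ship with a baked-in private key usually present the identical certificate. Here is a hedged Python sketch that compares a server's certificate fingerprint against a list of known-compromised certificates; the fingerprint shown is a placeholder, and a real check would use a published list from research like this.

```python
import hashlib
import ssl

# Placeholder fingerprints of certificates known to ship with published
# private keys; a real list would come from research data, not this example.
KNOWN_BAD_CERT_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def cert_fingerprint(host, port=443):
    """Fetch the server's certificate and return its SHA-256 fingerprint."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def has_known_private_key(host, port=443):
    """True if the device presents a certificate whose private key is public."""
    return cert_fingerprint(host, port) in KNOWN_BAD_CERT_SHA256
```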

And what did they find when they twisted doorknobs? Well, if you’ve read this far, you won’t be surprised that they uncovered several million objects with easily decrypted cryptography.  Interestingly, they were primarily those infrastructure devices we prefer to forget about.  Coincidence? Probably not. The more we ignore devices, the messier they tend to get. That’s one of the scarier points about the Internet of Things – once we have millions or billions of online objects, who will take care of patching them? (Can they be updated? Is the manufacturer responsible? What if the manufacturer has gone out of business?)

But what really puts the icing onto the SEC Consult cake is that they tried hard to report, advertise, and publicize everything they found in late 2015. They pushed vendors; they worked with CERT teams; they made noise. All of this, of course, was an attempt to get things to improve. And what did they find when they went back to scan again? A 40% increase in devices with broken crypto! (To put the cherry onto that icing, the most common device type they reported before has indeed tended to disappear. Like cockroaches, if you kill just one, you’re likely to find more when you look again.)

So what are we to conclude? We may wish our infrastructure could be started up and forgotten, but it can’t be. It’s weak, it’s got mistakes in it, and we are continuously finding new vulnerabilities. One key take-away about these router vulnerabilities: we should never expose management interfaces. That sounds too trivial to even mention – who would knowingly do such a thing? But people unknowingly do it, and only find out when the fan gets hit. When researchers look (and it gets ever easier to automate an Internet-wide search), they find millions of items that violate even basic, well-understood practices.

How can you tell if your infrastructure has these mistakes? I’m not saying a typical enterprise network is all built out of low-end routers with broken crypto on them. But the lessons from this research very much apply to networks of all sizes. If you don’t harden and control access to your infrastructure, your infrastructure can fail (or be made to fail), and that’s not just smelly – it’s a direct loss of digital resilience. And that’s something we can’t abide.
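One way to start answering that question is simply to probe for management interfaces from a vantage point that should not be able to reach them. Here is a minimal Python sketch under that assumption; the port list is a guess at common management services, and a real assessment would go far beyond a TCP connect check.

```python
import socket

# Commonly exposed management ports -- an assumption; adjust for your devices.
MANAGEMENT_PORTS = [22, 23, 80, 443, 8080, 8443]

def exposed_management_ports(host, timeout=2.0):
    """Return the management ports on `host` that accept a TCP connection.

    Run this from an untrusted vantage point (e.g. outside the perimeter);
    anything it returns is a management interface reachable from that side.
    """
    open_ports = []
    for port in MANAGEMENT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports
```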

Digital Resilience: A Better Way to Cybersecurity

CIOReview | September 12, 2016

By Ray Rothrock, CEO, RedSeal

Who says prevention is better than cure? Since the advent of networks and hacking, prevention, coupled with detection, has been the primary cyber strategy to counter cyberattack. But with the exponential increase in the pace and complexity of digital connections, and the growing sophistication of attackers, this approach is falling short, as the breaches at JP Morgan, the IRS, Target and UCLA Health so clearly demonstrated.

“Hide & Sneak.” Playing Today’s Cybersecurity Game

I recently came across a rather nice title for a webinar by A10 Networks’ Kevin Broughton – “Hide & Sneak: Defeat Threat Actors Lurking within your SSL Traffic”. “Hide & Sneak” is a good summary of the current state of the cybersecurity game. Whether our adversaries are state actors or less organized miscreants, they find plenty of ways to hide, stay quiet and observe. They can keep this up for years at a time. Our IT practices of the last few decades have engineered very effective business systems. On the other hand, those same systems are sprawling and complex, made up of tunnels, bridges and pipes — much of which is out of sight unless you take special pains to go look in every corner.

The “Hide & Sneak” webinar focuses on SSL, just one aspect of just one kind of encryption used in just one kind of VPN. This is worthwhile – I mean no criticism of the content offered. But if we think about how complex just this one widely used piece of infrastructure is, and then take a step back to think about this level of detail multiplied across all the technologies we depend on, it’s obvious that it’s impossible for any single security professional to understand all the layers, all the techniques, and all the complexity involved in mission-critical networks. Given staff shortages, it’s not even possible for a well-funded team to keep enough expertise in-house to deal in full depth with everything involved in today’s networks, let alone keep up with the changes tomorrow.

If we can’t even hire experts in all aspects of all the technologies we use, how can we defend our mission-critical infrastructure?

We can break the problem down into three parts – understanding the constantly-shifting array of technologies we use; keeping up with the continuous stream of new defects, issues and best practices; and thinking through the motivations, strategies and behaviors of bad actors. Of these three, the first two are highly automatable (and essentially impossible without automation). The third is the ideal domain for humans – no computer has the wit or insight to think strategically about an intelligent, wily adversary. This is why automation is best focused on understanding the infrastructure, and on uncovering and prioritizing vulnerabilities and defensive gaps.
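To give a flavor of what the automatable parts look like, here is an illustrative Python sketch that matches a software inventory against a feed of advisories and ranks the findings by severity and exposure. The data structures, names and weighting are invented for illustration; they are not any particular product's model.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    software: str
    version: str
    internet_facing: bool    # crude stand-in for real exposure analysis

@dataclass
class Advisory:
    software: str
    affected_versions: frozenset
    severity: float          # e.g. a CVSS-style base score

def prioritize(assets, advisories):
    """Match advisories to assets and rank findings by severity and exposure."""
    findings = []
    for asset in assets:
        for adv in advisories:
            if asset.software == adv.software and asset.version in adv.affected_versions:
                weight = 2.0 if asset.internet_facing else 1.0
                findings.append((asset.name, adv.software, adv.severity * weight))
    return sorted(findings, key=lambda f: f[2], reverse=True)

assets = [
    Asset("edge-router-1", "router-os", "1.2", internet_facing=True),
    Asset("app-server-3", "web-stack", "4.0", internet_facing=False),
]
advisories = [
    Advisory("router-os", frozenset({"1.1", "1.2"}), severity=9.8),
    Advisory("web-stack", frozenset({"4.0"}), severity=6.5),
]
print(prioritize(assets, advisories))
```

The tedious matching and ranking is the machine's job; that is precisely what frees the humans for the third part of the problem.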

The best security teams focus human effort on the human problem – understanding the thought patterns of the adversaries, not on learning every detail of every aspect of every technology we use.