Data Dearth Hobbles Cyber Insurance Market

The Deloitte Center for Financial Services just issued a report discussing why cyber insurance has yet to take off. “Demystifying cyber insurance” is an excellent summary of the challenges facing the nascent cyber insurance industry. The authors identify a fundamental problem early in the report: a dearth of data creates a vicious circle that limits both underwriters and customers. Briefly, while cyber insurance underwriters have access to external assessments of the cyber threats a customer faces, the customer’s network itself is a black box.

The situation is analogous to underwriting a life insurance policy based only on the neighborhood the customer lives in. Underwriters ask: Does the neighborhood have indoor plumbing and a modern sewer system?  Is garbage disposed of properly?  Is the community suffering from serious communicable diseases? What criminal activity exists?

All this information is relevant and helpful, but the key missing element is a physical exam of the customer to determine his or her current health profile. Is the applicant overweight? A smoker? An active athlete?  Such an exam provides a much more specific (and actionable) assessment of a customer’s health risk to inform life insurance underwriting.

The same applies to cyber insurance. Underwriters need to understand not only cyber threats in the environment, but also the health of a specific network.  Are all parts of the network identified? Are all network devices set up properly?  Are known vulnerabilities reachable for exploitation?

Ideally, this assessment would involve modeling the network and distilling complicated network security risks into an understandable and comparable score, similar to a credit-worthiness score.  Of course, modeling a network requires a customer’s approval, so the approach must be fast, accurate, and cost-effective.
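As a purely illustrative sketch (not any insurer's or vendor's actual model), a composite score might weight a handful of measurable factors of network health into a single comparable number, the way a credit score distills repayment risk. The factor names and weights below are hypothetical:

```python
# Illustrative only: a toy composite "network risk score," loosely analogous
# to a credit score. Factors are normalized 0.0-1.0 (higher = worse);
# weights are hypothetical, not calibrated against real loss data.

def network_risk_score(config_hygiene, exposure, patch_latency_days, inventory_coverage):
    """Combine normalized risk factors into a 300-850 style score (higher = healthier)."""
    penalty = (0.35 * config_hygiene +
               0.30 * exposure +
               0.20 * min(patch_latency_days / 90.0, 1.0) +
               0.15 * (1.0 - inventory_coverage))
    return round(850 - penalty * 550)

# A customer with decent hygiene, moderate exposure, 30-day patch latency,
# and 90% of assets inventoried scores around 700.
print(network_risk_score(config_hygiene=0.2, exposure=0.4,
                         patch_latency_days=30, inventory_coverage=0.9))
```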

Cyber insurance promises to be a critical element of effective cyber security management. The “dearth of data” is a significant obstacle to cyber insurance development, but the effective use of network risk scoring will be crucial to breaking the vicious circle.

The Bleed Goes On

Some people are surprised that Heartbleed is still out there, three years on, as you can read here. This illustrates two important truths of security, depending on whether you see the glass as half full or half empty.

One perspective is that, once again, we know what to do, but failed to do it. Heartbleed is well understood, and directly patchable. Why haven’t we eradicated it by now? The problem is that the Internet is big. Calling the Internet an “organization” would be a stretch – it’s larger, more diverse, and harder to control than any one organization. But if you’ve tried to manage vulnerabilities at any normal organization – even a global-scale one – you have a pretty good idea how hard it gets to eliminate any one thing. It’s like Zeno’s Paradox – when you set out to eradicate any one problem, you can fix half the instances in a short period of time. The trouble is that it takes about that long again to fix half of what remains, and that long again for the half after that. Once you’ve dealt with the easy stuff – well-known machines, with well-documented purpose, and a friendly owner in IT – it gets hard fast, for an array of reasons from the political to the technical. You can reduce the prevalence of a problem really quickly, but eradicating it takes near-infinite time. And the problem, of course, is that attackers will find whatever you miss – they can use automation to track down every defect. (That’s how researchers found there is still a lot of Heartbleed out there.) Any single miss might open up access to far more important parts of your organization. It’s a chilling prospect, and it’s fundamental to the unfair fight in security – attackers only need one way in, while defenders need to cover all possible paths.
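To put rough numbers on that halving dynamic, here is an illustrative calculation (the starting population and cycle time are hypothetical, chosen only to show the shape of the curve):

```python
# Illustrative arithmetic for the "Zeno's Paradox" of remediation: if each
# remediation cycle fixes half of the remaining instances, prevalence drops
# fast, but true eradication takes many cycles. Numbers are hypothetical.
import math

instances = 200_000          # hypothetical starting population of vulnerable hosts
months_per_halving = 3       # each "half" takes about as long as the last

halvings_to_one = math.ceil(math.log2(instances))
print(f"Cycles to get from {instances:,} instances down to 1: {halvings_to_one}")
print(f"At {months_per_halving} months per cycle: "
      f"{halvings_to_one * months_per_halving / 12:.1f} years")
# ~18 halvings, ~4.5 years. In practice the later halves are the hard,
# politically messy ones, so cycles stretch rather than shrink.
```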

To flip to the positive perspective, perhaps the remaining Heartbleed instances are not important – that is, it’s possible that we prioritized well as a community, and only left the unimportant instances dangling for all this time.  I know first-hand that major banks and critical infrastructure companies scrambled to stamp out Heartbleed from their critical servers as fast as they could – it was impressive.  So perhaps we fixed the most important gaps first, and left until later any assets that are too hard to reach, or better yet, have no useful access to anything else after they are compromised.  This would be great if it were true.  The question is, how would we know?

The answer is obvious – we’d need to assess each instance, in context, to understand which instances must get fixed, and which can be deferred until later, or perhaps until after we move on to the next fire drill, and the fire drill after that. The security game is a never-ending arms race, and so we always have to be responsive and dynamic as the rules of the game change.  So how would we ever know if the stuff we deferred from last quarter’s crises is more important or less important than this quarter’s?  Only automated prioritization of all your defensive gaps can tell you.
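As a minimal sketch of what such automated prioritization looks like (the field names and scoring are hypothetical, not any particular product's model), the core idea is to rank each vulnerable instance by whether attackers can actually reach it and what the asset is worth:

```python
# Minimal sketch of automated gap prioritization, assuming you already have
# (a) a reachability verdict per instance and (b) a business value per asset.
# All data below is hypothetical.

findings = [
    {"host": "web-01", "cve": "CVE-2014-0160", "reachable_from_internet": True,  "asset_value": 9},
    {"host": "lab-17", "cve": "CVE-2014-0160", "reachable_from_internet": False, "asset_value": 2},
    {"host": "db-03",  "cve": "CVE-2014-0160", "reachable_from_internet": True,  "asset_value": 10},
]

def priority(finding):
    # Exposed instances dominate; among those, rank by what the asset is worth.
    return (finding["reachable_from_internet"], finding["asset_value"])

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f['host']:8} {f['cve']}  fix_now={f['reachable_from_internet']}")
```

Rerun the same ranking each quarter and the question of whether last quarter's deferrals still matter answers itself.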

Shadow Brokers Turn Out the Lights

The Shadow Brokers are turning out the lights. On their way out, they dumped another suite of alleged National Security Agency hacking tools. Unlike last time, when the released exploits focused on network gear from vendors such as Cisco and Fortinet, these tools and exploits target Microsoft Windows operating systems. Most of the sixty-plus exploits are already detected by antivirus vendors such as Kaspersky, and it is a safe bet that all antivirus vendors will detect them shortly.

In their farewell post, the Shadow Brokers say they are leaving the account open for someone to deposit 10,000 bitcoins — the equivalent of $8.2 million — to obtain the entire cache of alleged NSA hacking tools. To date, no one has paid the requested amount. With such a high price, it has been speculated that the Shadow Brokers never seriously expected anyone to pay. This leads some to believe they are associated with a nation state that is trying to cause headaches for US spy agencies and the administration.

What can be done to protect your systems from these tools and exploits? Basic security practices, of course. Keep your systems up to date with patches and operating system releases. Practice your usual good cyber hygiene, such as not clicking on links in emails. Be conscientious about what you plug into your home or business computers, as a lot of malware can spread through external hard drives and USB sticks.

Also, it is imperative to have good backups and to test them. Too often after a breach, organizations find out too late that they never tested their restore procedures to verify they have good backups. Or they learn that their backups are infected with malware carried over from earlier backups of already compromised systems.
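One simple way to make “test your backups” concrete is to verify a test restore against a checksum manifest captured at backup time. A sketch, with placeholder paths; adapt it to your own backup tooling:

```python
# Sketch: verify a test restore by comparing file checksums against a
# manifest captured when the backup was made. Paths are placeholders.
import hashlib
import pathlib

def manifest(root):
    """Map each file's relative path to its SHA-256 digest."""
    root = pathlib.Path(root)
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

original = manifest("/data/critical")       # taken when the backup is made
restored = manifest("/tmp/restore-test")    # taken after a scripted test restore

missing = original.keys() - restored.keys()
corrupt = {p for p in original.keys() & restored.keys()
           if original[p] != restored[p]}
print(f"missing files: {len(missing)}, corrupted files: {len(corrupt)}")
```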

Have an incident response plan in place and practice it regularly. Having a plan is great, but you need to practice to make sure your team can execute it. A plan without practice is the equivalent of a firefighter knowing it takes water to put out a fire, but not knowing how to get the water off the truck and onto the fire.

Know your network, and consider using RedSeal. Even if you don’t use us, knowing your network will greatly enhance your resilience and enable your incident responders to keep business- and mission-critical systems online and functioning during an incident. Security is not sexy, despite what Hollywood depicts. There is no silver bullet that will magically make your network impervious. It takes hard work and continuous effort to build and maintain resilient networks. So, do you know yours — completely?


On the Way to SDN and the Cloud: Building Resilient Networks

In 1967, Willis H. Ware, a research scientist at the RAND Corporation working for the United States Air Force, predicted that the ARPAnet would be a disaster if security wasn’t built into the project.

He was overruled.

In January 2013, the Final Report of the Defense Science Board Task Force on Resilient Military Systems and the Advanced Cyber Threat was issued, confirming what Ware had known back in 1967.

The report’s findings made for sober reading:

  • The United States cannot be confident that our critical information technology systems will work under attack. The same is true for our allies and rivals, and for public and private networks alike.
  • The DoD and its contractor base are high priority targets that have already sustained staggering losses of system design information.
  • The DoD should expect cyber attacks to be part of all conflicts in the future, and should not expect enemies to play by our version of the rules.
  • There is evidence of attacks that exploit known vulnerabilities in the domestic power grid and critical infrastructure systems.
  • The impact of a destructive cyber attack on the civilian population would be even greater than on the military:
    • In a short time, food and medicine distribution systems would be ineffective.
    • Law enforcement and emergency personnel capabilities could be barely functional in the short term and dysfunctional over sustained periods.
    • Expect physical damage to control systems.
    • Months to years could be required to rebuild and reestablish basic infrastructure operation.

So… the current situation is really bad.

Do cloud computing and the rise of software-defined networks (SDNs) make things better? Governments and enterprises are receiving huge benefits from moving into the cloud. You can quickly and efficiently create an SDN, but cloud computing and software-defined anything is still software. And software will have errors. How do you test or QA it? Is your central control node secure? How much do you know, really?

If this word “software” doesn’t scare you, then you’re not thinking about it hard enough.

In the Defense Science Board Task Force’s report, the seventh recommendation is to build a cyber resilient force, with a set of standards and requirements that incorporate cyber resiliency into cyber-critical survivable mission systems.

What is their definition of resilience?

“Resilience: Because the Defense Department’s capabilities cannot necessarily guarantee that every cyber attack will be denied successfully, the Defense Department must invest in resilient and redundant systems so that it may continue its operations in the face of disruptive or destructive cyber attacks on DoD networks.” – Ash Carter, Secretary of Defense, April 2015

The report highlights a need to continuously model and test DoD’s systems to determine how resilient they are. This requires a measurement or a metric for resilience.

Managing and measuring cyber resilience

Until now, measuring cyber resilience has been an impossible challenge. Now, RedSeal’s cybersecurity analytics platform has been deployed successfully by federal agencies and departments. With RedSeal you can:

Understand your cyber terrain
You have to understand your cyber terrain in order to secure it, defend it, and respond to incidents appropriately and swiftly.  Operating without understanding your network is like stumbling around your unlit house at night looking for the burglar that just broke in.

Model and measure
With a network sand table, defenders can now see where their high value assets (HVAs) are and answer important questions:

  • How can they be accessed?
  • How exposed are they?
  • Are defenses deployed in the appropriate places?
  • Exactly where are the sensor-reported incidents?

Verify compliance, establish and manage standard policies
RedSeal lets you know if your network is constructed as you think it is – to allow only authorized access to your data. RedSeal reads in information from devices on your network, including those parts hosted in the cloud. Then, it calculates the access actually allowed from any point on your network to any other and updates as changes are made, so you can verify and maintain compliance with regulations and policies.
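Conceptually, the access calculation at the heart of this is a reachability problem over rules parsed from device configurations. Here is a deliberately tiny sketch of that idea; the zones, permits, and traversal are illustrative, not RedSeal's actual engine:

```python
# Toy reachability over parsed device rules: which zones can reach which,
# given per-device permit rules. Real config analysis (ACLs, NAT, routing)
# is far more involved; this only illustrates the traversal at the core.
from collections import deque

# Hypothetical parsed topology: edges annotated with the access they permit.
permits = {
    "internet": {"dmz": ["tcp/443"]},
    "dmz":      {"app": ["tcp/8443"]},
    "app":      {"db":  ["tcp/5432"]},
}

def reachable_from(src):
    """BFS: every zone transitively reachable from src."""
    seen, queue = {src}, deque([src])
    while queue:
        zone = queue.popleft()
        for nxt in permits.get(zone, {}):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {src}

print(reachable_from("internet"))   # {'dmz', 'app', 'db'}: is that intended?
```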

Understand the security impact of network changes
RedSeal enables you to simulate attacks before they happen.  You can understand your defensive posture by finding the weak points and measuring ease of compromise.

Understand access in hybrid networks
Cloud providers have cloud solutions to manage your cloud-based network. But most organizations don’t have a pure cloud network; their networks are hybrid. You have some infrastructure that you manage, some in the cloud, and some virtualized. We show organizations how all parts of their networks connect to everything else.

Cloud providers don’t know what your legacy environment looks like. You need to be able to draw together your physical and cloud infrastructure in more than just a picture.  At RedSeal, we believe you have to understand end to end behaviors of your networks. To do this, we do very deep access calculations based on the configuration files of all your network devices – virtual or not.  RedSeal determines how your infrastructure actually works, so you can continually validate that you built what you thought you were building.

You can ask all kinds of questions of your RedSeal network model. You can determine if the back end of your cloud infrastructure is accessible from the internet – and how. You can see paths that reach from the real world to the virtual world. We’ve invested a lot of time and effort at RedSeal, so you can see your cloud infrastructure and how it connects to your physical or virtual infrastructure.
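Answering a question like “is the back end reachable from the internet, and how?” boils down to path search over the computed access graph. A toy sketch, with a hypothetical hybrid topology spanning cloud and on-premise segments:

```python
# Sketch: not just "is it reachable?" but "by which path?" A simple DFS
# over a permit graph, returning one internet-to-backend path.
# The topology below is hypothetical.

permits = {
    "internet":  ["waf"],
    "waf":       ["cloud-web"],
    "cloud-web": ["cloud-api"],
    "cloud-api": ["onprem-db"],     # hybrid hop: virtual to physical
}

def find_path(src, dst, seen=None):
    """Depth-first search returning one path from src to dst, or None."""
    seen = seen or set()
    if src == dst:
        return [dst]
    seen.add(src)
    for nxt in permits.get(src, []):
        if nxt not in seen:
            rest = find_path(nxt, dst, seen)
            if rest:
                return [src] + rest
    return None

print(find_path("internet", "onprem-db"))
# ['internet', 'waf', 'cloud-web', 'cloud-api', 'onprem-db']
```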

Get security metrics
RedSeal gives you an overview of your network, measuring:

  1. The completeness of your inventory of assets and systems. It identifies devices you may not know about.
  2. All the connections between devices.
  3. How well your network devices are configured for security.
  4. The actual risk to your data, based on how accessible known vulnerabilities are.

RedSeal’s smartphone app provides a measurement and trend summary for executives or “on the go” security management.

Why is the RedSeal Digital Resilience Score important?

  • Gives you a measure of security effectiveness so you know where to allocate resources and funding.
  • Helps you understand your security posture: are you better today than you were yesterday?
  • Allows senior staff to understand network risk empirically.
  • Grades different networks across various departments or agencies.
  • Verifies networks are designed and operating for security as intended.

For more on this subject, listen to the free webinar, On the Way to SDN and the Cloud: Building Resilient Networks.

Centralize Cybersecurity? Secretary Pritzker Doesn’t Think So

Last month, Secretary of Commerce Penny Pritzker appeared before the President’s Commission on Enhancing National Cybersecurity, and the subsequent article in FedScoop caught my attention.

She is very concerned that the President’s Commission could mandate that all US Federal Government information technology be consolidated under one organization’s authority. According to Secretary Pritzker, a mandate like this would make it difficult for an agency’s leadership to enforce cyber security initiatives addressing their specific needs.

In other words, one size does not fit all.

Is she correct to be worried? It may be worthwhile to turn our eyes to our northern neighbor, Canada, where this consolidation is taking place right now. Canada frequently looks to our government before adopting a new practice. In this instance we can learn from their experience.

Currently, the Canadian government, including their equivalent of the Department of Defense and Intelligence community, is reorganizing and consolidating many small agencies into fewer larger agencies called Portfolios. This consolidation is not just on the cyber security front; the entire government is moving from 47 individual agencies to 28. This reorganization and consolidation is causing a lot of internal uproar since many former agency CIOs and CISOs now have to report to someone else. Former leaders no longer have a say in what they used to manage, with the authority moved to others higher up in the organizational chart. Additionally, the Canadian government is consolidating their 308 data centers into 40 to 80 super data centers. This will be a huge undertaking similar to our consolidation into Trusted Data centers. It is still too early to know if it will be worth the growing pains. But, I wonder if Canada’s governmental eye is being taken off the cyber ball.

Secretary Pritzker raises some interesting questions that we should fully consider:

  1. Is over- or under-centralization a root cause of the government’s less-than-perfect response to cybersecurity?
  2. Where should “authority, responsibility and capability” (and budget!) for improving cybersecurity lie? A White House cyber czar? The new federal CISO? The Cabinet Secretary level?
  3. Is a hybrid approach best? A mix of centralized cybersecurity services with agency-specific toolsets?
  4. Should there be a unified fedciv.gov network like .mil? A unified email system for all fedciv employees?
  5. As the Canadians are doing, would it be better to reorganize cybersecurity efforts independently of the agencies they serve, rather than doing everything all at once?

All in all, there are a lot of similarities between what is currently happening in Canada and the organizational recommendations that may come out of the President’s commission. I’m suggesting the US could learn a lot from our northern neighbor and ally.

Micro-Segmentation: Good or Bad?

There’s a lot going on in virtual data centers. In security, we’re hearing many variations of the term “micro-segmentation.” (It originated with VMware, but has been adopted by other players, some of them adding top-spin or over-spin.)

We know what segmentation is. Every enterprise network practices segmentation between outside and inside, at least. Most aim to have a degree of internal segmentation, but I see a lot more planning than doing — unless an audit is on the line. Many networks have a degree of segmentation around the assets that auditors pay attention to, such as patient records and credit cards. There are organizations further up the security sophistication curve who have a solid zone-based division of their business, can articulate what each zone does and what should go on between them, and have a degree – at least some degree – of enforcement of inter-zone access. But these tend to be large, complex companies, so each zone tends to be quite large. It’s simple math – if you try to track N zones, you have to think about N² different relationships. That number goes up fast. Even well-staffed teams struggle to keep up with just a dozen major zones in a single network. That may not sound like a lot, but the typical access open between any two zones can easily exceed half a million communicating pairs. Auditing even one of those in full depth is a super-human feat.

Now along come the two horses pulling today’s IT chariot: the virtual data center and the software-defined network. These offer more segmentation, with finer control, all the way down to the workload (or even lower, depending on which marketing teams you believe). This sounds great – who wouldn’t want super-fine controls? Nobody believes the perimeter-only model is working out any more, so more control must be better, right? But in practice, if you just throw this technology onto the existing stack without a plan for scaling, it’s not going to work out.

If you start with a hard-to-manage, complex management challenge, and you respond by breaking it into ever smaller pieces, spread out in more places, you can rapidly end up like Mickey Mouse in The Sorcerer’s Apprentice, madly splitting brooms until he’s overrun.

Is it hopeless? Certainly not. The issue is scale. More segmentation, in faster-moving infrastructure, takes a problem that was already tough for human teams and makes it harder. But this happens to be the kind of problem that computers are very good at. The trick is to realize that you need to separate the objective – what you want to allow in your network – from the implementation, whether that’s a legacy firewall or a fancy new GUI for managing policy for virtual workloads. (In the real world, that’s not an either/or – it’s a both, since you have to coordinate your virtual workload protections with your wider network, which stubbornly refuses to go away just because it’s not software defined.)

That is, if you can describe what you want your network to do, you can get a big win. Just separate your goals from the specific implementation – record the intention in general terms, for example, in the zone-to-zone relationships of the units of your business. Then you can use automation software to check that this is actually what the network is set up to do. Computers don’t get tired – they just don’t know enough about your business or your adversaries to write the rules for you. (I wouldn’t trust software to figure out how an organism like a business works, and I certainly wouldn’t expect it to out-fox an adversary. If we can’t even make software that can pass a Turing Test, how could an algorithm understand social engineering – still a mainstay of modern malware?)
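Here is a minimal sketch of that separation of intent from implementation; the zone names and the policy itself are hypothetical:

```python
# Record the *intended* zone-to-zone policy separately from any firewall or
# micro-segmentation implementation, then let software diff intent vs. actual.

INTENDED = {            # business intent, written once, reviewed by humans
    ("web", "app"): "allow",
    ("app", "db"):  "allow",
    ("web", "db"):  "deny",
}

# 'observed' would come from computing actual access across the live
# implementation (firewalls, SDN policy, security groups). Hard-coded here.
observed = {
    ("web", "app"): "allow",
    ("app", "db"):  "allow",
    ("web", "db"):  "allow",    # drift: someone opened a path intent forbids
}

for pair, intent in INTENDED.items():
    actual = observed.get(pair, "deny")
    if actual != intent:
        print(f"VIOLATION {pair[0]} -> {pair[1]}: intended {intent}, actual {actual}")
```

The humans write three lines of intent; the machine does the tireless N-squared comparison, at whatever granularity the micro-segmentation provides.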

So I’m not saying micro-segmentation is a bad thing. That’s a bit like asking whether water is a bad thing – used correctly, it’s great, but it’s important not to drown. Here, learning to swim isn’t about the latest silver bullet feature of a competitive security offering – it’s about figuring out how all your infrastructure works together, and whether it’s giving the business what’s needed without exposing too much attack surface.

Hol(e)y Routers, Batman!

Most people think about network infrastructure about as much as they think about plumbing – which is to say, not at all, until something really unfortunate happens. That’s what puts the “infra” in infrastructure – we want it out of sight, out of mind, and ideally mostly below ground. We pay more attention to our computing machinery, because we use it directly to do business, to be sociable, or for entertainment. All of these uses depend critically on the network, but that doesn’t mean most of us want to think about the network itself.

That’s why SEC Consult’s research into exploitable routers probably won’t get the attention it deserves. That’s a pity – it’s a rich and worthwhile piece of work. It’s also the shape of things to come, as we move into the Internet of Things. (I had a great conversation a little while ago with some fire suppression engineers who are increasingly aware of cyber issues – we were amused by the concept of The Internet of Things That Are on Fire.)

In a nutshell, the good folks at SEC Consult searched the Internet for objects with a particular kind of broken cryptography – specifically, with known private keys. This is equivalent to having nice, shiny locks visible on all your doors, but all of them lacking deadbolts. It sure looks like you’re secure, but there’s nothing stopping someone simply opening the doors up. (At a minimum, the flaw they looked for makes it really easy to snoop on encrypted traffic, but depending on context, can also allow masquerading and logging in to control the device.)
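The detection approach is easy to sketch, assuming you have a corpus of known-compromised key fingerprints such as the static-key data SEC Consult published. This toy version only fingerprints the certificate a device presents; the host name and digest below are placeholders:

```python
# Sketch: flag a device whose presented TLS certificate matches a
# known-compromised (publicly available) key. Fingerprint matching only.
import hashlib
import socket
import ssl

# Placeholder digest; a real corpus would come from published research data.
KNOWN_BAD = {"d2a04d71301a8915217dd5faf81d12cffd6cd958446091a8766a41c0143c91d3"}

def cert_fingerprint(host, port=443):
    """Return the SHA-256 of the DER-encoded certificate the host presents."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False      # we want the cert itself, not a trust verdict
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

fp = cert_fingerprint("router.example.net")     # placeholder host
print("shared/known private key!" if fp in KNOWN_BAD else "not in corpus")
```

Repeat that across the whole IPv4 space and you have, in essence, the doorknob-twisting survey the researchers ran.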

And what did they find when they twisted doorknobs? Well, if you’ve read this far, you won’t be surprised that they uncovered several million objects with easily decrypted cryptography.  Interestingly, they were primarily those infrastructure devices we prefer to forget about.  Coincidence? Probably not. The more we ignore devices, the messier they tend to get. That’s one of the scarier points about the Internet of Things – once we have millions or billions of online objects, who will take care of patching them? (Can they be updated? Is the manufacturer responsible? What if the manufacturer has gone out of business?)

But what really puts the icing onto the SEC Consult cake is that they tried hard to report, advertise, and publicize everything they found in late 2015. They pushed vendors; they worked with CERT teams; they made noise. All of this, of course, was an attempt to get things to improve. And what did they find when they went back to scan again? A 40% increase in devices with broken crypto! (To put the cherry onto that icing, the most common device type they reported before has indeed tended to disappear. Like cockroaches, if you kill just one, you’re likely to find more when you look again.)

So what are we to conclude? We may wish our infrastructure could be started up and forgotten, but it can’t be. It’s weak, it’s got mistakes in it, and we are continuously finding new vulnerabilities. One key take-away from these router vulnerabilities: never expose management interfaces to untrusted networks. That sounds too trivial to even mention – who would knowingly do such a thing? But people unknowingly do it, and only find out when the fan gets hit. When researchers look (and it gets ever easier to automate an Internet-wide search), they find millions of items that violate even basic, well-understood practices. How can you tell if your infrastructure has these mistakes? I’m not saying a typical enterprise network is all built out of low-end routers with broken crypto on them. But the lessons from this research very much apply to networks of all sizes. If you don’t harden and control access to your infrastructure, your infrastructure can fail (or be made to fail), and that’s not just smelly – it’s a direct loss of digital resilience. And that’s something we can’t abide.

“Hide & Sneak.” Playing Today’s Cybersecurity Game

I recently came across a rather nice title for a webinar by A10 Networks’ Kevin Broughton – “Hide & Sneak: Defeat Threat Actors Lurking within your SSL Traffic”. “Hide & Sneak” is a good summary of the current state of the cybersecurity game. Whether our adversaries are state actors or less organized miscreants, they find plenty of ways to hide, stay quiet, and observe. They can keep this up for years at a time. Our IT practices of the last few decades have engineered very effective business systems. On the other hand, those systems are sprawling and complex, made up of tunnels, bridges and pipes — much of it out of sight, unless you take special pains to go look in every corner.

The “Hide & Sneak” webinar focuses on SSL, just one aspect of just one kind of encryption used in just one kind of VPN. This is worthwhile – I mean no criticism of the content offered. But if we think about how complex just this one widely used piece of infrastructure is, and then take a step back to think about this level of detail multiplied across all the technologies we depend on, it’s obvious that it’s impossible for any single security professional to understand all the layers, all the techniques, and all the complexity involved in mission-critical networks. Given staff shortages, it’s not even possible for a well-funded team to keep enough expertise in-house to deal in full depth with everything involved in today’s networks, let alone keep up with the changes tomorrow.

If we can’t even hire experts in all aspects of all the technologies we use, how can we defend our mission-critical infrastructure?

We can break the problem down into three parts – understanding the constantly-shifting array of technologies we use; keeping up with the continuous stream of new defects, issues and best practices; and thinking through the motivations, strategies and behaviors of bad actors. Of these three, the first two are highly automatable (and essentially impossible without automation). The third is the ideal domain for humans – no computer has the wit or insight to think strategically about an intelligent, wily adversary. This is why automation is best focused on understanding the infrastructure, and on uncovering and prioritizing vulnerabilities and defensive gaps.

The best security teams focus human effort on the human problem – understanding the thought patterns of the adversaries, not on learning every detail of every aspect of every technology we use.

RedSeal CEO Ray Rothrock Talks Cybersecurity on Mad Money w/ Jim Cramer

Our CEO Ray Rothrock shared the latest on cybersecurity as a guest on Mad Money with Jim Cramer (CNBC) today, covering a variety of topics – from why perfect firewall management doesn’t provide perfect protection, to the risk of a hacking attack on electrical grids and nuclear power plants.


Some highlights:

Jim: What goes into my digital resilience score?

Ray: There are three things that really matter. First is configuration checks. You’ve got all this equipment—network equipment—it’s probably configured by really good people, but it may not be perfect. We can assess that.

Vulnerabilities—that’s what everyone talks about. Vulnerabilities are interesting, but you need to know where they are in the network. Are they reachable by the bad guys on the outside? We can tell you that. So why spend all your time scanning and fixing a computer that’s not reachable? That’d be a waste of your time and money.

And the third thing – and this is what gets the CISOs quite nervous – it’s called the incomplete model.

Learn more about how you can measure your organization’s digital resilience score by contacting us here.

Update: Responding to the Shadow Broker Vulnerabilities

Last week, the Shadow Brokers hacker group made national headlines by leaking zero-day firewall vulnerabilities and offering additional exploits for sale through auction. In response, the RedSeal team produced:

  1. A blog post on how major infrastructure vulnerabilities produce the same questions – and how digital resilience puts organizations in the best position to respond.
  2. A step-by-step “how-to” that shows how network teams can use RedSeal to understand their potential exposure – and to what degree.
  3. A video demonstration of how defenders can use RedSeal to understand the extent of the problem in their specific network.

The feedback we received was tremendous, and we wanted to share a response we received from a customer:

“I sent it out to several of our key users here because I love when you guys do this.  It enabled me to highlight that RedSeal is useful for zero days when there is no patch…

Funny timing as well by the way – the order to identify affected firewalls just came out this morning and we have to respond by tomorrow, so I spent the day researching and working on something before I remembered you sent this and made my life easier. So thank you.”

Have questions, or want to understand how RedSeal can help you with the next inevitable vulnerability hack? Contact us here.