7 Common Breach Disclosure Mistakes

Dark Reading | December 7, 2018

When a breach happens, speed and clarity are vital, says Mike Lloyd, CTO at RedSeal. The organizations that have fared worst after a breach have consistently been those that mishandled the disclosure – took too long to disclose, miscommunicated the details, or tried to cover up the issues, he says.

“There is always a surprise factor when you realize someone has broken in, but the better you know your own organization, the faster you can respond,” Lloyd says.

Which is more valuable – your security or a cup of coffee?

The drumbeat of media coverage of new breaches continues, but it’s useful sometimes to look back at where we’ve been.  Each scary report of so many millions of records lost can be overwhelming.  It certainly shows that our network defenses are weak, and that attackers are very effective.  This is why digital resilience is key – perfect protection is not possible.  But each breach takes a long time to triage, to investigate, and ultimately to clean up; a lot of this work happens outside the media spotlight, but adds a lot to our sense of what breaches really cost.

Today’s news includes a settlement from the Anthem breach back in 2015 – a final figure of $115 million.  But is that a lot or a little?  If you had to pay it yourself, it’s a lot, but if you’re the CFO of Anthem, how does it look then?  It’s hard to take in figures like these, so one useful way to look at it is how much it represents per person affected.

Anthem lost 79 million records, and the settlement total is $115 million.  This means the legally required payout comes out to just a little over a dollar per person – $1.46, to be exact.

That may not sound like a lot.  If someone stole your data, would you estimate your loss to be a bit less than a plain black coffee at Starbucks?

Of course, this figure addresses only one part of the costs Anthem faced – it doesn’t include their investigation costs, reputation damage, or anything along those lines.  It represents only the considered opinion of the court on a reasonable settlement of more than 100 separate lawsuits.

We can also look at this over time, across other major newsworthy breaches.  Interestingly, it turns out that the value of your data is going up, and may soon exceed the price of a cup of joe.  Home Depot lost 52 million records and paid over $27 million – about 52 cents per person.  Before that, Target suffered a major breach and paid out $41 million (over multiple judgments) to around 110 million people, or about 37 cents each.  In a graph, that looks like this:

[Chart: settlement payout per affected customer – Target, Home Depot, Anthem]

Note the escalating price per affected customer. This is pretty startling, as a message to the CFO.  Take your number of customers, multiply by $1.50, and see how that looks.  Reasonably, we can expect the $1.50 to go up.  Imagine having to buy a Grande Latte for every one of your customers, or patients that you keep records on, or marketing contacts that you track.  The price tag goes up fast!
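The arithmetic behind these figures is trivial, but worth making concrete. The sketch below uses the settlement totals and record counts quoted above; the 10-million-customer example at the end is purely hypothetical.

```python
# Per-person settlement figures for the breaches discussed above.
settlements = {
    "Anthem":     {"records": 79_000_000,  "payout": 115_000_000},
    "Home Depot": {"records": 52_000_000,  "payout": 27_000_000},
    "Target":     {"records": 110_000_000, "payout": 41_000_000},
}

for name, s in settlements.items():
    print(f"{name}: ${s['payout'] / s['records']:.2f} per affected person")

def exposure_estimate(customers, rate=1.50):
    """Back-of-envelope CFO exercise: customer count times roughly $1.50."""
    return customers * rate

# e.g., a hypothetical company holding records on 10 million people:
print(f"Estimated exposure: ${exposure_estimate(10_000_000):,.0f}")
```

Run against the three breaches above, this reproduces the $1.46, 52-cent, and 37-cent figures, and shows how quickly the “latte per customer” exposure grows.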

Uber Hack: A Bad Breach, But A Worse Cover-Up

The Uber hack is a public lesson that a breach may be bad, but a cover-up is worse.  (See Nixon, Richard.)  It was a foolish mistake to try to hide an attack of this scale, but then, the history of security is a process where we all slowly learn from foolish mistakes.  We live in an evolutionary arms race – our defenses are forced to improve, so the attackers mutate their methods and move on.  Academically, we know what it takes to achieve ideal security, but in the real world, it’s too expensive and invasive to be practical.  (See quantum cryptography for one example.)  Companies rushing to grow and make profits (like Uber) aggressively try to cut corners, but end up finding out the hard way which corners cannot safely be cut.

It’s likely that the stolen data was, in fact, deleted.  Why?  On the one hand, we would likely have seen bad actors using or selling the data if it were still available.  That is, from the attacker’s point of view, data like this is more like milk than cheese – it doesn’t age well.  Many breaches are only detected when we see bad guys using what they have stolen, but nobody has reported a series of thefts or impersonations that track back to victims whose connection is that they used Uber.

But we can also see that the data was likely deleted when we think about the motives of the attackers.  Our adversaries are thoughtful people, looking for maximum payout for minimum risk.  They really don’t care about our names, or trip histories, or even credit card numbers – they just want to turn data into money, using the best risk-reward tradeoff they can find.  They had three choices: use the data, delete it, or both (by taking Uber’s hush money, but releasing the data anyway).  The problem with “both” is thieves are worried about reputation – indeed, they care more about that than most.  (“To live outside the law, you must be honest” – Bob Dylan.)   Once you’ve found a blackmail victim, the one thing you don’t do is give up your power over them – if the attackers took the money but then released the data anyway, they could be sure Uber would not pay them again if they broke in again.  The cost/benefit analysis is clear – taking a known pot of money for a cover-up is safer and more repeatable than the uncertain rewards of using the stolen data directly.

What Equifax Tells Us About Cybersecurity

By Richard A. Clarke

This month it is Equifax. Previously it was Yahoo, and before that Target. Each new breach seems to set a new record for how many pieces of personally identifiable information have been compromised. It is easy to become inured to these news stories, especially since the media generally does not draw any lessons from them. Many people come away thinking that data breaches are just something we have to accept. But do we? What are we to take away from these recurring stories about huge hacks?

I have been working on cybersecurity for two decades now, initially from the White House and now in the private sector. Here is what I think should be our reaction to the Equifax story and similar breaches.

First, it is not impossible to secure major networks. Some companies and government agencies have quietly achieved sufficiently secure networks that they do not experience major data losses. It is, however, not easy to achieve.

Second, the essential ingredient in securing a network is not software or hardware. It is people – trained and skilled people. This country has an extreme shortage of such personnel. Despite the good salaries available in cybersecurity, there is a mismatch between what colleges are producing and what is needed. Colleges are simply under-producing cybersecurity graduates. There are hundreds of thousands of vacant jobs, and even more positions being filled by underqualified staff.

Most colleges produce computer science majors or offer graduate programs; however, they do not require education in cybersecurity as a condition of obtaining those degrees. Although cybersecurity is sometimes derided by computer science faculty as too much like a “trade” and insufficiently academic, the truth is that it is more difficult than basic computer science. Cybersecurity skills are built on top of knowledge of computer science.

In the absence of a focused and funded national initiative to significantly increase the number of cybersecurity trained graduates, corporations and government agencies will continue to fail at securing sensitive data.

Third, securing networks is expensive. Most companies spend only 3-5 percent of their information technology budget on security. These are the companies that get hacked. Most corporations have never properly priced the cost of cybersecurity into their overall cost of doing business. There is a popular misconception in the business world about what it costs to run a major network. The original cost of security for a network was relatively low in the 1990s, when most companies began building out their information technology infrastructure. The threat environment was significantly more benign then than it is now. Moreover, the security products available in the 1990s were limited to relatively inexpensive anti-virus, firewalls, and intrusion detection/prevention systems.

Today’s large networks require encryption, network discovery, threat hunting, data loss prevention, multifactor authentication, micro-segmentation, continuous monitoring, endpoint protection, intelligence reporting, and machine learning to detect and prioritize anomaly alarms. Corporations can no longer accurately be described in categories such as airlines, banks, or hospitals. They are all more accurately thought of as computer network companies that deal in aircraft, money management, or patients. If your company cannot do its business when your network goes down, then you are first and foremost an information technology company, one that specializes in whatever it is you do.

Fourth, because almost every American has now had their personally identifiable data stolen in one of these breaches, it should no longer be acceptable to use (or request) social security numbers, dates of birth, mother’s maiden names, and other publicly available identifiers to authenticate a user. Stop using them. Alliances of corporations should develop other, more advanced forms of identification that they would all use. In the jargon of the tech world, what we need is federated, multi-factor authentication – “federated” meaning more than one company employs it. Even the government could use one or more such systems, but if the government creates it, there will be push-back from those fearing government abuse of civil liberties.

Finally, many companies – and the executives in them – will continue to mismanage corporate cybersecurity and divulge sensitive data in the absence of significant penalties for failure. Today, even CEOs who are dismissed because of data breaches walk away with eye-watering bonuses and severance packages. They do not suffer personally for their failures as managers.

Former White House cybersecurity official Rob Knake has observed that oil companies only got serious about oil spill prevention when they began to be fined based on the number of gallons that they spilled. He suggests that we hit companies that lose personally identifiable data with a heavy penalty for each bit of data compromised. In addition, companies should be required by federal law (not by the existing hodge-podge of conflicting state laws) to notify the government and individuals promptly when data has been compromised.

In sum, major cyber breaches do not have to be a regularly occurring phenomenon. They can be significantly reduced if we as a nation have a program to produce many more trained cybersecurity professionals, if corporations appropriately price in the cost of security, and if there are real financial consequences for companies that spill personal data into the hands of criminals and hostile nations.

Richard A. Clarke was Special Advisor to the President for Cybersecurity in the George W. Bush Administration and is the author of eight books including CYBER WAR.

What SendGrid can teach us about dependency

The watchword for the SendGrid breach is “interdependence”.  In the online world, we may think we’re dealing with one company, but we’re actually dealing with them and with every other company they choose to deal with.  This creates an ever-widening attack surface.  (The breaking news about the Chinese “Great Cannon” software shows similar patterns.)  These days, if you visit a website, you can be confident you are actually talking to a wide variety of other organizations that provide ads, traffic monitoring, or other legitimate services.  One recent study of a popular news site showed that reading a single news story meant your browser spoke to 38 distinct hosts, spread across no fewer than 20 different organizational domains!  The problem is that this array of services is very large, and a chain is only as strong as its weakest link.  Attackers need to find only one weak point to start an attack.

US & UK Joint Wargames – let’s not wait for Pearl Harbor

The idea of the US and UK working together on war games is a good one.  It recognizes that we are in a war, and that we are losing.  We need to improve our defensive game.  Chris Inglis, the former NSA deputy director, has commented that the state of security today massively favors the attacker – he suggests that if we kept score, it would be 462-456 just 20 minutes into the game, because our defense is so poor.

The continuous stream of announcements of new breaches, along with the UK stats indicating the vast majority of large companies are suffering serious breaches, adds up to clear evidence of weak defense.  War games are a good way to get one step ahead, shifting to a proactive rather than purely reactive stance.  Nation states can do this with teams of people, but this is too labor intensive and expensive for most organizations.  This is why the security industry puts so much emphasis on automation – not just the automated discovery of weaknesses, but automating the critical process of prioritizing these vulnerabilities.  The inconvenient truth is that most organizations know about far too many security gaps to be able to fix them all.  War-gaming is a proven approach to dealing with this reality – find the gaps that are most likely to be used in a breach, and fix those first.  Perfect security is not possible, but realistic security comes from understanding your defensive readiness, stack-ranking your risks, and acting on the most critical ones.

Cyber Infrastructure – the Fifth Domain

The last couple of years have seen an incredible rise in reported cyber attacks.  Research by many organizations, including Check Point Software and the Verizon DBIR, indicates that this is not a reporting bias: cyber attacks are indeed on the rise.  The good news for us all, as the New York Times reported, is that President Obama is stepping up the nation’s cyber defenses to meet this threat.

Our nation’s economy and well-being are totally dependent on our networks. To keep our economy moving, information flowing, and ourselves informed, we need to protect and defend these networks. Our cyber infrastructure has become the fifth domain a sovereign nation needs to protect – after air, land, sea and space.

Network Security isn’t a Safety Guarantee
Cyber defense isn’t trivial, easy, or cheap.  And there are thousands of network security products to choose from. These products usually serve specific purposes in a defense strategy.  For example, firewalls, among the many things they do, protect the gate through which information flows, like the locks on your door.  Intrusion detection on a network is like the motion detectors in your home: they can tell you something is happening, but can’t always discriminate between acceptable and bad activity.

Larger networks are more complex, often overwhelming the teams trying to make sense of a breach.  There are scores of reporting systems that provide real-time data about break-ins, but even those are not always as useful as management would like. Dave DeWalt’s recent story on 60 Minutes is typical.

But even with the best people, the best plans, and an essentially unlimited budget – as at JPMorgan – companies still get hacked. Why aren’t our networks more secure? Why is a breach in the news every day?  Because, as our President agrees, it’s time to harden our networks.

Network Hardening: Getting Ahead of Cyber Attackers
Network hardening requires many things.  First, it means understanding your network — every element, every device and every path possible.  It means understanding potential threats and having outside intelligence about where the threats originate.  It means focusing your limited resources on the most important things you can do to protect your business.

RedSeal’s mission is to help Global 2000 organizations harden their networks. It gives you the detailed information you need — how your network routes traffic, detailed paths from everywhere to everywhere and how ready your equipment is.  It helps you determine where you should focus your resources and what exactly you can do to harden your network – from the most risky or vulnerable places to the least.  Prioritization is key to getting ahead of the cyber attackers.

Securing Your Network, or Networking for Security?

Every day we hear about another breach, and most of the time the information we get is fairly consistent – the breach started and finished long before it was discovered.    It’s not always clear exactly how or where the attackers were able to get access because they’ve had ample time to cover their tracks.   Whatever log or history data we have is massive, and sifting through it to figure out anything about the attack is very difficult and time consuming.  We don’t quite know what we’re looking for and much of the evidence has come and gone.

As I survey the cybersecurity market and media coverage, I notice that:

  1.   We’ve thrown in the towel: it’s “not if, but when” you’ll be breached.
  2.   Many security vendors are now talking about analytics, dashboards, and big data instead of prevention.

Notably absent is any acknowledgement that the attack did not happen at a single point or computer, and that the actual theft of data was possible because the data looked like legitimate network traffic using allowed routes through and out of the network.

We hear a lot about not having enough “security expertise”.  Is that really the problem?  Or is the problem that the security experts don’t really understand the full complexity of their networks?  The network experts do.  These attacks happen via network traffic – not on a single device, and not with a known signature.  And what do networking professionals care about?  Traffic, and how it’s flowing.  I maintain that there’s a lot more expertise available to help with breach analysis and prevention than we think – we’re just not asking the right people.

In subsequent posts I’ll talk about why the networking team is becoming vital to security efforts, and why understanding how a network is constructed and performs is the best chance we have of improving our defenses.

Anticipating attack: top 10 ways to prevent a breach

Last week, I spent most of my time in a conference room at RedSeal headquarters presenting our RedSeal Certification training to a mix of our customers and recent additions to the RedSeal team. Showing those in attendance the broad set of capabilities of the system reminded me how important it is to be very clear about the steps for anticipating attack and putting together automation and operations to protect your enterprise and its assets.

Here is my top 10 list:

  1. Scan your hosts for vulnerabilities
  2. Prioritize and schedule patching
  3. Place modern security controls at all ingress and egress points
  4. Monitor all ingress and egress traffic, triggering alerts and interception of inappropriate traffic
  5. Standardize your device configurations
  6. Create a set of network security zones
  7. Review your network’s access paths
  8. Compare access to network security policy
  9. Track approvals of access between critical zones
  10. Monitor and report on access found each day

How does your approach compare to this list? What do you think I’m missing? Is there anything I included that you think shouldn’t be here?

Identify and Close Before the Bad Actor Exploits

It happened again yesterday. I was taking a break on my back porch, listening to the Colorado summer rain, when an alert hit my phone: news of another breach. They seem to be coming with disturbingly increasing regularity and with ever more serious consequences. For example, one company, Code Spaces, was completely destroyed when they refused to pay an attacker, who then destroyed their customers’ data. The Energetic Bear group accessed utilities’ networks and could have launched attacks against them. In all likelihood, the number, extent, and severity of these attacks will simply continue to expand.

So what do you do? The good news is that the steps are well known and understood: place security controls into your network to isolate a set of subnetworks (typically called “zones”), and both set and monitor the potential access paths between the zones. This is the first set of defenses against attacks, and one which many organizations do not fully deploy.

It is common for me to see organizations that partially deploy zones – but do not monitor their implementation. This is akin to the multi-petabyte database that contains one incorrect byte of information: you can trust none of the information as a result.

So, the first step is to create clear and concise zones in your network and to analyze all potential access paths through your network to be sure that your zone rules are respected network-wide.
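What “analyze all potential access paths” means in practice is transitive reachability: even when no single device permits a forbidden connection directly, a chain of allowed hops can add up to one. Here is a toy sketch of that check, with an invented four-zone topology (not any particular product’s model):

```python
from collections import deque

# Toy topology: each zone maps to the zones its devices may forward to.
edges = {
    "internet": ["dmz"],
    "dmz":      ["app"],
    "app":      ["db"],
    "db":       [],
}

def reachable(start):
    """All zones transitively reachable from `start` (breadth-first walk)."""
    seen, queue = set(), deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Zone rule: the internet must never reach the database zone.
if "db" in reachable("internet"):
    print("zone rule violated: internet can transitively reach db")
```

Note that no single edge here connects the internet to the database zone; the violation only appears when you follow the full chain of hops, which is exactly why per-device review is not enough.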

Do you do this? If so, what’s your approach?