Tales from the Trenches: Vol 4 — Leveraging the Tools You Already Have

Since 2004, RedSeal has helped our customers See and Secure their entire complex network. And while those customers may have understood the value of knowing their environment, how it's connected, and what's at risk, there is often an "Aha" moment when the true significance becomes clear. The stories of these moments are lore within the walls of RedSeal. But these tales so clearly illustrate the value of RedSeal beyond just theory that we think they're worth sharing. In the words of our team in the field, the ones working directly with our customers, this blog series will share the moments where it all gets real.

In this edition of the series, Chris Naish, Sr. Sales Engineer, Federal at RedSeal, explores prioritizing your risk mitigation with RedSeal.

Leveraging the Tools You Already Have

Sometimes, you just need help understanding what you already have the ability to do…

Often while walking with customers along their RedSeal journeys, they’ll ask me, “Hey, what’s this Risk tab?”…

To prepare them for the coming screen of boxes of different colors and sizes, I preface the conversation by saying, “This might look intimidating at first, but I promise it’s not. It will make more sense shortly.” …

I’ll first take a brief detour to the Vulnerabilities tab in RedSeal and reiterate how on this tab, you’re essentially looking at the vulnerabilities in your environment one at a time. For any selected vulnerability, you’re able to see the related Host Count in the top frame, as well as the actual number of instances in the bottom frame (these counts may differ if the vulnerability in question can affect a host on more than one port).

Next, I’ll move over to the Risk tab and explain that by way of contrast, each of the boxes of different colors and sizes on the Risk map represents one of the hosts in your network. You can select any host and get related details in the bottom frame, including the vulnerabilities on that host.

But *why* are they all different colors and sizes?

The key to understanding the Risk Map layout is to click on Risk Map Controls on the left-hand side. Here you’ll be shown a series of drop-down menus, each with multiple options, which dictate how the host boxes appear, as well as how they’re grouped.

With this foundation laid, I explain that the main use case of the Risk tab is determining Mitigation Priority according to YOUR specific RedSeal topology. Say, for example, that someone new to your patching team, responsible only for Campus hosts, is sitting next to you while you show them RedSeal's capabilities. After a brief detour to Maps & Views to show them a RedSeal topology map that includes a Campus area, I might go back to the Risk tab and make this distinction: a simple Risk view may be perceived as overwhelming if a fair number of vulnerabilities across your ENTIRE network need to be patched. If you INSTEAD manipulate the Risk Map Controls (and save the resulting layout) to display a Topology-based Mitigation Priority View, the host(s) of concern for the Campus portion of your network can easily be seen. This can be done via the following drop-down menu selections: Group: First By Topology, Then By Primary Subnet; Appearance: Color By Downstream Risk, Size By Risk.

At this point, a customer's wheels usually start turning, and ideas come forth on how to make use of these concepts in THEIR RedSeal model and increase its value.

Interested in how RedSeal can help your team? Click here to set up a demo or an introductory call.  

Tales from the Trenches: Vol 3 — Security Operations and Network Operations are always at odds. Or are they?


In this edition of the series, Brad Schwab, Senior Security Solutions Consultant, tackles the potential friction between two departments with RedSeal.

Security Operations and Network Operations are always at odds. Or are they?

Put with the greatest technical brevity, you could describe the two areas as:

  • Security Operations (SecOps) is about limiting where network traffic goes. SecOps teams are also usually responsible for vulnerability scanners
  • Network Operations (NetOps) is about uninterrupted, fast network traffic flow

As you can see, these departments could easily be at odds – one is the brakes, the other the throttle. So, yes, they usually are at odds. Everything one wants can easily create work for the other, resulting in a back-and-forth pendulum of requests. SecOps to NetOps: "I need these ports shut down, they are creating security exposure…" NetOps to SecOps: "Sure, you deal with the backlash…" Outcome, Finance to NetOps: "I can't print paychecks…" NetOps emails SecOps the Finance department email and goes dark… Net effect: neither department likes the other…

How does this involve RedSeal, you may ask? RedSeal is in the unique position to work with both SecOps and NetOps, helping both realize their operational goals and giving them visibility into outcomes beforehand so that situations like the above don't happen. This creates a positive working relationship between the teams.

Working with a large health organization, we were at the end of a Proof of Concept and were having a meeting with the CISO and the heads of SecOps, "Wendy," and NetOps, "Bill." We had been having problems with the NetOps people not providing the access required to gather device configuration files across the entire network fabric. NetOps claimed they didn't have the time. On the sly, we also heard that they thought providing us with the ability to gather the configs would only create work for them.

During this meeting, Wendy was talking about the mountain of scan data she had and said that prioritization was key to her work. I demonstrated how RedSeal could prioritize her patching routine(s) based on network access. Which, wait for it, requires the network device configuration files. Knowing that SecOps and NetOps were not friends, I decided to see if I could get a dialogue going and, at the same time, incent NetOps to get us the access we required to gather the config files. So, I posed a question to Wendy of SecOps: "Wendy, are you scanning the entire network?" She said, "Yes." I asked how she knew she could reach every host on the entire network. She said, "Bill told me I could." I then asked Bill, "Is that what goes in the upcoming audit – 'Bill told me I could scan the entire network'?" Before he could reply, I asked whether he wouldn't rather have actual documentation showing that Wendy's network reach was complete and that entire network scans could and did take place.

That is when the CISO chimed in and said, "Yes, we need that. How do we get that?" I explained that if I had all the config files, I could provide the CISO with her required audit documentation, which would eliminate Bill's manual effort to supply the always-requested "ACLs of all devices, not just the edge." In addition, I could prioritize Wendy's mitigation procedures and show the actual trending decrease in exposure as her team worked through the tickets. What followed was an hour-long discussion of how RedSeal could provide value, focus tasks, and reduce effort for both NetOps and SecOps, plus provide progress reporting to the CISO.

For the rest of the deployment, NetOps was always willing to listen and responded quickly to requests. The final outcome was that both teams, SecOps and NetOps, embraced the RedSeal Secure Methodology of Discover, Investigate, and Act.


Tales from the Trenches: Vol 2 — They have access to WHAT?!


In this edition of the series, Nate Cash, Senior Director, Federal Professional Services / Director of Information Security at RedSeal, looks at a unique application of RedSeal: eliminating potential threats before they can happen.

They have access to WHAT?!

I'm always surprised at the new use cases we come up with on site with RedSeal. There is a lot of information in a customer's environment that allows us to answer questions pretty easily, if you know where to look. One Monday morning as I showed up to the office, before I was able to grab coffee, a SOC analyst stopped me at the door to ask a very simple question: "We have a bunch of site-to-site VPNs with a few business partners. What can they access?"

On the surface this seems like a simple request. "How quickly do you need this information?" I responded. "Last Saturday." Suddenly the caffeine rush I needed seemed a long way off. It turns out one of the customer's business partners had been breached over the weekend and had notified the customer of the potential exposure. The SOC had been manually mapping out the configs and drawing the paths on the whiteboard when I came in.

Staring at the board, I thought, "There has to be a better way." That's when the eureka moment happened. I fired up RedSeal and found the devices that led to the business partners. In order to map across the VPN tunnels, we needed the config files from the other end. Of course, the business partners were not going to give those up, so I took the configs from our end, reversed the ACLs, changed the IPs based on the tunnel configs, and reimported them into RedSeal.
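
The ACL-reversal trick can be sketched in a few lines. This is a hypothetical illustration, not RedSeal functionality: it assumes a simplified "action protocol source destination" rule format and simply swaps the source and destination fields to approximate the partner's side of the tunnel.

```python
def reverse_acl(lines):
    """Swap src/dst fields in simplified 'action proto src dst' rules."""
    reversed_rules = []
    for line in lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip anything that doesn't match the toy format
        action, proto, src, dst = parts
        reversed_rules.append(f"{action} {proto} {dst} {src}")
    return reversed_rules

# Rules from our end of the tunnel (addresses are made up).
our_side = [
    "permit tcp 10.1.1.0/24 192.168.5.10/32",
    "deny   ip  10.1.1.0/24 192.168.0.0/16",
]
for rule in reverse_acl(our_side):
    print(rule)
```

Real ACLs also carry ports, wildcards, and object groups, so the actual exercise involved hand-editing, but the core idea is just this direction swap.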

Ten minutes later we were able to answer the question, "What does this business partner have access to?" The answer: one server in the data center, on three ports. But that one server had access to over 20 other servers, which increased the downstream risk. This was a fun exercise where we got to see the power of RedSeal and how it can be used to quantify the real risk to the organization and reduce it by putting controls into place.

Once the incident was contained, we decided to go through and hand-jam the rest of the business partners' VPN configurations into the model so the SOC would have this information in the event another partner was compromised. After writing up the configs and placing them into the model, we found a couple of business partners with configurations that allowed any traffic across, on any port.

Firewall engineers know that sometimes during troubleshooting they'll configure a device to allow anything across, to verify whether the firewall is blocking an application. My theory is that these two partners were having issues with traffic going across or with getting the VPN tunnel up, put these rules in as placeholders, probably around two or three A.M., and when things "worked" meant to come back later to fix them, and never got the time.

But this exercise allowed us to find a major security misconfiguration. If one of those two business partners with the 'allow any traffic' rules had been the one compromised over the weekend, this story would have had a much different outcome. In security, having a complete picture of your environment is key to being secure. What you don't know can hurt you.


Tales from the Trenches: Vol 1 — In security, consistency is key.


The first in this series is by Nate Cash, Senior Director, Federal Professional Services / Director of Information Security at RedSeal.

In security, consistency is key.

Once, at a customer site, while going through the install of RedSeal, we were reviewing the hardening standards. I clicked on a couple of configurations to start showing how we could go about setting up best practice checks, and inadvertently pulled up a device which had not been updated in over five years. The customer was shocked; this was one of the many times I have had to stop mid-sentence while the person I was working with reached out to someone to "fix the problem."

The problem is not that the device had not been updated, but that somehow their process missed it. And this device was just one of many. The first thing we did with RedSeal was develop a set of custom checks to see how many devices passed or failed the latest hardening standard. Once set, we started data collections. Within 15 minutes we saw that 30% of the devices were not running their latest hardening standards.

That 30% of their network devices were not using the latest encryption, had management ACLs set to an old subnet (now their guest subnet), ran inconsistent SSH versions, still had telnet enabled, and in some cases pointed to old RADIUS servers, falling back to local accounts. Luckily their firewall blocked the guest subnet from anything internal, but their network management tool still couldn't access 100% of the devices.
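
Custom checks like these boil down to simple pass/fail rules run against each device's configuration text. As a rough sketch of the idea (the check names, config lines, and RADIUS address below are illustrative, not the customer's actual standard, and real RedSeal checks are configured in the product):

```python
import re

# Each check takes raw config text and returns True (pass) or False (fail).
CHECKS = {
    "telnet disabled": lambda cfg: "transport input telnet" not in cfg,
    "ssh version 2": lambda cfg: re.search(r"^ip ssh version 2", cfg, re.M) is not None,
    "no old radius": lambda cfg: "radius-server host 10.9.9.9" not in cfg,  # hypothetical retired server
}

def run_checks(config_text):
    """Run every hardening check against one device's config."""
    return {name: check(config_text) for name, check in CHECKS.items()}

sample_config = """\
ip ssh version 2
line vty 0 4
 transport input telnet
"""
results = run_checks(sample_config)
failed = [name for name, ok in results.items() if not ok]
print(f"{len(failed)} of {len(CHECKS)} checks failed: {failed}")
```

Run across every collected config, the same handful of rules produces exactly the kind of pass/fail percentages described above.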

Fixing the underlying problem, not the result of the problem.

With the latest hardening standards in hand, the network engineers got an emergency change request and started logging in to update their configurations. Each device required firmware upgrades, followed by configuration changes. While this was a big undertaking, each day that passed we could grab the numbers and see trending on remediation and report to management.

Once we got the model complete, we found a couple of firewalls that blocked their network management solution from accessing 100% of their network devices. Once the customer opened their firewall, the networking team could start pushing the config to all of the devices. 

As luck would have it, the customer was undergoing a review of a new hardening standard at the time. We took the new standard, and I showed them how to create checks for each of the configuration points. At the end of the day, the customer had 75 individual checks for each of the network devices. Upon data collection, RedSeal runs those checks against each of the configurations automatically, so we could ensure that all network devices passed all of the checks required for their baseline configuration.

This customer had a unique process where devices about to be deployed were plugged into the network and received a static DHCP address. Their network management tool would push the baseline config, then the engineers would log in the next day to assign the interface IPs per the documentation. The rest of the hardened config was automagically configured by the tool. With this in hand, we were able to add the 'staging area' to RedSeal. With every data collection, using those same checks, we noticed random inconsistencies where some devices would get the whole config and others did not.

Using RedSeal and custom checks, this customer is able to push configs, then double-check that the config took properly before deploying out into their network. They gained better visibility into whether all of their devices were hardened, became more confident that the automation was consistently driving the results they wanted, had a double check on that automation, and significantly reduced their risk, just by checking the configs of their network devices.


InfoSeCon Roundup: OT Top of Mind with Many

The RedSeal team recently attended the sold-out ISSA Triangle InfoSeCon in Raleigh. It was energizing to see and talk to so many people in person. People were excited to be at in-person events again (as were we!) and we had some great discussions at our booth on how RedSeal can help customers understand their environment and stay one step ahead of threat actors looking to exploit existing vulnerabilities. 

Visibility was a top topic at the booth, but I want to focus this blog on our panel discussion, titled "OT: Still a Security Blindspot," which, of course, has visibility as a core need. While I was expecting this topic to be of interest to quite a few attendees, I was not expecting the great reception we received, nor the number of people who stayed behind to talk to us. We had so many great questions that we have decided to host a webinar with the same title, where we will share some of the same information and expand it to address additional topics based on attendee interest.

I want to highlight some key points we shared during the session:

We started the session talking about OT networks – what they are, and how they exist across all verticals in some way, shape or form (it is not just manufacturers!).  We did share a life sciences customer example to close the session (think about all of the devices in a hospital connected to the Internet – and what could happen if they were hacked!).

Then we got into the “risk” part of the discussion. We shared how OT networks are targeted via vulnerable devices and highlighted actual consequences from some real-life examples. We focused on the Mirai botnet and potential motives from threat actors:  

  • Botnets can simply be annoying or attention-getting, interrupting business
  • Ransomware has traditionally been for cash or equivalent, but now can also be used for deliberate, intentional damage
  • Nation-states – with espionage as a goal – aim to disrupt or compromise organizations or governments

Our panelists from Medigate by Claroty provided their perspective based on discussions with their customers. They explained that for many organizations, OT is an afterthought when assessing security risks. The audience appeared to agree with this assessment. Unfortunately, this means that OT has often been the quickest way to cause serious damage at an organization. They have seen this within the 1,500+ hospitals they work with. They shared a couple of examples which clearly demonstrated what can happen if you have vulnerable OT devices:

  • Turning off the air conditioning at a hospital in Phoenix, AZ during the summer shut EVERYTHING down
  • Operating rooms must be kept at very specific temperatures and humidity levels. A customer in CA whose dedicated O.R. HVAC was impacted ended up losing $1M in revenue PER DAY while it was down

One of the reasons the security risks of OT devices have not been addressed as well as they should be is that OT devices have typically been managed by facilities organizations, which do not have the training and expertise needed for this task. We did spend some time talking about who "owns" managing and securing these OT devices. Luckily, there is growing awareness of the need for visibility into both OT and IT devices as part of an overall security strategy, and there are emerging solutions to address this need.

We also spent time discussing how complex managing OT/IT becomes when companies have distributed sites or complex supply chains. The Medigate panelists shared how some of their Life Sciences customers have sites with different OT network topologies, and some even have a mismatch between the topology of the individual site and its production logic. This means there are usually multiple redundant, unmonitored connections at each site, which provides threat actors with numerous opportunities to penetrate the OT network and, once inside, to move laterally within it. 

This led to a conversation among the panelists on how to address IT/OT visibility needs:

  1. First step: gain visibility into where OT is, then integrate it with existing IT security infrastructure
  2. Ongoing: alignment and collaboration between and across IT and OT security, as well as with a range of third-party vendors, technicians, and contractors
  3. End goal: enable all teams (facilities, IT, security, networking, etc.) to speak the same language from the same source of truth

The above is just a brief review of some of the topics covered during the panel discussion. If you are interested in hearing more about how to address IT/OT visibility needs, how customers are addressing them, or finding out more about RedSeal, please visit our website or contact us today!

IT/OT Convergence

Operational Technology (OT) systems have decades of planning and experience to combat threats like natural disasters – forces of nature that can overwhelm the under-prepared, but which can be countered in advance using well thought out contingency plans. Converging IT with OT brings great efficiencies, but it also sets up a collision between the OT world and the ever-changing threats that are commonplace in the world of Information Technology. 

A Changing Threat Landscape 

The security, reliability, and integrity of the OT systems face a very different kind of threat now – not necessarily more devastating than, say, a flood along the Mississippi, or a hurricane along the coast – but more intelligent and malicious. Bad actors connected over IT infrastructure can start with moves like disabling your backup systems – something a natural disaster wouldn’t set out to do. Bad actors are not more powerful than Mother Nature, but they certainly are more cunning, and constantly create new attack techniques to get around all carefully planned defenses. This is why the traditional strategies have to change; the threat model is different, and the definition of what makes a system “reliable” has changed. 

In the OT world, you used to get the highest reliability using the oldest, most mature equipment that could stay the same, year after year, decade after decade. In the IT world, this is the worst possible situation – out-of-date electronics are the easiest targets to attack, with the most known vulnerabilities accumulated over time. In the IT world of the device where you are reading this, we have built up an impressive and agile security stack in response to these rapidly evolving threats, but it all depends on being able to install and patch whatever software changes we need as new Tactics, Techniques, and Procedures (TTPs) are invented. That is, in the IT world, rapid change and flexible software are essential to the security paradigm.

Does this security paradigm translate well to the OT world?

Not really. It creates a perfect storm for those concerned with defending manufacturing, energy, chemical, and related OT infrastructure. On the one hand, the OT machinery is built for stability and cannot deliver the "five nines" reliability it was designed for if components are constantly being changed. On the other hand, we have IT threats that can now reach into the OT fabric as all the networks blend, but our defense mechanisms against such threats require exactly this rapid pace of updating to block the latest TTPs! It's a Catch-22 situation.

The old answer to this was the air gap – keep OT networks away from IT, and you can evade much of the problem. (Stuxnet showed even this isn’t perfect protection – humans can still propagate a threat across an air gap if you trick them, and it turns out that this isn’t all that hard to do.) Today, the air gap is gone, due to the great economic efficiencies that come from adding modern digital communication pathways to everything we might need to manage remotely – the Internet of Things (IoT).

How do we solve this Catch-22 situation?

So, what can replace the old air gap? In a word, segmentation – it’s possible, even in complex, blended, IT/OT networks to keep data pathways separate, just as it’s essential for the same reason that we keep water pipes and sewer pipes separate when we build houses. The goal is to separate vulnerable and critical OT systems so that they can talk to each other and be managed remotely, but to open only these pathways, and not fall back to “open everything so that we can get the critical traffic through”. Thankfully, this goal is achievable, but the bad news is it’s error prone. Human operators are not good at maintaining complex firewall rules. When mistakes inevitably happen, they fall into two groups:

  1. errors that block something that is needed
  2. errors that leave something open

The first kind of error is immediately noticed, but sadly, the second kind is silent, and, unless you are doing something to automatically detect these errors and gaps, they will accumulate, making your critical OT fabric more and more fragile over time. 

One way to combat this problem is to have a second set of humans – the auditors – review the segmentation regularly. Experience shows, though, that this just propagates the problem – no human beings are good at understanding network interactions and reasoning about complex systems. This is, however, a great job for computers – given stated goals, computers can check all the interactions and complex rules in a converged, multi-vendor, multi-language infrastructure, and make sure only intended communication is allowed, no more and no less.
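
At its core, the automated check described above reduces to a set comparison: compute what the network actually allows, compare it with the stated policy, and report both error types. A toy sketch of that idea, with made-up zone names and flows (a real tool derives the "actual" set from the device configs themselves):

```python
# Intended segmentation policy: which (source, destination) flows should exist.
intended = {("it-mgmt", "ot-plc"), ("historian", "ot-plc")}

# Pretend this set was computed from the real firewall and router configs.
actual = {("it-mgmt", "ot-plc"), ("guest-wifi", "ot-plc")}

blocked_but_needed = intended - actual   # error type 1: noticed immediately
open_but_unintended = actual - intended  # error type 2: silent drift

print("Needed but blocked:", sorted(blocked_but_needed))
print("Open but unintended:", sorted(open_but_unintended))
```

The hard part in practice is computing the "actual" set correctly across a converged, multi-vendor fabric; the comparison itself is the easy, mechanical step that humans nonetheless tend to get wrong.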

In summary, IT/OT convergence is inevitable, given the economic benefits, but it creates an ugly Catch-22 for those responsible for security and reliability – it's not possible to be both super-stable and agile at the same time. The answer is network segmentation, not the old air-gapped approach. The trouble with segmentation is that it's hard for humans to manage, maintain, and audit without gaps creeping in. Finally, the solution to this Catch-22 is to apply automation – using software such as RedSeal's to automatically verify your segmentation and prevent the inevitable drift, so that OT networks are as prepared for a hacker assault as they are for a natural disaster.

CNAPP: The Future of Cloud Security

The cloud has arrived. According to data from the Cloud Security Alliance (CSA), 89% of organizations now host sensitive data or workloads in the cloud. But increased use doesn’t necessarily mean better protection: 44% of companies feel “moderately” able to protect this data, and 33% say they’re only “slightly” confident in their defense.

With cloud networks growing exponentially, businesses need a new way to handle both existing and emerging threats. Cloud-native application protection platforms (CNAPP) offer an integrated, end-to-end security approach that can help companies better manage current conditions and prepare for future attacks.

What is CNAPP?

As noted by research firm Gartner in their August 2021 Innovation Insight for Cloud-Native Application Protection Platforms report (paywall), CNAPP is “an integrated set of security and compliance capabilities designed to help secure and protect cloud-native applications across development and production.”

The goal of CNAPP solutions is to protect cloud-based applications across their entire lifecycle, from initial deployment and integration to regular use and maintenance to eventual end-of-life. Rather than taking a point-based approach to security that sees companies adopting multiple solutions which may (or may not) work in tandem to solve security issues, CNAPP looks to provide a single user interface and a single source of truth for all cloud-related security processes.

In effect, this approach prioritizes the centralization of cloud security processes to help companies better manage disparate applications and services.

Why Is Security in the Cloud so Challenging?

Effective security relies on effective attack path analysis – the categorization and protection of pathways. In a traditional infrastructure model, these pathways were relatively simple, stretching from internal resources to Internet applications and back.

Highways offer a simple analogy. Say that your resources are in San Francisco, California, and the Internet is in San Jose. Different highways offer different paths to the same destination. Installing checkpoints along these highways, meanwhile, makes it possible for companies to ensure that cars heading into San Francisco or back to San Jose have permission to do so. If they don’t, they’re not allowed to proceed.

The cloud significantly complicates this process by adding a host of new destinations and attack pathways, both on the ground and in the air. Where companies might once have managed 50 potential points of compromise, in the cloud this number could be 5,000 or 50,000 – and is constantly growing. Plus, it is 100x easier to misconfigure these points of compromise.

As a result, there are both more vehicles traveling and more routes for them to travel, making it far more complicated to see and secure the cloud. This, in turn, increases the risk of traffic getting into or out of your network without the proper permissions, resulting in everything from lateral compromise to ransomware payloads to advanced persistent threats (APTs).

Clouds also create a challenge when it comes to third-party protection. While cloud-native applications are evolving to meet new enterprise requirements, well-known or specialized third-party solutions are often tapped for additional security controls or to provide enhanced functionality. In our traffic example, this means that different checkpoints are managed by different vendors that may not always speak the same language or use the same metrics. As a result, it's possible for one of these checkpoints to report a false positive or negative, in turn putting your local cloud environment at risk.

How Can CNAPP Help Companies Address Cloud Security Challenges?

CNAPP solutions make it possible to centralize security management for greater visibility and control. According to Gartner, this is accomplished via five key components:

  1. Infrastructure as Code (IAC) Scanning
    IAC scanning helps companies identify potential issues with distributed configurations across their network. This is especially critical as infrastructure provisioning becomes more and more automated. Without the ability to regularly scan for potential weak points, IAC becomes a potential liability.
  2. Container Scanning
    Containers are a critical part of cloud computing. By making it possible to package applications in a platform- and service-agnostic framework, it’s easy for companies to deploy new services without rebuilding code from the ground up. The caveat? Containers that have been compromised present serious risks. As a result, container scanning is critical.
  3. Cloud Workload Protection Platforms (CWPPs)
    CWPPs are designed to discover workloads within both cloud and on-premises infrastructure and then perform vulnerability assessments to determine if these workloads pose potential risks based on current policies and if any actions are required to remediate this risk.
  4. Cloud Infrastructure Entitlement Management (CIEM)
    CIEM tools help handle identity and access across the cloud. By automatically granting, revoking, and administering access to cloud services, the judicious application of CIEM solutions makes it possible for companies to adopt a least-privilege approach to access.
  5. Cloud Security Posture Management (CSPM)
    CSPMs automate the process of identifying and remediating risk across IaaS, PaaS, and SaaS deployments in enterprise clouds. These tools provide the data-driven foundation for risk visualizations and assessments that empower effective incident response.
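To make the first of these components concrete, the snippet below sketches the kind of check an IaC scanner performs: flagging security-group rules that expose sensitive ports to the whole internet. The rule schema, field names, and port list here are illustrative assumptions, not any vendor's actual format or API.

```python
# A minimal sketch of an IaC scanning check: flag Terraform-style
# security-group rules that allow unrestricted ingress on risky ports.
# The schema below is a simplified, hypothetical example.

RISKY_PORTS = {22, 3389}  # SSH and RDP should not face the internet

def scan_ingress_rules(rules):
    """Return a list of findings for overly permissive ingress rules."""
    findings = []
    for rule in rules:
        open_to_world = "0.0.0.0/0" in rule.get("cidr_blocks", [])
        if open_to_world and rule.get("from_port") in RISKY_PORTS:
            findings.append(
                f"Port {rule['from_port']} open to the internet "
                f"in rule '{rule.get('name', 'unnamed')}'"
            )
    return findings

rules = [
    {"name": "web", "from_port": 443, "cidr_blocks": ["0.0.0.0/0"]},
    {"name": "admin-ssh", "from_port": 22, "cidr_blocks": ["0.0.0.0/0"]},
]
findings = scan_ingress_rules(rules)
print(findings)
```

Run against every configuration change before provisioning, a check like this catches weak points before automation replicates them across the environment.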

Working together, these solutions make it possible for companies to see what’s happening in their cloud network environments, when, and why, in turn allowing IT teams to prioritize alerts and take immediate action. Consider the RedSeal Stratus CNAPP solution, which provides companies with a “blueprint map” of their entire cloud framework to identify where resources are located and full attack path analysis to identify where they are exposed.

In the context of our highway example, RedSeal Stratus makes it possible to map every possible path and checkpoint taken, in addition to providing information about each exposed resource at risk in San Francisco and who can get to them within minutes. This makes it possible to assess the net effective reachability of all aspects of your cloud and pinpoint areas that require specific action.

What Comes Next for CNAPP?

Put simply, CNAPP is the future of cloud security, but it's not a monolithic, one-size-fits-all solution. Given the rapidly changing scope and nature of cloud services, CNAPP solutions won't be one-vendor affairs but rather a consolidation of differing vendor specialties under a unified platform model that provides single-pane-of-glass visibility for users.

Moving forward, companies should expect an increasing focus on the data residing in resources as the core component of CNAPP. This includes not only how resources are accessible and what permissions they carry, but also positively identifying where they're located, what they're doing, who is accessing them, what risks they pose, and how they interact with other services and solutions both on-premises and in the cloud.

CNAPP is coming of age. Make sure you're ready for the next generation of cloud security with RedSeal.

Cyber Insurance Isn’t Enough Anymore

The cyber insurance world has changed dramatically.

Premiums have risen significantly, and insurers are placing more limits on covered items. Industries like healthcare, retail, and government, where exposure is high, have been hit hard. Many organizations have seen huge rate increases for substantially less coverage than in the past. Others have seen their policies canceled or been unable to renew.

In many cases, insurers are offering half the coverage amounts at a higher cost. For example, some insurers that had previously issued $5 million liability policies have now reduced amounts to $1 million to $3 million while raising rates. Even with reduced coverage, some policy rates have risen by as much as 300%.

At the same time, insurers are leaving the field. Big payoffs in small risk pools can devastate profitability for insurers. Many insurers are reaching the break-even point where a single covered loss can wipe out years of profits. In fact, several major insurance companies have stopped issuing new cybersecurity insurance policies altogether.

This is due in part to incidents like the recent Merck legal victory, which forced a $1.4B payout stemming from the NotPetya malware attack. According to Fitch Ratings, more than 8,100 cyber insurance claims were paid out in 2021, the third straight year that claims increased by at least 100%. Payments from claims jumped 200% annually in 2019, 2020, and 2021 as well.

Claims are also being denied at higher rates. With such large amounts at stake, insurers are looking more closely at an organization’s policies and requiring proof that the organization is taking the right steps to protect itself. Companies need to be thinking about better ways to manage more of the cyber risks themselves. Cyber insurance isn’t enough anymore.

Dealing with Ransomware

At the heart of all of this drama is ransomware. The State of Ransomware 2022 report from Sophos includes some sobering statistics.

Ransomware attacks nearly doubled in 2021 vs. 2020, and ransom payments are higher as cybercriminals are demanding more money. In 2020, only 4% of organizations paid more than $1 million in ransoms. In 2021, that number jumped to 11%. The average ransom paid by organizations in significant ransomware attacks grew by 500% last year to $812,360.

More companies are paying the ransom as well. Nearly half (46%) of companies hit by ransomware chose to pay despite FBI warnings not to do so. The FBI says paying ransoms encourages threat actors to target even more victims.

Even with cyber insurance, it can take months to fully recover from a ransomware attack, and an attack can cause significant damage to a company's reputation. Eighty-six percent (86%) of companies in the Sophos study said they lost business and revenue because of an attack. While 98% of cyber insurance claims were paid out, only four out of ten companies saw all of their costs paid.

There’s some evidence that cybercriminals are actively targeting organizations that have cyber insurance specifically because companies are more likely to pay. This has led to higher ransom demands, contributing to the cyber insurance crisis. At the same time, there’s been a significant increase in how cybercriminals are exacting payments.

Ransomware attackers are now often requiring two payments. The first is for providing the decryption key to unlock encrypted data. A separate payment is then demanded in exchange for not releasing the data publicly. Threat actors are also hitting the same organizations more than once. When they know they'll get paid, they often attack a company a second or third time until the victim locks down its security.

Protecting Yourself from Ransomware Attacks

Organizations must deploy strict guidelines and protocols for security and follow them to protect themselves. Even one small slip-up in following procedures can result in millions or even billions of dollars in losses and denied claims.

People, Processes, Tech, and Monitoring

The root cause of most breaches and ransomware attacks is a breakdown in processes, allowing an attack vector to be exploited. This breakdown often occurs because there is a lack of controls or adherence to these controls by the people using the network.

Whether organizations decide to pay the price for cyber insurance or not, they need to take proactive steps to ensure they have the right policies in place, have robust processes for managing control, and train their team members on how to protect organizational assets.

Organizations also need a skilled cybersecurity workforce to deploy and maintain protection along with the right tech tools.

Even with all of this in place, strong cybersecurity demands continuous monitoring and testing. Networks are rarely stable. New devices and endpoints are added constantly. New software, cloud services, and third-party solutions are deployed. With such fluidity, it’s important to continually identify potential security gaps and take proactive measures to harden your systems.

Identifying Potential Vulnerabilities

One of the first steps is understanding your entire network environment and potential vulnerabilities. For example, RedSeal’s cloud cybersecurity solution can create a real-time visualization of your network and continuously monitor your production environment and traffic. This provides a clear understanding of how data flows through your network to create a cyber risk model.

Users get a Digital Resilience Score which can be used to demonstrate their network’s security posture to cyber insurance providers.

This also helps organizations identify risk factors and compromised devices. RedSeal also provides a way to trace access throughout an entire network, showing where an attacker can go once inside. This helps identify places where better segmentation is required to prevent unauthorized lateral movement.

In case an attack occurs, RedSeal accelerates incident responses by providing a more complete road map for containment.

Cyber Insurance Is Not Enough to Protect Your Bottom Line

With escalating activity and larger demands, cyber insurance is only likely to get more expensive and harder to get. Companies will also have to offer more proof about their security practices to be successful in filing claims or risk having claims denied.

For more information about how we can help you protect your network and mitigate the risks of successful cyber-attacks, contact RedSeal today.

The Unique Security Solution RedSeal Brings to Multi-Cloud and Hybrid Network Environments

One of the most significant benefits of implementing a multi-cloud strategy is the flexibility to use the right set of services to optimize opportunities and costs.

As public cloud service providers (CSPs) have evolved, they have started to excel in different areas. For example, programmers often prefer to use Azure because of its built-in development tools. However, they often want their apps to run in AWS to leverage the elastic cloud compute capability.

Adopting a multi-cloud strategy enables enterprises to benefit from this differentiation between providers and implement a "best of breed" model for the services they need to consume. They can also realize significant efficiencies, including cost-efficiency, by managing their cloud resources properly.

But multi-cloud solutions also bring their own challenges, from administration to security. This can be especially challenging for organizations that don't have deep experience and knowledge across all platforms and how they interconnect. It can sometimes seem like speaking a different language. For example, AWS has a term called VPC (virtual private cloud). Google Cloud Platform (GCP) uses that term, too, but it means something different. In other cases, the reverse is true: the terminology differs, but the underlying functions are the same.

Cloud provider solutions don’t always address the needs of hybrid multi-cloud deployments. Besides the terminology of AWS, Azure, GCP, Oracle’s OCI, IBM’s cloud, and others have different user interfaces. In a multi-cloud environment or hybrid environment, it can be far more difficult to secure than a single cloud.

Because of these challenges, organizations need a platform-independent solution that understands the language of each platform, can translate how your multi-cloud environments are configured and interconnected, and helps mitigate the risks.

How RedSeal Manages Multi-Cloud and Hybrid Cloud

At RedSeal, we provide the lingua franca (or bridge) for multi-cloud and on-premise networks. Security operations center (SOC) teams and DevOps get visibility into their entire network across vendors. RedSeal provides the roadmap for how the network looks and interconnects, so they can secure their entire IT infrastructure without having to be experts on every platform.

In most organizations using multi-cloud and hybrid cloud, however, network engineers and SOC teams are being asked to learn every cloud and on-prem resource and make sure they are all configured properly and secured. Many will deploy virtual cloud instances and use virtual firewalls, but as complexity rises, this becomes increasingly difficult to manage.

RedSeal is the only company that can monitor your connectivity across all of your platforms whether they are on-prem or in the cloud. This allows you to see network topology across all of your resources in one centralized platform.

Proactive Security

Proactive security is also complex. Most security offerings monitor in real-time to alert you when there’s an attack underway. That’s an important aspect of your security, but it also has a fundamental flaw. Once you recognize the problem, it’s already underway. It’s like calling 9-1-1 when you discover an emergency. Help is on the way, but the situation has already occurred.

Wouldn’t you like to know your security issues before an incident occurs?

RedSeal helps you identify potential security gaps in your network, so you can address them proactively. And, we can do it across your entire network.

Network Segmentation

Segmenting your network allows you to employ zero trust and application layer identity management to prevent lateral movement within your network. One of the most powerful things about RedSeal is that it provides the visibility you need to manage network segmentation.

It’s a simple concept, but it can also become incredibly complex — especially for larger companies.

If you're a small business with 100 employees, segmentation may be easy. For example, you segment your CNC machine so employees don't have admin rights to change configurations. In a mid-size or enterprise-level company, however, you can have a vast number of connections and endpoints. We've seen organizations with more than a million endpoints and connections that admins never even knew existed.

It’s only gotten more complex with distributed workforces, remote workers, hybrid work environments, and more third-party providers.

RedSeal can map it all and help you provide micro-segmentation for both east-west and north-south traffic.

Vulnerability Prioritization

Another area where RedSeal excels is by adding context to network vulnerability management. This allows you to perform true risk-based assessments and prioritization from your scanners. RedSeal calculates vulnerability risk scores that account for not only severity and asset value but also downstream risk based on the accessibility of vulnerable downstream assets.
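As a simplified illustration of this idea (not RedSeal's actual scoring algorithm), a risk score might weight raw severity by the value of the asset itself and by the value of downstream assets reachable from it. The formula and weights below are illustrative assumptions.

```python
# A toy risk-based prioritization score: severity weighted by asset
# value plus a downstream term for reachable assets. Illustrative
# only; not RedSeal's actual algorithm.

def risk_score(severity, asset_value, reachable_downstream_values):
    """severity: CVSS-like 0-10; asset_value: 0-10;
    reachable_downstream_values: values of assets an attacker could
    reach from this host after compromising it."""
    direct = severity * asset_value
    downstream = severity * sum(reachable_downstream_values) * 0.5
    return direct + downstream

# A medium-severity flaw on a low-value host that can reach two
# high-value downstream assets outranks a high-severity flaw on an
# isolated host.
isolated = risk_score(9.0, 5.0, [])
stepping_stone = risk_score(5.0, 2.0, [9.0, 8.0])
print(isolated, stepping_stone)
```

The point of the downstream term is exactly the scenario described above: a "low-to-medium" finding can outrank a critical one once reachability is taken into account.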

In many cases, RedSeal uncovers downstream assets that organizations didn't know were connected or vulnerable. These connections provided open threat surfaces but never showed up in alert logs, or appeared only as low-to-medium risks. So, SOC teams already overwhelmed with managing critical and high-risk alerts may never get to these hidden connections. Yet, the potential damage from threat actors exploiting these connections could be even greater than what showed up as high risk.

RedSeal shows you the complete picture and helps you prioritize vulnerabilities so you can focus on the highest risks in your unique environment.

Play at Your Best

In the late '90s, world chess champion Garry Kasparov faced off against Deep Blue, an IBM supercomputer, in a six-game exhibition. Kasparov won the first game. Deep Blue won the second, and the next three ended in draws. When Deep Blue won the final game and secured the overall victory, Kasparov was asked to concede that the best chess player in the world is now a computer.

Kasparov responded by saying that people were asking the wrong question. The question isn’t about whether the computer is better, but rather how do you play the best game of chess? Kasparov believes he lost not because the computer was better, but because he failed to perform at his best and see all of the gaps in his play.

You can’t afford to make mistakes in your security and beat yourself. By understanding your entire network infrastructure and identifying security gaps, you can take proactive measures to perform at your best.

RedSeal is the best move for a secure environment.

Learn more about how we can help protect your multi-cloud and hybrid cloud environments. Contact RedSeal today.

Zero Trust Network Access (ZTNA): Reducing Lateral Movement

In football, scoring a touchdown means moving the ball down the field. In most cases, forward motion starts the drive to the other team's end zone. For example, the quarterback might throw to a receiver or hand off to a running back. Network attacks often follow a similar pattern: Malicious actors go straight for their intended target by evaluating the digital field of play and picking the route most likely to succeed.

In both cases, however, there’s another option: Lateral movement. Instead of heading directly for the goal, attackers move laterally to throw defenders off guard. In football, any player with the ball can pass parallel or back down the field to another player. In lateral cyberattacks, malicious actors gain access to systems on the periphery of business networks and then move “sideways” across software and services until they reach their target.

Zero trust network access (ZTNA) offers a way to frustrate lateral attack efforts. Here’s how.

What is Zero Trust Network Access?

Zero trust network access is rooted in the notion of “need to know” — a concept that has been around for decades. The idea is simple: Access and information are only provided to those who need it to complete specific tasks or perform specific actions.

The term "zero trust" refers to the fact that trust is earned by users rather than given. For example, instead of allowing a user access because they provide the correct username and password, they're subject to additional checks that verify their identity before access is granted. The checks might include two-factor authentication, the type of device used for access, or the user's location. Even once identity has been confirmed, further checks are conducted to ensure users have permission to access the resource or service they're requesting.

As a result, the term "zero trust" is somewhat misleading. While catchy, it's functionally a combination of two concepts: least privilege and segmentation. Least privilege gives users the minimum privilege necessary to complete assigned tasks, while segmentation focuses on creating multiple digital "compartments" within the network. That way, even if attackers gain lateral access, only a small section of the network is compromised.
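The access decision described above can be sketched in a few lines: identity checks establish trust, and a least-privilege entitlement check still gates the specific resource. The field names and policy structure here are illustrative assumptions, not any product's actual API.

```python
# A minimal zero-trust access decision: verify identity through
# multiple checks, then apply least privilege per resource.
# Hypothetical field names for illustration only.

def allow_access(request, permissions):
    identity_ok = (
        request["password_valid"]
        and request["second_factor_valid"]
        and request["device_trusted"]
        and request["location_expected"]
    )
    if not identity_ok:
        return False
    # Even a fully verified user gets only the resources they need.
    return request["resource"] in permissions.get(request["user"], set())

perms = {"alice": {"billing-db"}}
req = {"user": "alice", "resource": "hr-db", "password_valid": True,
       "second_factor_valid": True, "device_trusted": True,
       "location_expected": True}
denied = allow_access(req, perms)   # identity verified, no entitlement
allowed = allow_access({**req, "resource": "billing-db"}, perms)
print(denied, allowed)
```

Note that the entitlement check runs even after identity is fully verified; that ordering is what separates zero trust from a traditional perimeter login.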

Adoption of ZTNA is on the rise, with 96 percent of security decision-makers surveyed saying that zero trust is critical for organizational success. Recent predictions also suggest that by 2023, 60 percent of enterprises will phase out their remote access virtual private networks (VPNs) and replace them with ZTNA frameworks.

The Fundamentals of ZTNA-Based Architecture

While the specifics of a ZTNA deployment will look different for every business, there are five fundamental functions of zero-trust network access:

1. Micro-segmentation: By dividing networks into multiple zones, companies can create fine-grained and flexible security policies for each. While segments can still "talk" to each other across the network, access requirements vary based on the type of services or data they contain. This approach reduces the ability of attackers to move laterally — even if they gain network access, they're effectively trapped in their current segment.

2. Mandatory encryption: By encrypting all communications and network traffic, it's possible to reduce the potential for malicious interference. Since attackers can't see what's going on inside business networks simply by eavesdropping, the scope and scale of their attacks are naturally limited.

3. The principle of least privilege: By ensuring that all users have only the minimum privilege required to do their job, evaluating users’ current permission level every time they attempt to access a system, application, or device, and removing unneeded permissions when tasks are complete, companies can ensure that a compromised user or system will not lead to complete network access.

4. Total control: By continually collecting data about potential security events, user behaviors, and the current state of infrastructure components, companies can respond immediately when security incidents occur.

5. Application-level security: By segmenting applications within larger networks, organizations can deploy application-level security controls that effectively frustrate attacker efforts to move beyond the confines of their initial compromise point.
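The micro-segmentation idea in point 1 can be modeled as an explicit allow-list of zone-to-zone flows, with everything else denied by default. The zone names and policy below are a toy illustration, not a real firewall configuration.

```python
# A toy model of micro-segmentation: each host belongs to a zone, and
# an explicit policy lists which zone-to-zone flows are permitted.
# Anything not listed is denied, trapping lateral movement inside the
# compromised segment. Names are illustrative.

ZONE_OF = {"web-1": "dmz", "app-1": "app", "db-1": "data"}
ALLOWED_FLOWS = {("dmz", "app"), ("app", "data")}  # default deny

def can_reach(src_host, dst_host):
    return (ZONE_OF[src_host], ZONE_OF[dst_host]) in ALLOWED_FLOWS

print(can_reach("web-1", "app-1"))  # permitted flow
print(can_reach("web-1", "db-1"))   # an attacker in the DMZ cannot
                                    # jump straight to the data zone
```

The default-deny posture is the key design choice: an attacker who compromises the web tier still has to traverse the application tier, giving defenders a second chance to detect and contain the intrusion.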

Best Practices to Tackle Risk with ZTNA

When it comes to network security and lateral compromise, businesses and attackers are playing by the same rules, but in many cases, malicious actors are playing in a different league. To follow our football analogy, it’s as if security teams are playing at a high-school level while attackers are in the NFL. While the plays and the objectives are the same, one team has a distinct advantage in terms of size, speed, and skill.

ZTNA can help level the playing field — if it’s correctly implemented. Here are three best practices to make it work:

1. Implement Automation

Knowing what to segment and where to create segmentation boundaries requires a complete inventory of all desktops, laptops, mobile devices, servers, ports, and protocols on your network. Since this inventory is constantly changing as companies add new cloud-based services, collecting key data is no easy task. Manual processes could take six months or more, leaving IT teams with out-of-date inventories.

Automating inventory processes can help businesses create a functional model of their current network that is constantly updated to reflect changes, allowing teams to define effective ZTNA micro-segmentation.

2. Prioritize Proactive Response

Many businesses now prioritize the collection of “real-time” data. The problem? Seeing security event data in real-time means that incidents have already happened. By capturing complete network visibility, companies can prioritize proactive responses that limit overall risk rather than requiring remediation after the fact.

3. Adapt Access as Required

Security isn’t static. Network configurations change and evolve, meaning that ZTNA must evolve in turn. Bolstered by dynamic visibility from RedSeal, businesses can see where lateral compromise poses risk, where segmentation is working to prevent access, and where changes are necessary to improve network security.

Solving for Sideways Security

Security is a zero-sum game: If attackers win, companies lose. But the reverse is also true. If businesses can prevent malicious actors from gaining lateral access to key software or systems, they come out ahead. The challenge? One-off wins aren’t enough; businesses need consistent control over network access to reduce their total risk.

ZTNA can help reduce the sideways security risks by minimizing available privilege and maximizing network segmentation to keep attackers away from high-value data end zones and instead force functional turnovers to network security teams.

Download our Zero Trust Guide today to get started.