
Zero Trust Network Access (ZTNA): Reducing Lateral Movement

In football, scoring a touchdown means moving the ball down the field. In most cases, forward motion starts the drive to the other team’s end zone. For example, the quarterback might throw to a receiver or hand off to a running back. Network attacks often follow a similar pattern: Malicious actors go straight for their intended target by evaluating the digital field of play and picking the route most likely to succeed.

In both cases, however, there’s another option: Lateral movement. Instead of heading directly for the goal, attackers move laterally to throw defenders off guard. In football, any player with the ball can pass it sideways or backward to another player. In lateral cyberattacks, malicious actors gain access to systems on the periphery of business networks and then move “sideways” across software and services until they reach their target.

Zero trust network access (ZTNA) offers a way to frustrate lateral attack efforts. Here’s how.

What is Zero Trust Network Access?

Zero trust network access is rooted in the notion of “need to know” — a concept that has been around for decades. The idea is simple: Access and information are provided only to those who need them to complete specific tasks or perform specific actions.

The term “zero trust” refers to the fact that trust is earned by users rather than given. For example, instead of allowing users access simply because they provide the correct username and password, they’re subject to additional checks that verify their identity and earn them the trust needed for access. These checks might include two-factor authentication, the type of device used for access, or the user’s location. Even once identity has been confirmed, further checks are conducted to ensure users have permission to access the resource or service they’re requesting.
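To make these layered checks concrete, here’s a minimal sketch of a zero-trust access decision in Python. The factor names, trusted locations, and permission table are illustrative assumptions for this example, not any specific product’s API; a real deployment would pull them from an identity provider and a device-management system.

```python
from dataclasses import dataclass

# Illustrative policy data -- a real deployment would pull these from an
# identity provider and a device-management system.
TRUSTED_LOCATIONS = {"office-hq", "vpn-pop-eu"}
RESOURCE_PERMISSIONS = {
    "payroll-db": {"finance"},
    "build-server": {"engineering"},
}

@dataclass
class AccessRequest:
    user: str
    roles: set
    passed_mfa: bool
    device_managed: bool
    location: str
    resource: str

def authorize(req: AccessRequest) -> bool:
    """Deny by default; every factor must check out on every request."""
    if not req.passed_mfa:       # identity: a correct password alone is not enough
        return False
    if not req.device_managed:   # device: unmanaged endpoints are untrusted
        return False
    if req.location not in TRUSTED_LOCATIONS:  # context: where is the request from?
        return False
    allowed_roles = RESOURCE_PERMISSIONS.get(req.resource, set())
    return bool(req.roles & allowed_roles)     # authorization: least privilege

# Correct credentials alone aren't enough -- every check must pass.
print(authorize(AccessRequest("ana", {"finance"}, True, True, "office-hq", "payroll-db")))   # True
print(authorize(AccessRequest("ana", {"finance"}, True, False, "cafe-wifi", "payroll-db")))  # False
```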

As a result, the term “zero trust” is somewhat misleading. While catchy, it’s functionally a combination of two concepts: Least privilege and segmentation. Least privilege gives users the minimum privilege necessary to complete assigned tasks, while segmentation focuses on creating multiple digital “compartments” within the network. That way, even if attackers gain access, only a small section of the network is compromised.

Adoption of ZTNA is on the rise, with 96 percent of security decision-makers surveyed saying that zero trust is critical for organizational success. Recent predictions also suggest that by 2023, 60 percent of enterprises will phase out their remote access virtual private networks (VPNs) and replace them with ZTNA frameworks.

The Fundamentals of ZTNA-Based Architecture

While the specifics of a ZTNA deployment will look different for every business, there are five fundamental functions of zero trust network access:

1. Micro-segmentation: By dividing networks into multiple zones, companies can create fine-grained, flexible security policies for each (a minimal sketch follows this list). While segments can still “talk” to each other across the network, access requirements vary based on the type of services or data they contain. This approach reduces the ability of attackers to move laterally — even if they gain network access, they’re effectively trapped in their current segment.

2. Mandatory encryption: By encrypting all communications and network traffic, it’s possible to reduce the potential for malicious interference. Since attackers can’t see what’s going on inside business networks simply by eavesdropping, the scope and scale of their attacks are naturally limited.

3. The principle of least privilege: By giving all users only the minimum privilege required to do their jobs, re-evaluating permissions every time a user attempts to access a system, application, or device, and removing unneeded permissions once tasks are complete, companies can ensure that a single compromised user or system won’t lead to complete network access.

4. Total control: By continually collecting data about potential security events, user behaviors, and the current state of infrastructure components, companies can respond as quickly as possible when security incidents occur.

5. Application-level security: By segmenting applications within larger networks, organizations can deploy application-level security controls that effectively frustrate attacker efforts to move beyond the confines of their initial compromise point.
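As a hypothetical illustration of the micro-segmentation idea in item 1, the Python sketch below models a default-deny allow-list of zone-to-zone flows; anything not explicitly listed is blocked. The zone names and ports are invented for the example.

```python
# Default-deny zone matrix: only (source_zone, dest_zone, port) tuples listed
# here are allowed. Everything else is implicitly blocked, which is what traps
# an attacker inside whatever segment they initially compromise.
ALLOWED_FLOWS = {
    ("web", "app", 8443),  # web tier may call the app tier's API
    ("app", "db", 5432),   # app tier may query the database
}

def flow_permitted(src_zone: str, dst_zone: str, port: int) -> bool:
    """Return True only for explicitly allow-listed zone-to-zone flows."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

print(flow_permitted("web", "app", 8443))  # True: an approved path
print(flow_permitted("web", "db", 5432))   # False: no direct path from web to the database
```

Items 3 and 5 apply the same default-deny posture at the user and application level.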

Best Practices to Tackle Risk with ZTNA

When it comes to network security and lateral compromise, businesses and attackers are playing by the same rules, but in many cases, malicious actors are playing in a different league. To follow our football analogy, it’s as if security teams are playing at a high-school level while attackers are in the NFL. While the plays and the objectives are the same, one team has a distinct advantage in terms of size, speed, and skill.

ZTNA can help level the playing field — if it’s correctly implemented. Here are three best practices to make it work:

1. Implement Automation

Knowing what to segment and where to create segmentation boundaries requires a complete inventory of all desktops, laptops, mobile devices, servers, ports, and protocols on your network. Since this inventory is constantly changing as companies add new cloud-based services, collecting key data is no easy task. Manual processes could take six months or more, leaving IT teams with out-of-date inventories.

Automating inventory processes can help businesses create a functional model of their current network that is constantly updated to reflect changes, allowing teams to define effective ZTNA micro-segments.
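As one hedged illustration of what automated inventory collection can look like, this Python sketch polls AWS for EC2 instances with boto3 and timestamps each record. A real deployment would fan out across many more sources (on-premises scanners, other clouds, CMDBs); the region and field choices here are assumptions for the example.

```python
import datetime

import boto3  # AWS SDK; credentials and region come from the environment

def collect_ec2_inventory(region: str = "us-east-1") -> list[dict]:
    """Snapshot EC2 instances so the network model stays current."""
    ec2 = boto3.client("ec2", region_name=region)
    seen_at = datetime.datetime.now(datetime.timezone.utc).isoformat()
    inventory = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                inventory.append({
                    "instance_id": inst["InstanceId"],
                    "private_ip": inst.get("PrivateIpAddress"),
                    "state": inst["State"]["Name"],
                    "seen_at": seen_at,  # lets stale records be aged out later
                })
    return inventory

# Run on a schedule (say, every few minutes) instead of in a six-month manual sweep.
```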

2. Prioritize Proactive Response

Many businesses now prioritize the collection of “real-time” data. The problem? Seeing security event data in real time means that incidents have already happened. By capturing complete network visibility, companies can prioritize proactive responses that limit overall risk rather than requiring remediation after the fact.

3. Adapt Access as Required

Security isn’t static. Network configurations change and evolve, meaning that ZTNA must evolve in turn. Bolstered by dynamic visibility from RedSeal, businesses can see where lateral compromise poses risk, where segmentation is working to prevent access, and where changes are necessary to improve network security.

Solving for Sideways Security

Security is a zero-sum game: If attackers win, companies lose. But the reverse is also true. If businesses can prevent malicious actors from gaining lateral access to key software or systems, they come out ahead. The challenge? One-off wins aren’t enough; businesses need consistent control over network access to reduce their total risk.

ZTNA can help reduce these sideways security risks by minimizing available privilege and maximizing network segmentation, keeping attackers away from high-value data end zones and forcing turnovers to network security teams.

Download our Zero Trust Guide today to get started.

Improving Cloud Security With Segmentation And Automation

Forbes | February 12, 2021

by Mike Lloyd

As a security professional, I tried for several years to keep IoT devices out of my house. However, my anti-IoT crusade just isn’t working anymore. Why? Because, as I’ve discovered, you really have to go to extreme measures to find non-IoT devices for your home. Whether it’s an irrigation system for your lawn, a new alarm system or even solar panels for your roof, just about every home accessory now comes with a prominent IoT footprint.

Network Segmentation, Security and RedSeal

Over the last few decades, many network security architecture products have come to market, all with useful features to help secure networks. If we assume that all of these security products are deployed in operational networks, why do we still see so many leaks and breaches?

Some say the users are not leveraging the full capabilities of these products – which is true.

Others say the users are not fully trained on how to use the product. Also true, and probably why they’re not using the full capabilities of their products.

Instead, we might benefit from remembering a basic truism: We humans are lazy.

Most of us, if offered a button that simply says “fix,” will convince ourselves that it will fix any network problem. We’ll buy that button every day of the week.

Our belief in fix buttons has led to a situation where many of us aren’t following standard security practices to secure our networks. When a network is designed or when you inherit a network, there are some basic things that should be done.

One of the first things to do is isolate, or segment, your network. Back in the 1990s, network segmentation was done more for performance reasons than security. As we moved from hubs to large, switched networks, segmentation lost its performance rationale and our networks became flat, with less segmentation. Today, once attackers get in, they can run rampant through a whole enterprise.

If we take the time to say, “Let’s step back a second,” and group our systems based on the access they need, we can avoid much trouble. For instance, a web server most likely will need access to the internet and should be on a separate network segment, while a workstation should be in another segment, printers in another, IoT devices in one of their own, and so on.

This segmentation allows better control and visibility. If it’s thought out well enough, network segmentation can even reduce the number of network monitoring security products you need to deploy. You can consolidate them at network choke points that control the flow of data between segments rather than having to deploy them across an entire flat architecture. This will also help you recognize what network traffic should and should not be flowing to certain segments based on each segment’s purpose.
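As a hypothetical sketch of the kind of check a choke-point monitor can run, the Python below declares each segment’s purpose as a set of expected destination ports and flags any observed flow outside that set. The segment names, ports, and flows are invented for illustration.

```python
# Each segment's purpose, expressed as the destination ports we expect to see
# flowing into it. Anything else is worth an alert.
SEGMENT_EXPECTED_PORTS = {
    "web":      {80, 443},
    "printers": {631, 9100},  # IPP and raw printing
    "iot":      {8883},       # MQTT over TLS
}

def unexpected_flows(flows):
    """Yield observed flows that don't match the destination segment's purpose."""
    for src_ip, dst_segment, dst_port in flows:
        if dst_port not in SEGMENT_EXPECTED_PORTS.get(dst_segment, set()):
            yield (src_ip, dst_segment, dst_port)

observed = [
    ("10.0.1.5", "printers", 9100),  # a normal print job
    ("10.0.1.5", "printers", 22),    # SSH into the printer segment -- suspicious
]
for flow in unexpected_flows(observed):
    print("ALERT:", flow)
```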

This all seems to make sense, so why isn’t it done? In practice, network segmentation is usually implemented at the start. But business happens, outages happen, and administrators and network engineers are under enormous pressure to implement and fix things every day. All of this causes the network design to drift out of compliance. This drift can happen slowly or astonishingly fast. And changes may not get documented. Personnel responsible for making the changes always intend to document things “tomorrow,” but tomorrow another event happens that takes priority over documentation.

Network segmentation only works if you can continuously ensure that it’s actually in place and working as intended. It is usually the security team that has to verify it. But, as we all know, security and networking teams do not always have the best partnerships. The network team is busy providing availability and rarely has the time to go back and ensure security is functioning.

Even when security teams do check segmentation in large enterprises, it is a herculean effort. As a result, validating network segmentation is done only yearly, at best. We can see how automating the inspection of the network security architecture is a clear benefit.

RedSeal enables an automated, comprehensive, continuous inspection of your network architecture. RedSeal understands and improves the resilience of every element, segment, and enclave of your network. RedSeal works with your existing security stack and network infrastructure (including cloud and SDN) to automatically and continuously visualize a logical model of your “as-built” network.

RedSeal’s network modeling and risk scoring platform enables enterprise networks to be resilient to cyber events and network interruptions in an increasingly digital and virtualized world, and to overcome one of the main enemies of cybersecurity – human nature.

Micro-Segmentation: Good or Bad?

There’s a lot going on in virtual data centers. In security, we’re hearing many variations of the term “micro-segmentation.” (It originated with VMware, but has been adopted by other players, some of them adding top-spin or over-spin.)

We know what segmentation is. Every enterprise network practices segmentation between outside and inside, at least. Most aim to have a degree of internal segmentation, but I see a lot more planning than doing — unless an audit is on the line. Many networks have a degree of segmentation around the assets that auditors pay attention to, such as patient records and credit cards. There are organizations further up the security sophistication curve who have a solid zone-based division of their business, can articulate what each zone does and what should go on between them, and have a degree – at least some degree – of enforcement of inter-zone access. But these tend to be large, complex companies, so each zone tends to be quite large. It’s simple math – if you try to track N zones, you have to think about N² different relationships, and that number goes up fast: a dozen major zones already means 144 zone-to-zone relationships. Even well-staffed teams struggle to keep up with just a dozen major zones in a single network. That may not sound like a lot, but the typical access open between any two zones can easily exceed half a million communicating pairs. Auditing even one of those in full depth is a super-human feat.

Now along come the two horses pulling today’s IT chariot: the virtual data center and the software defined network. These offer more segmentation, with finer control, all the way down to the workload (or even lower, depending on which marketing teams you believe). This sounds great – who wouldn’t want super-fine controls? Nobody believes the perimeter-only model is working out any more, so more control must be better, right? But in practice, if you just throw this technology onto the existing stack without a plan for scaling, it’s not going to work out.

If you start with a hard-to-manage, complex management challenge, and you respond by breaking it into ever smaller pieces, spread out in more places, you can rapidly end up like Mickey Mouse in The Sorcerer’s Apprentice, madly splitting brooms until he’s overrun.

Is it hopeless? Certainly not. The issue is scale. More segmentation, in faster-moving infrastructure, takes a problem that was already tough for human teams and makes it harder. But this happens to be the kind of problem that computers are very good at. The trick is to realize that you need to separate the objective – what you want to allow in your network – from the implementation, whether that’s a legacy firewall or a fancy new GUI for managing policy for virtual workloads. (In the real world, that’s not an either/or – it’s a both, since you have to coordinate your virtual workload protections with your wider network, which stubbornly refuses to go away just because it’s not software defined.)

That is, if you can describe what you want your network to do, you can get a big win. Just separate your goals from the specific implementation – record the intention in general terms, for example, in the zone-to-zone relationships of the units of your business. Then you can use automation software to check that this is actually what the network is set up to do. Computers don’t get tired – they just don’t know enough about your business or your adversaries to write the rules for you. (I wouldn’t trust software to figure out how an organism like a business works, and I certainly wouldn’t expect it to out-fox an adversary. If we can’t even make software that passes a Turing test, how could an algorithm understand social engineering – still a mainstay of modern malware?)
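A toy Python sketch of that separation: the intended zone-to-zone policy lives in one table, the access actually deployed in another, and software diffs the two to report drift. In a real network the “deployed” side would be computed from firewall, router, and SDN configs rather than hand-written, and the zone names here are invented.

```python
# Intent: the zone-to-zone access the business means to allow.
INTENDED = {("web", "app"), ("app", "db")}

# Implementation: access actually reachable, as derived (in real life) from
# parsing firewall/router/SDN configs. Hand-written here for the example.
DEPLOYED = {("web", "app"), ("app", "db"), ("web", "db")}

def report_drift(intended: set, deployed: set) -> None:
    """Report where the network diverges from its stated intent."""
    for src, dst in sorted(deployed - intended):
        print(f"Unintended access: {src} -> {dst}")        # extra attack surface
    for src, dst in sorted(intended - deployed):
        print(f"Missing intended access: {src} -> {dst}")  # likely an outage

report_drift(INTENDED, DEPLOYED)
# Unintended access: web -> db
```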

So I’m not saying micro-segmentation is a bad thing. That’s a bit like asking whether water is a bad thing – used correctly, it’s great, but it’s important not to drown. Here, learning to swim isn’t about the latest silver bullet feature of a competitive security offering – it’s about figuring out how all your infrastructure works together, and whether it’s giving the business what’s needed without exposing too much attack surface.