Tag Archive for: Network Security

Tales from the Trenches: Vol 1 — In security, consistency is key.

Since 2004, RedSeal has helped our customers See and Secure their entire, complex networks. And while those customers may have understood the value of knowing their environment, how it is connected, and what’s at risk, there is often an “Aha” moment when the true significance becomes clear. The stories of these moments are lore within the walls of RedSeal. But these tales so clearly illustrate the value of RedSeal beyond just theory that we think they’re worth sharing. In the words of our team in the field, the ones working directly with our customers, this blog series will share the moments where it all gets real.

The first in this series is by Nate Cash, Senior Director, Federal Professional Services / Director of Information Security at RedSeal.

In security, consistency is key.

Once at a customer site, while going through the install of RedSeal, we were going over the hardening standards. I clicked on a couple of configurations to start showing how we could go about setting up best practice checks. I had inadvertently pulled up a device which had not been updated in over five years. The customer was shocked. This was one of the many times I have had to stop mid-sentence while the person I was working with reached out to someone to “fix the problem.”

The problem was not that the device had not been updated, but that their process had somehow missed it. This device was just one of many. The first thing we did with RedSeal was develop a set of custom checks to see how many devices passed or failed the latest hardening standard. Once the checks were set, we started data collection. Within 15 minutes we saw that 30% of the devices were not running the latest hardening standard.

That 30% of network devices were not using the latest encryption, had management ACLs pointing at an old subnet that had since become their guest subnet, were running inconsistent SSH versions, still had Telnet enabled, or pointed at old RADIUS servers and fell back to local accounts. Luckily, their firewall blocked the guest subnet from reaching anything internal, but their network management tool still couldn’t access 100% of the devices.
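
To make the mechanics concrete, here is a minimal sketch of the kind of pass/fail check logic involved. This is not RedSeal’s actual check engine; the rule strings, guest subnet, and RADIUS address are hypothetical stand-ins:

```python
# Hypothetical hardening rules mirroring the failures described above.
# Each rule is (description, test); test returns True if the config passes.
RULES = [
    ("Telnet disabled",
     lambda cfg: "transport input telnet" not in cfg),
    ("SSH version 2 enforced",
     lambda cfg: "ip ssh version 2" in cfg),
    ("Management ACL avoids the old (now guest) subnet",
     lambda cfg: "10.99." not in cfg),                        # assumed subnet
    ("No decommissioned RADIUS server",
     lambda cfg: "radius-server host 10.1.1.5" not in cfg),   # assumed address
]

def audit(config_text: str) -> list[str]:
    """Return the descriptions of every rule this device config fails."""
    return [desc for desc, test in RULES if not test(config_text)]

def summarize(configs: dict[str, str]) -> None:
    """Print the failure rate and per-device failures for a set of configs."""
    failing = {name: audit(cfg) for name, cfg in configs.items()}
    failing = {name: f for name, f in failing.items() if f}
    pct = 100 * len(failing) / max(len(configs), 1)
    print(f"{pct:.0f}% of {len(configs)} devices fail the hardening standard")
    for name, failures in failing.items():
        print(f"  {name}: {', '.join(failures)}")

summarize({
    "core-sw-01": "ip ssh version 2\n...",
    "edge-rtr-07": "transport input telnet\nradius-server host 10.1.1.5",
})
```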

Fixing the underlying problem, not the result of the problem.

With the latest hardening standards in hand, the network engineers got an emergency change request approved and started logging in to update their configurations. Each device required a firmware upgrade, followed by configuration changes. While this was a big undertaking, with each day that passed we could pull the numbers, see the remediation trend, and report it to management.

Once the model was complete, we found a couple of firewalls that were blocking their network management solution from reaching some of their network devices. After the customer opened up those firewalls, the networking team could push the config to all of the devices.

As luck would have it, the customer was also undergoing a review of a new hardening standard. We took the new standard, and I showed them how to create checks for each of the configuration points. By the end of the day, the customer had 75 individual checks for each of their network devices. On every data collection, RedSeal runs those checks against each of the configurations automatically, so we could ensure that all network devices passed all of the checks required for their baseline configuration.

This customer had a unique process where devices about to be deployed were plugged into the network and received a static DHCP address. Their network management tool would push the baseline config, then the engineers would log in the next day to assign the interface IPs per the documentation. The rest of the hardened config was automagically applied by the tool. With this in hand, we were able to add the ‘staging area’ to RedSeal. Using those same checks, with every data collection we noticed random inconsistencies where some devices would get the whole config and others did not.

Using RedSeal and custom checks, this customer is able to push configs, then verify the config took properly before deploying devices out into their network. They gained better visibility into whether all of their devices were hardened, became more confident that the automation was consistently driving the results they wanted, and significantly reduced their risk, just by checking the configs of their network devices.

Interested in how RedSeal can help your team? Click here to set up a demo or an introductory call.  

InfoSeCon Roundup: OT Top of Mind with Many

The RedSeal team recently attended the sold-out ISSA Triangle InfoSeCon in Raleigh. It was energizing to see and talk to so many people in person. People were excited to be at in-person events again (as were we!) and we had some great discussions at our booth on how RedSeal can help customers understand their environment and stay one step ahead of threat actors looking to exploit existing vulnerabilities. 

Visibility was a top topic at the booth, but I want to focus this blog on our panel discussion, titled “OT: Still a Security Blindspot”, which, of course, has visibility as a core need. While I was expecting that this topic would be of interest to quite a few attendees, I was not expecting the great reception we received, nor the number of people who stayed behind to talk to us. We had so many great questions that we have decided to host a webinar with the same title, where we will share some of the same information and expand it to address additional topics based on attendee interest.

I want to highlight some key points we shared during the session:

We started the session talking about OT networks – what they are, and how they exist across all verticals in some way, shape or form (it is not just manufacturers!).  We did share a life sciences customer example to close the session (think about all of the devices in a hospital connected to the Internet – and what could happen if they were hacked!).

Then we got into the “risk” part of the discussion. We shared how OT networks are targeted via vulnerable devices and highlighted actual consequences from some real-life examples. We focused on the Mirai botnet and potential motives from threat actors:  

  • Botnets can exist simply to be annoying or attention-getting, interrupting business
  • Ransomware has traditionally been about cash or its equivalent, but can now also be about deliberate, intentional damage
  • Nation states, with espionage as a goal, aim to disrupt or compromise organizations or governments

Our panelists from Medigate by Claroty provided their perspective based on discussions with their customers. They explained that for many organizations OT was an afterthought when assessing security risks. The audience appeared to agree with this assessment. Unfortunately, this means that OT has often been the quickest way to cause serious damage at an organization. They have seen this within the 1500+ hospitals they work with. They shared a couple of examples, which clearly demonstrated what can happen if you have vulnerable OT devices:

  • Turning off the air conditioning at a hospital in Phoenix, AZ during the summer shut EVERYTHING down
  • Operating rooms must be kept at very specific temps and humidity levels. A customer in CA whose dedicated O.R. HVAC was impacted ended up losing $1M in revenue PER DAY while it was down

One of the reasons that the security risks for OT devices have not been addressed as well as they should be is that OT devices have typically been managed by Facilities organizations, which do not have the training and expertise needed for this task. We did spend some time talking about who “owns” managing and securing these OT devices. Luckily, there is growing awareness of the need for visibility into both OT and IT devices as part of an overall security strategy, and there are emerging solutions to address this need.

We also spent time discussing how complex managing OT/IT becomes when companies have distributed sites or complex supply chains. The Medigate panelists shared how some of their Life Sciences customers have sites with different OT network topologies, and some even have a mismatch between the topology of the individual site and its production logic. This means there are usually multiple redundant, unmonitored connections at each site, which provides threat actors with numerous opportunities to penetrate the OT network and, once inside, to move laterally within it. 

This led to a conversation among the panelists on how to address IT/OT visibility needs:

  1. First step: gain visibility into where OT is and then integrate it with existing IT security infrastructure 
  2. Ongoing: alignment and collaboration between and across IT and OT security, as well as with a range of third-party vendors, technicians, and contractors
  3. End goal: enable all teams (facilities, IT, security, networking, etc.) to speak the same language from the same source of truth

The above is just a brief review of some of the topics covered during the panel discussion. If you are interested in hearing more about how to address IT/OT visibility needs, how customers are addressing them, or finding out more about RedSeal, please visit our website or contact us today!

IT/OT Convergence

Operational Technology (OT) systems have decades of planning and experience to combat threats like natural disasters – forces of nature that can overwhelm the under-prepared, but which can be countered in advance using well-thought-out contingency plans. Converging IT with OT brings great efficiencies, but it also sets up a collision between the OT world and the ever-changing threats that are commonplace in the world of Information Technology.

A Changing Threat Landscape 

The security, reliability, and integrity of the OT systems face a very different kind of threat now – not necessarily more devastating than, say, a flood along the Mississippi, or a hurricane along the coast – but more intelligent and malicious. Bad actors connected over IT infrastructure can start with moves like disabling your backup systems – something a natural disaster wouldn’t set out to do. Bad actors are not more powerful than Mother Nature, but they certainly are more cunning, and constantly create new attack techniques to get around all carefully planned defenses. This is why the traditional strategies have to change; the threat model is different, and the definition of what makes a system “reliable” has changed. 

In the OT world, you used to get the highest reliability using the oldest, most mature equipment that could stay the same, year after year, decade after decade. In the IT world, this is the worst possible situation – out-of-date electronics are the easiest targets to attack, with the most known vulnerabilities accumulated over time. In the IT world of the device where you are reading this, we have built up an impressive and agile security stack in response to these rapidly evolving threats, but it all depends on being able to install and patch whatever software changes we need as new Tactics, Techniques and Procedures (TTPs) are invented. That is, in the IT world, rapid change and flexible software are essential to the security paradigm.

Does this security paradigm translate well to the OT world?

Not really. It creates a perfect storm for those concerned with defending manufacturing, energy, chemical and related OT infrastructure. On the one hand, the OT machinery is built for stability and cannot deliver the “five nines” reliability it was designed for if components are constantly being changed. On the other hand, we have IT threats which can now reach into the OT fabric as all the networks blend, but our defense mechanisms against such threats require exactly this rapid pace of updating to block the latest TTPs! It’s a Catch-22 situation.

The old answer to this was the air gap – keep OT networks away from IT, and you can evade much of the problem. (Stuxnet showed even this isn’t perfect protection – humans can still propagate a threat across an air gap if you trick them, and it turns out that this isn’t all that hard to do.) Today, the air gap is gone, due to the great economic efficiencies that come from adding modern digital communication pathways to everything we might need to manage remotely – the Internet of Things (IoT).

How do we solve this Catch-22 situation?

So, what can replace the old air gap? In a word, segmentation – it’s possible, even in complex, blended IT/OT networks, to keep data pathways separate, just as we keep water pipes and sewer pipes separate when we build houses, and for the same reason. The goal is to separate vulnerable and critical OT systems so that they can talk to each other and be managed remotely, but to open only these pathways, and not fall back to “open everything so that we can get the critical traffic through”. Thankfully, this goal is achievable, but the bad news is it’s error-prone. Human operators are not good at maintaining complex firewall rules. When mistakes inevitably happen, they fall into two groups:

  1. errors that block something that is needed
  2. errors that leave something open

The first kind of error is immediately noticed, but sadly, the second kind is silent, and, unless you are doing something to automatically detect these errors and gaps, they will accumulate, making your critical OT fabric more and more fragile over time. 

One way to combat this problem is to have a second set of humans – the auditors – review the segmentation regularly. Experience shows, though, that this just propagates the problem – human beings are simply not good at understanding network interactions and reasoning about complex systems. This is, however, a great job for computers – given stated goals, computers can check all the interactions and complex rules in a converged, multi-vendor, multi-language infrastructure, and make sure only intended communication is allowed, no more and no less.
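
In code terms, the computer’s job reduces to a set comparison between intended and actual access. Here is a minimal sketch under that framing; the zones, port, and flows are hypothetical, and a real model computes the “allowed” set from the full multi-vendor rule base rather than taking it as input:

```python
# A minimal sketch of automated segmentation verification, not RedSeal's
# actual engine. 'allowed' is what the rules actually permit (computed
# from a network model); 'intended' is the stated goal.
Flow = tuple[str, str, int]  # (source zone, destination zone, port)

def verify(intended: set[Flow], allowed: set[Flow]) -> None:
    blocked_but_needed = intended - allowed   # error type 1: noticed quickly
    open_but_unintended = allowed - intended  # error type 2: silent drift
    for flow in sorted(blocked_but_needed):
        print("NEEDED BUT BLOCKED:", flow)
    for flow in sorted(open_but_unintended):
        print("OPEN BUT UNINTENDED:", flow)

# Hypothetical example: the OT historian may reach the IT data lake on 443,
# but nothing else should cross the IT/OT boundary.
intended = {("ot-historian", "it-datalake", 443)}
allowed = {("ot-historian", "it-datalake", 443),
           ("it-workstations", "ot-plc", 502)}  # silent gap: Modbus left open
verify(intended, allowed)
```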

In summary, IT/OT convergence is inevitable, given the economic benefits, but it creates an ugly Catch-22 scenario for those responsible for security and reliability – it’s not possible to be both super-stable and agile at the same time. The answer is network segmentation, not the old air gapped approach. The trouble with segmentation is it’s hard for humans to manage, maintain and audit without gaps creeping in. Finally, the solution to resolve this Catch-22 is to apply automation – using software such as from RedSeal to automatically verify your segmentation and prevent the inevitable drift, so that OT networks are as prepared for a hacker assault as they are for a natural disaster. 

Cyber Insurance Isn’t Enough Anymore

The cyber insurance world has changed dramatically.

Premiums have risen significantly, and insurers are placing more limits on covered items. Industries like healthcare, retail, and government, where exposure is high, have been hit hard. Many organizations have seen huge rate increases for substantially less coverage than in the past. Others have seen their policies canceled or been unable to renew.

In many cases, insurers are offering half the coverage amounts at a higher cost. For example, some insurers that had previously issued $5 million liability policies have now reduced amounts to $1 million to $3 million while raising rates. Even with reduced coverage, some policy rates have risen by as much as 300%.

At the same time, insurers are leaving the field. Big payoffs in small risk pools can devastate profitability for insurers. Many insurers are reaching the break-even point where a single covered loss can wipe out years of profits. In fact, several major insurance companies have stopped issuing new cybersecurity insurance policies altogether.

This is due in part to incidents like the recent Merck legal victory forcing a $1.4B payout after the NotPetya malware attack. According to Fitch Ratings, more than 8,100 cyber insurance claims were paid out in 2021, the third straight year that claims increased by at least 100%. Payments from claims also jumped 200% annually in 2019, 2020, and 2021.

Claims are also being denied at higher rates. With such large amounts at stake, insurers are looking more closely at an organization’s policies and requiring proof that the organization is taking the right steps to protect itself. Companies need to be thinking about better ways to manage more of the cyber risks themselves. Cyber insurance isn’t enough anymore.

Dealing with Ransomware

At the heart of all of this drama is ransomware. The State of Ransomware 2022 report from Sophos includes some sobering statistics.

Ransomware attacks nearly doubled in 2021 vs. 2020, and ransom payments are higher as cybercriminals demand more money. In 2020, only 4% of organizations paid more than $1 million in ransoms. In 2021, that number jumped to 11%. The average ransom paid by organizations in significant ransomware attacks grew by 500% last year, to $812,360.

More companies are paying the ransom as well. Nearly half (46%) of companies hit by ransomware chose to pay despite FBI warnings not to do so. The FBI says paying ransoms encourages threat actors to target even more victims.

Even with cyber insurance, it can take months to fully recover from a ransomware attack, which can also do significant damage to a company’s reputation. Eighty-six percent (86%) of companies in the Sophos study said they lost business and revenue because of an attack. And while 98% of cyber insurance claims were paid out, only four out of ten companies saw all of their costs paid.

There’s some evidence that cybercriminals are actively targeting organizations that have cyber insurance, specifically because those companies are more likely to pay. This has led to higher ransom demands, contributing to the cyber insurance crisis. At the same time, cybercriminals have significantly changed how they exact payments.

Ransomware attackers now often require two payments: the first for providing the decryption key to unlock encrypted data, and a separate payment to avoid having the data itself released publicly. Threat actors are also hitting the same organizations more than once. When they know they’ll get paid, they often attack a company a second or third time until the victim locks down its security.

Protecting Yourself from Ransomware Attacks

Organizations must deploy strict guidelines and protocols for security and follow them to protect themselves. Even one small slip-up in following procedures can result in millions or even billions of dollars in losses and denied claims.

People, Processes, Tech, and Monitoring

The root cause of most breaches and ransomware attacks is a breakdown in processes that allows an attack vector to be exploited. This breakdown often occurs because controls are lacking, or because the people using the network fail to adhere to them.

Whether organizations decide to pay the price for cyber insurance or not, they need to take proactive steps to ensure they have the right policies in place, have robust processes for managing control, and train their team members on how to protect organizational assets.

Organizations also need a skilled cybersecurity workforce to deploy and maintain protection along with the right tech tools.

Even with all of this in place, strong cybersecurity demands continuous monitoring and testing. Networks are rarely stable. New devices and endpoints are added constantly. New software, cloud services, and third-party solutions are deployed. With such fluidity, it’s important to continually identify potential security gaps and take proactive measures to harden your systems.

Identifying Potential Vulnerabilities

One of the first steps is understanding your entire network environment and potential vulnerabilities. For example, RedSeal’s cloud cybersecurity solution can create a real-time visualization of your network and continuously monitor your production environment and traffic. This provides a clear understanding of how data flows through your network to create a cyber risk model.

Users get a Digital Resilience Score which can be used to demonstrate their network’s security posture to cyber insurance providers.

This also helps organizations identify risk factors and compromised devices. RedSeal also provides a way to trace access throughout an entire network, showing where an attacker could go once inside. This helps identify places where better segmentation is required to prevent unauthorized lateral movement.

If an attack does occur, RedSeal accelerates incident response by providing a more complete road map for containment.

Cyber Insurance Is Not Enough to Protect Your Bottom Line

With escalating activity and larger demands, cyber insurance is only likely to get more expensive and harder to get. Companies will also have to offer more proof about their security practices to be successful in filing claims or risk having claims denied.

For more information about how we can help you protect your network and mitigate the risks of successful cyber-attacks, contact RedSeal today.

The Unique Security Solution RedSeal Brings to Multi-Cloud and Hybrid Network Environments

One of the most significant benefits of implementing a multi-cloud strategy is the flexibility to use the right set of services to optimize opportunities and costs.

As public cloud service providers (CSPs) have evolved, they have started to excel in different areas. For example, programmers often prefer to use Azure because of its built-in development tools. However, they often want their apps to run in AWS to leverage the elastic cloud compute capability.

Adopting a multi-cloud strategy enables enterprises to benefit from this differentiation between providers and implement a “best of breed” model for the services they need to consume. They can also realize significant efficiencies, including cost-efficiency, by managing their cloud resources properly.

But multi-cloud solutions also bring their own challenges, from administration to security. These can be especially difficult for organizations that don’t have deep experience and knowledge across all platforms and how they interconnect. It can sometimes seem like speaking a different language. For example, AWS has a term called VPC (virtual private cloud). Google Cloud Platform (GCP) uses that term, too, but it means something different. In other cases, the reverse is true: the terminology is different, but the things themselves are the same.

Cloud provider solutions don’t always address the needs of hybrid multi-cloud deployments. Beyond the differing terminology, AWS, Azure, GCP, Oracle’s OCI, IBM’s cloud, and others all have different user interfaces. A multi-cloud or hybrid environment can be far more difficult to secure than a single cloud.

Because of these challenges, you need a platform-independent solution that understands the language of each platform, can translate how your multi-cloud solutions are configured and interconnected, and can help mitigate the risks.

How RedSeal Manages Multi-Cloud and Hybrid Cloud

At RedSeal, we provide the lingua franca (or bridge) for multi-cloud and on-premises networks. Security operations center (SOC) teams and DevOps get visibility into their entire network across vendors. RedSeal provides the roadmap for how the network looks and interconnects, so they can secure their entire IT infrastructure without having to be experts on every platform.

In most organizations using multi-cloud and hybrid cloud, however, network engineers and SOC teams are being asked to learn every cloud and on-prem resource and make sure they are all configured properly and secured. Many will deploy virtual cloud instances and use virtual firewalls, but as complexity rises, this becomes increasingly difficult to manage.

RedSeal is the only company that can monitor your connectivity across all of your platforms, whether they are on-prem or in the cloud. This allows you to see network topology across all of your resources in one centralized platform.

Proactive Security

Proactive security is also complex. Most security offerings monitor in real-time to alert you when there’s an attack underway. That’s an important aspect of your security, but it also has a fundamental flaw. Once you recognize the problem, it’s already underway. It’s like calling 9-1-1 when you discover an emergency. Help is on the way, but the situation has already occurred.

Wouldn’t you like to know your security issues before an incident occurs?

RedSeal helps you identify potential security gaps in your network, so you can address them proactively. And, we can do it across your entire network.

Network Segmentation

Segmenting your network allows you to employ zero trust and application layer identity management to prevent lateral movement within your network. One of the most powerful things about RedSeal is that it provides the visibility you need to manage network segmentation.

It’s a simple concept, but it can also become incredibly complex — especially for larger companies.

If you’re a small business with 100 employees, segmentation may be easy. For example, you segment your CNC machine so employees don’t have admin rights to change configurations. In a mid-size or enterprise-level company, however, you can have an exponential number of connections and endpoints. We’ve seen organizations with more than a million endpoints and connections that admins never even knew existed.

It’s only gotten more complex with distributed workforces, remote workers, hybrid work environments, and more third-party providers.

RedSeal can map it all and help you provide micro-segmentation for both east-west and north-south traffic.

Vulnerability Prioritization

Another area where RedSeal excels is adding context to network vulnerability management. This allows you to perform true risk-based assessment and prioritization from your scanners. RedSeal calculates vulnerability risk scores that account for not only severity and asset value but also downstream risk, based on the accessibility of vulnerable downstream assets.
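
As a toy illustration of why downstream accessibility changes the ordering (this is not RedSeal’s actual scoring model; the weights and fields are invented for the example):

```python
# Illustrative only: a toy risk score in the spirit described above.
def risk_score(cvss: float, asset_value: float, downstream_value: float) -> float:
    """
    cvss: scanner severity, 0-10
    asset_value: business value of the vulnerable host, 0-10
    downstream_value: total value of assets reachable *from* this host
    """
    direct = cvss * asset_value
    # A vulnerability on a low-value host can still be critical if that
    # host can reach high-value assets downstream.
    indirect = cvss * downstream_value
    return direct + indirect

# A jump host of modest value that reaches the crown jewels outranks an
# equally severe vulnerability on an isolated kiosk.
print(risk_score(cvss=7.5, asset_value=2, downstream_value=9))  # 82.5
print(risk_score(cvss=7.5, asset_value=2, downstream_value=0))  # 15.0
```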

In many cases, RedSeal uncovers downstream assets that organizations didn’t know were connected or vulnerable. These connections provided open threat surfaces, but they showed up in alert logs only as low-to-medium risks, if at all. So, SOC teams already overwhelmed with managing critical and high-risk alerts may never get to these hidden connections. Yet the potential damage from threat actors exploiting these connections could be even greater than what showed up as high risk.

RedSeal shows you the complete picture and helps you prioritize vulnerabilities so you can focus on the highest risks in your unique environment.

Play at Your Best

In the late ’90s, world chess champion Garry Kasparov faced off against Deep Blue, an IBM supercomputer, in a six-game exhibition. Kasparov won the first game. Deep Blue won the second, and the next three ended in draws. When Deep Blue won the final game and secured the overall victory, Kasparov was asked to concede that the best chess player in the world was now a computer.

Kasparov responded by saying that people were asking the wrong question. The question isn’t about whether the computer is better, but rather how do you play the best game of chess? Kasparov believes he lost not because the computer was better, but because he failed to perform at his best and see all of the gaps in his play.

You can’t afford to make mistakes in your security and beat yourself. By understanding your entire network infrastructure and identifying security gaps, you can take proactive measures to perform at your best.

RedSeal is the best move for a secure environment.

Learn more about how we can help protect your multi-cloud and hybrid cloud environments. Contact RedSeal today.

Zero Trust Network Access (ZTNA): Reducing Lateral Movement

In football, scoring a touchdown means moving the ball down the field. In most cases, forward motion starts the drive to the other team’s end zone. For example, the quarterback might throw to a receiver or hand off to a running back. Network attacks often follow a similar pattern: Malicious actors go straight for their intended target by evaluating the digital field of play and picking the route most likely to succeed.

In both cases, however, there’s another option: Lateral movement. Instead of heading directly for the goal, attackers move laterally to throw defenders off guard. In football, any player with the ball can pass parallel or back down the field to another player. In lateral cyberattacks, malicious actors gain access to systems on the periphery of business networks and then move “sideways” across software and services until they reach their target.

Zero trust network access (ZTNA) offers a way to frustrate lateral attack efforts. Here’s how.

What is Zero Trust Network Access?

Zero trust network access is rooted in the notion of “need to know” — a concept that has been around for decades. The idea is simple: Access and information are only provided to those who need it to complete specific tasks or perform specific actions.

The term “zero trust” refers to the fact that trust is earned by users rather than given. For example, instead of allowing a user access because they provide the correct username and password, they’re subject to additional checks which verify their identity and earn the trust of access. The checks might include two-factor authentication, the type of device used for access, or the user’s location. Even once identity has been confirmed, further checks are conducted to ensure users have permission to access the resource or service they’re requesting.

As a result, the term “zero trust” is somewhat misleading. While catchy, it’s functionally a combination of two concepts: Least privilege and segmentation. Least privilege sees users given the minimum privilege necessary to complete assigned tasks, while segmentation focuses on creating multiple digital “compartments” within their network. That way, even if attackers gain lateral access, only a small section of the network is compromised.
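
Here is a minimal sketch of how those two concepts combine into a per-request access decision. The grants, segments, and policy store are made up for illustration; a real deployment delegates these checks to an identity provider and a policy engine:

```python
# A minimal sketch of a zero trust access decision, with a hypothetical
# in-memory policy store.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    device_managed: bool
    segment: str      # network segment the user is connecting from
    resource: str     # what they want to reach

# Hypothetical least-privilege grants and segment reachability policy.
GRANTS = {("alice", "hr-db"), ("bob", "build-server")}
SEGMENT_ALLOWS = {("corp-vpn", "hr-db"), ("corp-vpn", "build-server")}

def decide(req: Request) -> bool:
    # Trust is earned per request, never assumed.
    if not (req.mfa_passed and req.device_managed):
        return False
    # Least privilege: the user must hold an explicit grant.
    if (req.user, req.resource) not in GRANTS:
        return False
    # Segmentation: the source segment must be allowed to reach the resource.
    return (req.segment, req.resource) in SEGMENT_ALLOWS

print(decide(Request("alice", True, True, "corp-vpn", "hr-db")))    # True
print(decide(Request("alice", True, True, "guest-wifi", "hr-db")))  # False
```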

Adoption of ZTNA is on the rise, with 96 percent of security decision-makers surveyed saying that zero trust is critical for organizational success. Recent predictions also suggest that by 2023, 60 percent of enterprises will phase out their remote access virtual private networks (VPNs) and replace them with ZTNA frameworks.

The Fundamentals of ZTNA-Based Architecture

While the specifics of a ZTNA deployment will look different for every business, there are five fundamental functions of zero-trust network access:

1. Micro-segmentation: By dividing networks into multiple zones, companies can create fine-grained and flexible security policies for each. While segments can still “talk” to each other across the network, access requirements vary based on the type of services or data they contain. This approach reduces the ability of attackers to move laterally — even if they gain network access, they’re effectively trapped in their current segment.

2. Mandatory encryption: By encrypting all communications and network traffic, it’s possible to reduce the potential for malicious interference. Since attackers can’t see what’s going on inside business networks simply by eavesdropping, the scope and scale of their attacks are naturally limited.

3. The principle of least privilege: Users get only the minimum privilege required to do their job, their current permission level is evaluated every time they attempt to access a system, application, or device, and unneeded permissions are removed when tasks are complete. This ensures that a compromised user or system will not lead to complete network access.

4. Total control: By continually collecting data about potential security events, user behaviors, and the current state of infrastructure components, companies can respond ASAP when security incidents occur.

5. Application-level security: By segmenting applications within larger networks, organizations can deploy application-level security controls that effectively frustrate attacker efforts to move beyond the confines of their initial compromise point.

Best Practices to Tackle Risk with ZTNA

When it comes to network security and lateral compromise, businesses and attackers are playing by the same rules, but in many cases, malicious actors are playing in a different league. To follow our football analogy, it’s as if security teams are playing at a high-school level while attackers are in the NFL. While the plays and the objectives are the same, one team has a distinct advantage in terms of size, speed, and skill.

ZTNA can help level the playing field — if it’s correctly implemented. Here are three best practices to make it work:

1. Implement Automation

Knowing what to segment and where to create segmentation boundaries requires a complete inventory of all desktops, laptops, mobile devices, servers, ports, and protocols on your network. Since this inventory is constantly changing as companies add new cloud-based services, collecting key data is no easy task. Manual processes could take six months or more, leaving IT teams with out-of-date inventories.

Automating inventory processes can help businesses create a functional model of their current network that is constantly updated to reflect changes, allowing teams to define effective ZTNA micro-segmentations.

2. Prioritize Proactive Response

Many businesses now prioritize the collection of “real-time” data. The problem? Seeing security event data in real-time means that incidents have already happened. By capturing complete network visibility, companies can prioritize proactive responses that limit overall risk rather than requiring remediation after the fact.

3. Adapt Access as Required

Security isn’t static. Network configurations change and evolve, meaning that ZTNA must evolve in turn. Bolstered by dynamic visibility from RedSeal, businesses can see where lateral compromise poses risk, where segmentation is working to prevent access, and where changes are necessary to improve network security.

Solving for Sideways Security

Security is a zero-sum game: If attackers win, companies lose. But the reverse is also true. If businesses can prevent malicious actors from gaining lateral access to key software or systems, they come out ahead. The challenge? One-off wins aren’t enough; businesses need consistent control over network access to reduce their total risk.

ZTNA can help reduce the sideways security risks by minimizing available privilege and maximizing network segmentation to keep attackers away from high-value data end zones and instead force functional turnovers to network security teams.

Download our Zero Trust Guide today to get started.

The House Always Wins? Top Cybersecurity Issues Facing the Casino and Gaming Industry

Head into a casino, and you should know what you’re getting into — even if you see some success at the beginning of the night, the house always wins. It’s a truism often repeated and rarely questioned, but when it comes to cybersecurity, many casino and gaming organizations aren’t coming out ahead.

In this post, we’ll dive into what sets this industry apart, tackle the top cybersecurity issues facing casino and gaming companies, and offer a solid bet to help build better security infrastructure.

Doing the Math: Why Casinos and Gaming Businesses are at Greater Risk

Gaming and casino industry companies generate more than $53 billion in revenue each year. While this is a big number, it’s nothing compared to the U.S. banking industry, which reached an estimated $4,847.9 billion in 2021. And yet, at roughly 1/100 the size of their financial counterparts, casinos now face rapidly increasing attack volumes.

In 2017, for example, a network-connected fish tank was compromised by attackers and used as the jumping-off point for lateral network movement. In 2020, the Cache Creek Casino Resort in California shut down for three weeks after a cyberattack, and in 2021 six casinos in Oklahoma were hit by ransomware.

So what’s the difference? Why are casinos and gaming companies being targeted when there are bigger fish to fry? Put simply, it’s all about the connected experience. Where banks handle confidential personal information to deliver specific financial functions, casinos collect a broader cross-section of information including credit card and income information, social security numbers, and basic tombstone data to provide the best experience for customers on-site. As a result, there’s a greater variety of data for hackers to access if they manage to breach network perimeters.

Casinos and gaming companies also have a much larger and more diverse attack surface. Where banks perform specific financial functions and have locked down access to these network connections, casinos have a host of Internet-connected devices designed to enhance the customer experience but which may also enable attacks. IoT-enabled fish tanks are one example, but gaming businesses also use technologies like always-connected light and temperature sensors, IoT-enabled slot machines, and large-scale WiFi networks to keep customers coming back.

In practice, this combination of connected experience and disparate technologies creates a situation that sees IT teams grow arithmetically while attacks grow geometrically. This creates a challenge: No matter how quickly companies scale up the number of staff on their teams, attackers are ahead.

Not only are malicious actors willing to share data about what works and what doesn’t when it comes to breaching casino cybersecurity, but they’re constantly trying new approaches and techniques to streamline attack efforts. IT teams, meanwhile, don’t have the time or resources to experiment.

The Top Four Cybersecurity Issues Facing Casino and Gaming Companies

When it comes to keeping customer and business data secure, gaming and casino companies face four big issues.

  1. IoT Connections
    While IoT devices such as connected thermostats, refrigerators, and even fish tanks are becoming commonplace, robust security remains rare. Factory firmware often contains critical vulnerabilities that aren’t easily detected or mitigated by IT staff, in turn creating security holes that are hard to see and even more difficult to eliminate.
  2. Ransomware Attacks
    Ransomware continues to plague companies; recent survey data found that 49 percent of executives and employees interviewed said their company had been the victim of ransomware attacks. This vector is especially worrisome for casinos and gaming companies given both the volume and variety of personal and financial data they collect and store. Successful encryption of data could shut companies down for days or weeks and leave them with a difficult choice: Pay up or risk massive market fallout.
  3. Exfiltration Issues
    Collected casino and gaming data is also valuable to attackers as a source of income through Dark Web sales. By quietly collecting and exfiltrating data, hackers can generate sustained profit in the background of casino operations while laying the groundwork for identity theft or credit card fraud.
  4. Compliance Concerns
    If casinos are breached, they may face compliance challenges on multiple fronts. For example, breached credit card data could lead to PCI DSS audits, and if businesses are found to be out of compliance, the results could range from substantial fines to a suspension of payment processing privileges. Compromised personal data, meanwhile, could put companies at risk of not meeting regulatory obligations under evolving privacy laws such as the California Consumer Privacy Act (CCPA).

Betting on Better Security

Once attackers have access to casino networks, they’ve got options. They could encrypt data using ransomware and demand payment for release — which they may or may not provide, even if payment is made — or they could quietly exfiltrate customer data and then sell this information online. They could also simply keep quiet and conduct reconnaissance of new systems and technologies being deployed, then use this information to compromise key access points or sell it to the highest bidder.

The result? When it comes to protecting against cyberattacks, businesses are best served by stopping attacks before they happen rather than trying to pick up the pieces after the fact. For networks as complex and interconnected as those of casinos, achieving this goal demands complete visibility.

This starts with an identification of all devices across network architecture, from familiar systems such as servers and storage to staff mobile devices and IoT-connected technologies. By identifying both known and unknown devices, companies can get a picture of what their network actually looks like — rather than what they expect it to be.

RedSeal can help casinos achieve real-time visibility by creating a digital twin of existing networks, both to identify key assets and to assess key risks by discovering the impact of network changes. For example, casinos could choose to run a port and protocol simulation to determine the risk of opening or closing specific ports — without actually making these changes on live networks. RedSeal can also help segregate key data storage buckets to mitigate the impact of attacks if systems are compromised.

Helping the House Win

Attackers are trying to tip the odds in their favor by compromising connected devices and leveraging unknown vulnerabilities. RedSeal can help the house come out ahead by delivering real-time visibility into casino and gaming networks that help IT teams make informed decisions and stay ahead of emerging cybersecurity challenges.

Ready to tip the odds in your favor? Start with RedSeal.

Zero Trust: Back to Basics

The Executive Order on Improving the Nation’s Cybersecurity in 2021 requires agencies to move towards zero trust in a meaningful way as part of modernizing infrastructure. Yet federal agencies typically find it challenging to implement zero trust. While fine in theory, the challenge often lies in the legacy systems and on-premises networks that exist with tendrils reaching into multiple locations, many of which are unknown.

Identity management and authentication tools are an important part of network security, but before you can truly implement zero trust, you need an understanding of your entire infrastructure. Zero trust isn’t just about identity. It’s also about connectivity.

Take a quick detour here. Let’s say you’re driving a tractor-trailer hauling an oversized load. You ask Google Maps to take you the fastest route and it plots it out for you. However, you find that one of the routes is a one-lane dirt road and you can’t fit your rig. So, you go back to your mapping software and find alternate routes. Depending on how much time you have, the number of alternative pathways to your final destination is endless.

Computer security needs to think this way, too. Even if you’ve blocked the path for threat actors in one connection, how else could they get to their destination? While you may think traffic only flows one way on your network, most organizations find there are multiple pathways they never knew (or even thought) about.

To put in efficient security controls, you need to go back to basics with zero trust. That starts with understanding every device, application, and connection on your infrastructure.

Zero Trust Embodies Fundamental Best-Practice Security Concepts

Zero trust returns to the basics of good cybersecurity by assuming there is no traditional network edge. Whether it’s local, in the cloud, or any combination of hybrid resources across your infrastructure, you need a security framework that requires everyone touching your resources to be authenticated, authorized, and continuously validated.

By providing a balance between security and usability, zero trust makes it more difficult for attackers to compromise your network and access data. While providing users with authorized access to get their work done, zero-trust frameworks prevent unauthorized access and lateral movement.

By properly segmenting your network and requiring authentication at each stage, you can limit the damage even if someone does get inside your network. However, this requires a firm understanding of every device and application that is part of your infrastructure, as well as your users.

Putting Zero Trust to Work

The National Institute of Standards and Technology (NIST) publication 800-207 provides the conceptual framework for zero trust that government agencies need to adopt.

NIST’s Risk Management Framework has seven steps:

  1. Prepare: mapping and analyzing the network
  2. Categorize: assess risk at each stage and prioritize
  3. Select: determine appropriate controls
  4. Implement: deploy zero trust solutions
  5. Assess: ensure solutions and policies are operating as intended
  6. Authorize: certify systems and workflow are ready for operation
  7. Monitor: provide continuous monitoring of security posture

NIST’s subsequent draft white paper on planning for a zero trust architecture reinforces the crucial first step: mapping the attack surface and identifying the key parts that could be targeted by a threat actor.

Instituting zero trust security requires detailed analysis and information gathering on devices, applications, connectivity, and users. Only when you understand how data moves through your network – and all the different paths it can take – can you implement segmentation and zero trust.

Analysts should identify options to streamline processes, consolidate tools and applications, and sunset any vulnerable devices or access points. This includes defunct user accounts and any non-compliant resources.

Use Advanced Technology to Help You Perform Network Analysis

Trying to map your network manually is nearly impossible. No matter how many people you task to help and how long you have, things will get missed. Every device, appliance, configuration, and connection has to be analyzed. Third parties and connections to outside sources need to be evaluated. And while you’re conducting this inventory, things are in a constant state of change, which makes it even easier to miss key components.

Yet, this inventory is the foundation for implementing zero trust. If you miss something, you leave security gaps within your infrastructure.

The right network mapping software for government agencies can automate this process by going out and gathering the information for you. Network mapping analysis can calculate every possible pathway through the network, taking into account NAT and load balancing. During this stage, most organizations uncover a surprising number of previously unknown pathways. Each connection point needs to be assessed to determine whether it is needed and whether it can be closed to reduce the attack surface.
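
Conceptually, the pathway calculation is a graph search over a model of the network. Here is a minimal sketch; the topology is hypothetical, and a real tool models NAT, load balancing, and per-rule filtering rather than simple edges:

```python
# A minimal sketch of path enumeration over a hypothetical network model.
from collections import defaultdict

edges = [("internet", "edge-fw"), ("edge-fw", "dmz-web"),
         ("dmz-web", "app-tier"), ("app-tier", "db"),
         ("edge-fw", "vpn-gw"), ("vpn-gw", "app-tier")]  # less obvious path

graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

def all_paths(graph, src, dst, path=()):
    """Yield every loop-free path from src to dst."""
    path = path + (src,)
    if src == dst:
        yield path
        return
    for nxt in graph[src]:
        if nxt not in path:  # avoid cycles
            yield from all_paths(graph, nxt, dst, path)

for p in all_paths(graph, "internet", "db"):
    print(" -> ".join(p))
# Two paths appear; organizations often discover pathways like the second
# one (through the VPN gateway) only once the model is computed.
```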

Automated network mapping will also provide an inventory of all the gear on your network and IP space in addition to your cloud and software-defined network (SDN) assets. Zero trust requires you to identify who and what can access your network, and who should have that access.

Once you have conducted this exhaustive inventory, you can then begin to implement the zero-trust policies with confidence.

Since your network is in a constant state of evolution with new users, devices, applications, and connectivity being added, changed, or revised, you also need continuous monitoring of your network infrastructure to ensure changes remain compliant with your security policies.

Back to the Basics

The conversation about zero trust often focuses narrowly on identity. Equally important are device inventory and connectivity. The underlying goal of zero trust is allowing only specific authorized individuals to access specific things on specific devices. Before you can put in place adequate security controls, you need to know about all of the devices and all the connections.

RedSeal provides network mapping, inventory, and mission-critical security and compliance services for government agencies and businesses, and is Common Criteria certified. To implement a zero-trust framework successfully, you need to understand the challenges and strategies involved.

Download our Zero Trust Guide today to get started.

Keep it Separate, Keep it Safe: How to Implement and Validate Cloud Network Segmentation

The distributed nature of cloud computing makes it a must-have for business, thanks to on-demand resource availability, network connectivity, and compute scalability.

But the cloud also introduces unique security challenges. First is a rapidly expanding attack surface: As the number of connected third-party services powered by open-source code and APIs increases, so does the risk of compromise. According to the 2021 IBM Security X-Force Cloud Threat Landscape Report, more than 1,200 of the 2,500 known cloud vulnerabilities had been found within the preceding 18 months. Additionally, 100 percent of penetration testing efforts by IBM X-Force teams found issues with cloud policies or passwords.

Cloud network segmentation offers a way for companies to reduce the risk of cloud threats. By dividing larger networks into smaller subnets — each of which can be managed individually — businesses can boost protection without sacrificing performance. Here’s how it works.

Why Is Cloud Network Segmentation Valuable to Network Security?

Cloud segmentation is part of larger defense-in-depth (DiD) security practices that look to lower total risk by creating multi-layered frameworks which help protect key data from compromise. DiD is built on the concept that there’s no such thing as a “perfect” security solution — since, with enough time and patience, attackers can compromise any protective process. By layering multiple security measures onto network access points or data storage locations, however, the effort required for compromise increases exponentially, in turn reducing total risk.

And by breaking larger cloud networks down into smaller subnets, the scale of necessary defense decreases, making it possible for teams to differentiate lower-risk subnets from those that need greater protection. Segmentation offers practical benefits for businesses.

Reduced Complexity

Segmenting larger cloud frameworks into smaller cloud networks allows teams to reduce the overall complexity that comes with managing cloud solutions at scale. Instead of trying to find one policy or process that works for cloud networks end-to-end — without introducing security risks to protected data or limiting users’ ease of access — teams can create purpose-built security policies for each network segment.

Increased Granular Control

Segmentation also offers more granular control over network defenses. For example, teams could choose to deploy next-generation firewall tools, such as those capable of discovering and analyzing specific user behaviors, or implement runtime application self-protection (RASP) functions on a case-by-case basis.

Improved Responsiveness

Smaller subnets additionally make it possible for IT professionals to identify and respond to security issues quickly. Here’s why: Given the geographically disparate nature of cloud services — one provider might house their servers locally, while another might be states or countries away — tracking down the root cause of detected issues becomes like finding a digital needle in a virtual haystack. While it’s possible using advanced detection tools and techniques, it could take days or weeks. Segmentation, meanwhile, allows teams to identify and respond to issues on a segment-by-segment basis quickly.

Enhanced Operations

Network segmentation also helps companies enhance operations by aligning with cloud security best practices such as zero trust. Under a zero trust model, user identity is never assumed; instead, it must be proven and verified through authentication. Segmentation makes it possible to apply zero trust where necessary — such as gaining access to network segments that store personally identifiable information (PII) or intellectual property (IP) — in turn helping streamline cloud access without introducing security risk.

How to Implement Network Segmentation

Network segmentation isn’t a new concept — companies have been leveraging physical segmentation of networks for years to reduce the impacts of a potential breach. As the name implies, this type of segmentation uses physical controls such as firewalls to create separate subnets and control traffic flows.

Cloud segmentation, meanwhile, comes with a bigger challenge: Creating network segments across digital environments that may be separated by substantial physical distance. As a result, cloud segmentation was often deemed too complex to work, since the sheer number of unique cloud services, solutions, and environments, combined with the dynamic nature of cloud resources, seemed to make it impossible to effectively portion out and protect these subnets.

With the right strategy, however, it’s possible for businesses to both segment and secure their cloud networks. Here, logical rather than physical segmentation is vital. Using either virtual local area networks (VLANs) or more in-depth network addressing schemes, IT teams can create logical subnetworks across cloud services that behave as if they’re physically separate, in turn increasing overall defense.

Worth noting? Validation of these virtual networks is critical to ensure protective measures are working as intended. In practice, this means deploying tools and technologies that make it possible to visualize access across all network environments — local or otherwise — to understand network topology and explore traffic paths. Validation also requires the identification and remediation of issues as they arise. Consider a subnet that includes multiple cloud services. If even one of these services contains vulnerabilities to threats such as Log4j, the entire subnetwork could be at risk. Regular vulnerability scanning paired with active threat identification and remediation is critical to ensure segmentation delivers effective security.
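
Here is a minimal sketch of what such validation can look like, assuming segments are expressed as CIDR blocks; the names, ranges, and intended-flow policy are hypothetical:

```python
# A minimal segmentation validation sketch with hypothetical segments.
import ipaddress
from itertools import combinations

SEGMENTS = {
    "pii":    ipaddress.ip_network("10.10.0.0/24"),
    "app":    ipaddress.ip_network("10.10.1.0/24"),
    "public": ipaddress.ip_network("10.10.2.0/24"),
}

# 1. Segments must not overlap, or traffic can silently cross boundaries.
for (a, na), (b, nb) in combinations(SEGMENTS.items(), 2):
    assert not na.overlaps(nb), f"segments {a} and {b} overlap"

# 2. Observed flows must match the intended policy: public may reach app,
#    and only app may reach pii.
INTENDED = {("public", "app"), ("app", "pii")}

def segment_of(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    return next(name for name, net in SEGMENTS.items() if addr in net)

observed = [("10.10.2.7", "10.10.1.3"), ("10.10.2.7", "10.10.0.9")]
for src, dst in observed:
    pair = (segment_of(src), segment_of(dst))
    if pair not in INTENDED:
        print("policy violation:", src, "->", dst, pair)
# Flags the second flow: the public segment reaching pii directly.
```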

Closing the Cloud Security Gap with RedSeal

Cloud solutions offer the benefit of any time, anywhere access coupled with scalable, on-demand resources. But clouds also introduce unique security challenges around user access, data protection, and security threat monitoring.

As a result, protecting data in the cloud requires a defense-in-depth strategy that creates layers of protection rather than relying on a single service or technology to defend against evolving threats. Cloud network segmentation is one key component in this DiD strategy — by logically segmenting cloud services into smaller and more manageable networks, companies can reduce complexity, increase control and improve responsiveness.

But segmentation alone isn’t enough; enterprises also need the ability to visualize multiple micro-networks at scale, identify potential issues and quickly remediate concerns.

Ready to get started? Discover how RedSeal can help visualize, verify and validate your cloud network segmentation. Watch a Demo.

How Security Vulnerabilities Expose Thousands of Cloud Users to Attacks

Cloud computing has revolutionized data storage and access. It’s led the charge for digital transformation and allowed the increased adoption of remote work. At the same time, however, cloud computing has also increased security risks.

As networks have grown and cloud resources have become more entrenched in workflows, cloud computing has created larger potential attack surfaces. To safeguard their mission-critical data and operations, organizations need to know the chief cloud cyber risks and how to combat them.

Why Cloud Users Are at Risk

Cloud platforms are multi-tenant environments. They share infrastructure and resources across thousands of customers. While a cloud provider acts to safeguard its infrastructure, that doesn’t address every cloud user’s security needs.

Cybersecurity in the cloud requires a more robust solution to prevent exposure. Instead of assuming that service providers will protect their data, customers must carefully define security controls for workloads and resources. Even if you’re working with the largest cloud service providers, new security vulnerabilities emerge every day.

For example, Microsoft says it invests about $1 billion in cybersecurity annually, but vulnerabilities still surface. Case in point: The technology giant warned thousands of cloud customers that threat actors might be able to read, change, or delete their main databases. Intruders could uncover database access keys and use them to grab administrative privileges. While fixing the problem, Microsoft also admitted it could not change the database access keys itself; the fix required customers to create new ones. The burden was on customers to take action, and those that didn’t remained vulnerable to cyberattacks.

What Type of Vulnerabilities Affect Cloud Customers?

Despite the security protections cloud providers employ, cloud customers must use best practices to manage their cyberattack protection.

Without a solid security plan, multiple vulnerabilities can exist, including:

1. Misconfigurations

Misconfigurations continue to be one of the biggest threats for cloud users. A few examples:

  • A breach at Prestige Software, caused by a misconfigured Amazon S3 bucket, led to widespread data compromise. This single event exposed a decade’s worth of customer data from popular travel sites, such as Expedia, Hotels.com, and Booking.com.
  • A misconfigured firewall at Capital One put the personal data of 100 million customers at risk.
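
For the S3 class of misconfiguration, even a short script can catch the most common gap. Below is a minimal sketch, assuming boto3 and AWS credentials are configured; it checks a single control (per-bucket public access blocks), where a real CSPM checks hundreds:

```python
# A minimal sketch of detecting one S3 misconfiguration class:
# buckets without a full public access block.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        cfg = cfg["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"{name}: public access only partially blocked: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured at all")
        else:
            raise
```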

2. Access Control

Poor access control allows intruders to bypass weak authentication methods. Once inside the network, many organizations do not adequately restrict lateral movement or access to resources. For example, security vulnerabilities in Amazon Web Services (AWS) put up to 90% of S3 buckets at risk for identity compromise and ransomware. The problem? Businesses failed to remove permissions that allowed users to escalate privileges to admin status.

3. Insecure APIs

APIs require access to business data but can also provide vectors for threat actors. Organizations may have hundreds or even thousands of public APIs tied to microservices, leading to a large attack surface. Insecure APIs are cited as the cause of the infamous Equifax breach, which exposed nearly 150 million consumers’ records, along with security lapses at Geico, Facebook, Peloton, and Experian.

4. Lack of Shared Responsibility

Cloud providers manage the security of the cloud, but customers are responsible for handling the security of the data stored in the cloud. Yet, many users fail to keep up their end of this shared responsibility. According to Gartner, 99% of cloud security failures are due to customer errors.

5. Vendors or Third-Party Software

Third-party cloud components mean your networks are only as secure as your vendor’s security protocols. If they are compromised, it may provide a pathway for attackers into your network.

More than half of businesses have seen a data breach caused by a third party. That’s what happened to Audi, Volkswagen, and dozens of others. The infamous REvil ransomware group exploited a vulnerability in Kaseya, a remote monitoring platform, and used it to attack managed service providers (MSPs) to gain access to thousands of customers.

How Can Cloud Users Protect Themselves?

With the acceleration of remote workers and hybrid cloud and multicloud environments, attack surfaces have increased greatly over the past few years. At the same time, hackers have become more sophisticated in their methods.

Since most security tools only work in one environment, stitching several of them together can create a complex web that becomes difficult to manage.

Figuring out how to prevent cyberattacks requires a multi-pronged approach, but it starts with understanding how all of your security tools work together across on-prem, public clouds, and private clouds. You need strategies to monitor all of your networks, including ways to:

  • Interpret access controls across both cloud-native and third-party firewalls (service chaining)
  • Continuously validate and ensure security compliance
  • Manage network segmentation policies and regulations

Security teams must be able to answer these questions:

  • What resources do we have across our cloud and on-premises environments?
  • What access is possible?
  • Are resources exposed to the public internet?
  • Do our cloud deployments meet best practices for cybersecurity?
  • Do we validate cloud network segmentation policies?

Without a comprehensive cybersecurity solution that evaluates and identifies potential risks, it will be challenging to mitigate vulnerabilities and identify the downstream impacts from security lapses. Even if you believe you have every security measure you need in place across all of your cloud resources, you need a way to visualize resources, identify potential risks, and prioritize threat mitigation.

A Comprehensive Cloud Security Posture Management Solution

Solving a problem starts with identifying it. You need a way to visualize potential vulnerabilities across your networks and cloud resources.

A Cloud Security Posture Management (CSPM) solution will identify vulnerabilities such as misconfigurations, unprotected APIs, and inadequate access controls, and will flag changes to security policies. This helps you better understand exposure risks, create more robust cloud segmentation policies, and evaluate all of your cloud vulnerabilities.

Many CSPM solutions, however, only present their findings in static, tabular form. It can be challenging to understand relationships and gain full awareness of the interconnectivity between cloud resources. Beyond just monitoring traffic, security teams also need to see how instances get to the cloud, what security points they pass through, and which ports and protocols apply.

RedSeal Classic identifies what’s on your network environments and how it’s all connected. This helps you validate security policies and prioritize potential vulnerabilities. RedSeal Classic can evaluate AWS, Azure, Google Cloud, and Oracle Cloud environments along with Layers 2, 3, 4, and 7 in your physical networks for application-based policies and endpoint information from multiple sources.

RedSeal Stratus allows users to visualize their AWS cloud and Elastic Kubernetes Service (EKS) inventory. We’re currently offering an Early Adopters program for RedSeal Stratus, our SaaS-based CSPM, including a concierge onboarding service, so you can see the benefits first-hand.

To learn more about how RedSeal can help you see how your environment is connected and what’s at risk, request a demo today.
