Cyber Insurance Isn’t Enough Anymore

The cyber insurance world has changed dramatically.

Premiums have risen significantly, and insurers are placing more limits on covered items. Industries like healthcare, retail, and government, where exposure is high, have been hit hard. Many organizations have seen huge rate increases for substantially less coverage than in the past. Others have seen their policies canceled or been unable to renew.

In many cases, insurers are offering half the coverage amounts at a higher cost. For example, some insurers that had previously issued $5 million liability policies have now reduced amounts to $1 million to $3 million while raising rates. Even with reduced coverage, some policy rates have risen by as much as 300%.

At the same time, insurers are leaving the field. Big payoffs in small risk pools can devastate profitability for insurers. Many insurers are reaching the break-even point where a single covered loss can wipe out years of profits. In fact, several major insurance companies have stopped issuing new cybersecurity insurance policies altogether.

This is due in part to incidents like the recent Merck legal victory, which forced a $1.4B payout stemming from the NotPetya malware attack. According to Fitch Ratings, more than 8,100 cyber insurance claims were paid out in 2021, the third straight year that paid claims increased by at least 100%. Payments from claims also jumped 200% annually in 2019, 2020, and 2021.

Claims are also being denied at higher rates. With such large amounts at stake, insurers are looking more closely at an organization’s policies and requiring proof that the organization is taking the right steps to protect itself. Companies need to be thinking about better ways to manage more of the cyber risks themselves. Cyber insurance isn’t enough anymore.

Dealing with Ransomware

At the heart of all of this drama is ransomware. The State of Ransomware 2022 report from Sophos includes some sobering statistics.

Ransomware attacks nearly doubled in 2021 vs. 2020, and ransom payments are higher as cybercriminals are demanding more money. In 2020, only 4% of organizations paid more than $1 million in ransoms. In 2021, that number jumped to 11%. The average ransom paid by organizations in significant ransomware attacks grew by 500% last year to $812,360.

More companies are paying the ransom as well. Nearly half (46%) of companies hit by ransomware chose to pay despite FBI warnings not to do so. The FBI says paying ransoms encourages threat actors to target even more victims.

Even with cyber insurance, it can take months to fully recover from a ransomware attack, and the incident can do significant damage to a company’s reputation. Eighty-six percent (86%) of companies in the Sophos study said they lost business and revenue because of an attack. And while 98% of cyber insurance claims were paid out, only four out of ten companies saw all of their costs covered.

There’s some evidence that cybercriminals are actively targeting organizations that have cyber insurance specifically because companies are more likely to pay. This has led to higher ransom demands, contributing to the cyber insurance crisis. At the same time, there’s been a significant increase in how cybercriminals are exacting payments.

Ransomware attackers are now often requiring two payments. The first is for the decryption key to unlock encrypted data. A second, separate payment is demanded to keep the stolen data from being released publicly. Threat actors are also hitting the same organizations more than once. When they know they’ll get paid, they often attack a company a second or third time until it locks down its security.

Protecting Yourself from Ransomware Attacks

Organizations must deploy strict guidelines and protocols for security and follow them to protect themselves. Even one small slip-up in following procedures can result in millions or even billions of dollars in losses and denied claims.

People, Processes, Tech, and Monitoring

The root cause of most breaches and ransomware attacks is a breakdown in processes, allowing an attack vector to be exploited. This breakdown often occurs because there is a lack of controls or adherence to these controls by the people using the network.

Whether organizations decide to pay the price for cyber insurance or not, they need to take proactive steps to ensure they have the right policies in place, have robust processes for managing control, and train their team members on how to protect organizational assets.

Organizations also need a skilled cybersecurity workforce to deploy and maintain protection along with the right tech tools.

Even with all of this in place, strong cybersecurity demands continuous monitoring and testing. Networks are rarely stable. New devices and endpoints are added constantly. New software, cloud services, and third-party solutions are deployed. With such fluidity, it’s important to continually identify potential security gaps and take proactive measures to harden your systems.

Identifying Potential Vulnerabilities

One of the first steps is understanding your entire network environment and potential vulnerabilities. For example, RedSeal’s cloud cybersecurity solution can create a real-time visualization of your network and continuously monitor your production environment and traffic. This provides a clear understanding of how data flows through your network to create a cyber risk model.

Users get a Digital Resilience Score which can be used to demonstrate their network’s security posture to cyber insurance providers.

This also helps organizations identify risk factors and compromised devices. RedSeal also provides a way to trace access throughout an entire network, showing where an attacker could go once inside. This helps identify places where better segmentation is required to prevent unauthorized lateral movement.

If an attack occurs, RedSeal accelerates incident response by providing a more complete road map for containment.

Cyber Insurance Is Not Enough to Protect Your Bottom Line

With escalating activity and larger demands, cyber insurance is only likely to get more expensive and harder to get. Companies will also have to offer more proof about their security practices to be successful in filing claims or risk having claims denied.

For more information about how we can help you protect your network and mitigate the risks of successful cyber-attacks, contact RedSeal today.

The Unique Security Solution RedSeal Brings to Multi-Cloud and Hybrid Network Environments

One of the most significant benefits of implementing a multi-cloud strategy is the flexibility to use the right set of services to optimize opportunities and costs.

As public cloud service providers (CSPs) have evolved, they have started to excel in different areas. For example, programmers often prefer Azure because of its built-in development tools, yet they may want their apps to run in AWS to leverage its elastic compute capabilities.

Adopting a multi-cloud strategy enables enterprises to benefit from this differentiation between providers and implement a “best of breed” model for the services they need to consume. They can also realize significant efficiencies, including cost-efficiency, by managing their cloud resources properly.

But multi-cloud solutions also bring their own challenges, from administration to security. This can be especially difficult for organizations that don’t have deep experience and knowledge across all platforms and how they interconnect. It can sometimes seem like speaking a different language. For example, AWS has a term called VPC (virtual private cloud). Google Cloud Platform (GCP) uses that term too, but it means something different. In other cases, the reverse is true: the terminology is different, but the features do the same things.

Cloud provider solutions don’t always address the needs of hybrid multi-cloud deployments. Beyond the differences in terminology, AWS, Azure, GCP, Oracle’s OCI, IBM Cloud, and others all have different user interfaces. A multi-cloud or hybrid environment can be far more difficult to secure than a single cloud.

Because of these challenges, organizations need a platform-independent solution that understands the language of each platform, can translate how multi-cloud solutions are configured and interconnected, and can help mitigate the risks.

How RedSeal Manages Multi-Cloud and Hybrid Cloud

At RedSeal, we provide the lingua franca (or bridge) for multi-cloud and on-premises networks. Security operations center (SOC) teams and DevOps get visibility into their entire network across vendors. RedSeal provides the roadmap for how the network looks and interconnects, so they can secure their entire IT infrastructure without having to be experts on every platform.

In most organizations using multi-cloud and hybrid cloud, however, network engineers and SOC teams are being asked to learn every cloud and on-prem resource and make sure they are all configured properly and secured. Many will deploy virtual cloud instances and use virtual firewalls, but as complexity rises, this becomes increasingly difficult to manage.

RedSeal is the only company that can monitor your connectivity across all of your platforms whether they are on-prem or in the cloud. This allows you to see network topology across all of your resources in one centralized platform.

Proactive Security

Proactive security is also complex. Most security offerings monitor in real-time to alert you when there’s an attack underway. That’s an important aspect of your security, but it also has a fundamental flaw. Once you recognize the problem, it’s already underway. It’s like calling 9-1-1 when you discover an emergency. Help is on the way, but the situation has already occurred.

Wouldn’t you like to know your security issues before an incident occurs?

RedSeal helps you identify potential security gaps in your network, so you can address them proactively. And, we can do it across your entire network.

Network Segmentation

Segmenting your network allows you to employ zero trust and application layer identity management to prevent lateral movement within your network. One of the most powerful things about RedSeal is that it provides the visibility you need to manage network segmentation.

It’s a simple concept, but it can also become incredibly complex — especially for larger companies.

If you’re a small business with 100 employees, segmentation may be easy. For example, you segment your CNC machine so employees don’t have admin rights to change configurations. In a mid-size or enterprise-level company, however, you can have an exponentially larger number of connections and endpoints. We’ve seen organizations with more than a million endpoints and connections that admins never even knew existed.

It’s only gotten more complex with distributed workforces, remote workers, hybrid work environments, and more third-party providers.

RedSeal can map it all and help you provide micro-segmentation for both east-west and north-south traffic.

Vulnerability Prioritization

Another area where RedSeal excels is by adding context to network vulnerability management. This allows you to perform true risk-based assessments and prioritization from your scanners. RedSeal calculates vulnerability risk scores that account for not only severity and asset value but also downstream risk based on the accessibility of vulnerable downstream assets.

In many cases, RedSeal uncovers downstream assets that organizations didn’t know were connected or vulnerable. These connections create open threat surfaces, yet they either never show up in alert logs or appear only as low-to-medium risks. SOC teams already overwhelmed with critical and high-risk alerts may never get to these hidden connections. Yet the potential damage from threat actors exploiting them could be even greater than what shows up as high risk.

RedSeal shows you the complete picture and helps you prioritize vulnerabilities so you can focus on the highest risks in your unique environment.
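
As a rough illustration of how downstream context changes prioritization, consider the toy model below. It is only a sketch of the concept, not RedSeal’s actual scoring algorithm; the hosts, CVSS-style scores, and asset values are hypothetical.

```python
from collections import deque

# Toy prioritization model: a vulnerability's priority grows with its severity, the value
# of the asset it sits on, and the value of the downstream assets reachable from it.
severity = {"web-01": 7.5, "kiosk-07": 9.8}                  # CVSS-style base scores
asset_value = {"web-01": 3, "kiosk-07": 1, "db-01": 10, "backup-01": 8}
reachable_from = {                                           # directed network access
    "web-01": ["db-01", "backup-01"],
    "kiosk-07": [],
    "db-01": [],
    "backup-01": [],
}

def downstream_value(start: str) -> int:
    """Sum the value of every asset an attacker could reach from `start`."""
    seen, queue, total = {start}, deque([start]), 0
    while queue:
        for nxt in reachable_from.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                total += asset_value[nxt]
    return total

for host, sev in sorted(severity.items(), key=lambda kv: -kv[1]):
    score = sev * (asset_value[host] + downstream_value(host))
    print(f"{host}: CVSS {sev}, contextual priority {score:.1f}")
# The 7.5 on web-01 outranks the 9.8 on the isolated kiosk, because web-01 can reach
# the database and backup servers.
```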

Play at Your Best

In the late ’90s, world chess champion Garry Kasparov faced off against Deep Blue, an IBM supercomputer, in a six-game exhibition. Kasparov won the first game, Deep Blue won the second, and the next three ended in draws. When Deep Blue won the final game and secured the overall victory, Kasparov was asked to concede that the best chess player in the world was now a computer.

Kasparov responded that people were asking the wrong question. The question isn’t whether the computer is better, but rather how you play the best game of chess. Kasparov believed he lost not because the computer was better, but because he failed to perform at his best and see all of the gaps in his play.

You can’t afford to make mistakes in your security and beat yourself. By understanding your entire network infrastructure and identifying security gaps, you can take proactive measures to perform at your best.

RedSeal is the best move for a secure environment.

Learn more about how we can help protect your multi-cloud and hybrid cloud environments. Contact RedSeal today.

Zero Trust Network Access (ZTNA): Reducing Lateral Movement

In football, scoring a touchdown means moving the ball down the field. In most cases, forward motion starts the drive to the other team’s end zone. For example, the quarterback might throw to a receiver or hand off to a running back. Network attacks often follow a similar pattern: Malicious actors go straight for their intended target by evaluating the digital field of play and picking the route most likely to succeed.

In both cases, however, there’s another option: Lateral movement. Instead of heading directly for the goal, attackers move laterally to throw defenders off guard. In football, any player with the ball can pass parallel or back down the field to another player. In lateral cyberattacks, malicious actors gain access to systems on the periphery of business networks and then move “sideways” across software and services until they reach their target.

Zero trust network access (ZTNA) offers a way to frustrate lateral attack efforts. Here’s how.

What is Zero Trust Network Access?

Zero trust network access is rooted in the notion of “need to know” — a concept that has been around for decades. The idea is simple: Access and information are only provided to those who need it to complete specific tasks or perform specific actions.

The term “zero trust” refers to the fact that trust is earned by users rather than given. For example, instead of allowing a user access simply because they provide the correct username and password, they’re subject to additional checks that verify their identity and earn the trust needed for access. The checks might include two-factor authentication, the type of device used for access, or the user’s location. Even once identity has been confirmed, further checks are conducted to ensure users have permission to access the resource or service they’re requesting.

As a result, the term “zero trust” is somewhat misleading. While catchy, it’s functionally a combination of two concepts: Least privilege and segmentation. Least privilege sees users given the minimum privilege necessary to complete assigned tasks, while segmentation focuses on creating multiple digital “compartments” within their network. That way, even if attackers gain lateral access, only a small section of the network is compromised.
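
A minimal sketch of how least privilege and contextual checks combine into a single access decision might look like the following. The request fields, roles, permission table, and country allowlist are illustrative assumptions, not a prescribed implementation; in practice these signals come from an identity provider, a device management platform, and a policy engine rather than hard-coded tables.

```python
from dataclasses import dataclass

@dataclass
class Request:                 # hypothetical attributes gathered at access time
    user: str
    mfa_passed: bool
    device_managed: bool
    country: str
    resource: str
    action: str

ROLES = {"alice": "claims-analyst", "bob": "dba"}
PERMISSIONS = {                # least privilege: each role gets only what it needs
    "claims-analyst": {("claims-db", "read")},
    "dba": {("claims-db", "read"), ("claims-db", "write")},
}
ALLOWED_COUNTRIES = {"US", "CA"}

def authorize(req: Request) -> bool:
    if not (req.mfa_passed and req.device_managed):      # trust is earned, not assumed
        return False
    if req.country not in ALLOWED_COUNTRIES:             # contextual signal
        return False
    role = ROLES.get(req.user)
    return role is not None and (req.resource, req.action) in PERMISSIONS.get(role, set())

print(authorize(Request("alice", True, True, "US", "claims-db", "read")))    # True
print(authorize(Request("alice", True, True, "US", "claims-db", "write")))   # False: not needed for her role
```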

Adoption of ZTNA is on the rise, with 96 percent of security decision-makers surveyed saying that zero trust is critical for organizational success. Recent predictions also suggest that by 2023, 60 percent of enterprises will phase out their remote access virtual private networks (VPNs) and replace them with ZTNA frameworks.

The Fundamentals of ZTNA-Based Architecture

While the specifics of a ZTNA deployment will look different for every business, there are five fundamental functions of zero-trust network access:

1. Micro-segmentation: By dividing networks into multiple zones, companies can create fine-grained and flexible security policies for each. While segments can still “talk” to each other across the network, access requirements vary based on the type of services or data they contain. This approach reduces the ability of attackers to move laterally — even if they gain network access, they’re effectively trapped in their current segment (a toy policy sketch after this list illustrates the idea).

2. Mandatory encryption: By encrypting all communications and network traffic, it’s possible to reduce the potential for malicious interference. Since attackers can’t see what’s going on inside business networks simply by eavesdropping, the scope and scale of their attacks are naturally limited.

3. The principle of least privilege: By ensuring that all users have only the minimum privilege required to do their job, evaluating users’ current permission level every time they attempt to access a system, application, or device, and removing unneeded permissions when tasks are complete, companies can ensure that a compromised user or system will not lead to complete network access.

4. Total control: By continually collecting data about potential security events, user behaviors, and the current state of infrastructure components, companies can respond ASAP when security incidents occur.

5. Application-level security: By segmenting applications within larger networks, organizations can deploy application-level security controls that effectively frustrate attacker efforts to move beyond the confines of their initial compromise point.
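
To make the micro-segmentation idea in item 1 concrete, here is a toy zone-to-zone policy table with a default-deny rule. The zones and ports are hypothetical and stand in for the firewall and cloud security-group rules that enforce segmentation in a real environment.

```python
# Default-deny, zone-to-zone policy table (zones and ports are hypothetical).
SEGMENT_POLICY = {
    ("web", "app"): {443},     # web tier may reach the app tier over HTTPS only
    ("app", "db"): {5432},     # app tier may reach the database over PostgreSQL
    # anything not listed, e.g. ("web", "db"), is denied
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    if src_zone == dst_zone:
        return True            # intra-segment traffic is governed by host-level policy
    return port in SEGMENT_POLICY.get((src_zone, dst_zone), set())

# An attacker who lands in the "web" zone cannot reach the database directly:
print(is_allowed("web", "db", 5432))   # False
print(is_allowed("app", "db", 5432))   # True
```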

Best Practices to Tackle Risk with ZTNA

When it comes to network security and lateral compromise, businesses and attackers are playing by the same rules, but in many cases, malicious actors are playing in a different league. To follow our football analogy, it’s as if security teams are playing at a high-school level while attackers are in the NFL. While the plays and the objectives are the same, one team has a distinct advantage in terms of size, speed, and skill.

ZTNA can help level the playing field — if it’s correctly implemented. Here are three best practices to make it work:

1. Implement Automation

Knowing what to segment and where to create segmentation boundaries requires a complete inventory of all desktops, laptops, mobile devices, servers, ports, and protocols on your network. Since this inventory is constantly changing as companies add new cloud-based services, collecting key data is no easy task. Manual processes could take six months or more, leaving IT teams with out-of-date inventories.

Automating inventory processes can help businesses create a functional model of their current network that is constantly updated to reflect changes, allowing teams to define effective ZTNA micro-segmentation.

2. Prioritize Proactive Response

Many businesses now prioritize the collection of “real-time” data. The problem? Seeing security event data in real-time means that incidents have already happened. By capturing complete network visibility, companies can prioritize proactive responses that limit overall risk rather than requiring remediation after the fact.

3. Adapt Access as Required

Security isn’t static. Network configurations change and evolve, meaning that ZTNA must evolve in turn. Bolstered by dynamic visibility from RedSeal, businesses can see where lateral compromise poses risk, where segmentation is working to prevent access, and where changes are necessary to improve network security.

Solving for Sideways Security

Security is a zero-sum game: If attackers win, companies lose. But the reverse is also true. If businesses can prevent malicious actors from gaining lateral access to key software or systems, they come out ahead. The challenge? One-off wins aren’t enough; businesses need consistent control over network access to reduce their total risk.

ZTNA can help reduce the sideways security risks by minimizing available privilege and maximizing network segmentation to keep attackers away from high-value data end zones and instead force functional turnovers to network security teams.

Download our Zero Trust Guide today to get started.

The House Always Wins? Top Cybersecurity Issues Facing the Casino and Gaming Industry

Head into a casino, and you should know what you’re getting into — even if you see some success at the beginning of the night, the house always wins. It’s a truism often repeated and rarely questioned, but when it comes to cybersecurity, many casino and gaming organizations aren’t coming out ahead.

In this post, we’ll dive into what sets this industry apart, tackle the top cybersecurity issues facing casino and gaming companies, and offer a solid bet to help build better security infrastructure.

Doing the Math: Why Casinos and Gaming Businesses are at Greater Risk

Gaming and casino industry companies generate more than $53 billion in revenue each year. While this is a big number, it’s nothing compared to the U.S. banking industry, which reached an estimated $4,847.9 billion in 2021. And yet, at roughly 1/100 the size of their financial counterparts, casinos now face rapidly increasing attack volumes.

In 2017, for example, a network-connected fish tank was compromised by attackers and used as the jumping-off point for lateral network movement. In 2020, the Cache Creek Casino Resort in California shut down for three weeks after a cyberattack, and in 2021 six casinos in Oklahoma were hit by ransomware.

So what’s the difference? Why are casinos and gaming companies being targeted when there are bigger fish to fry? Put simply, it’s all about the connected experience. Where banks handle confidential personal information to deliver specific financial functions, casinos collect a broader cross-section of information including credit card and income information, social security numbers, and basic tombstone data to provide the best experience for customers on-site. As a result, there’s a greater variety of data for hackers to access if they manage to breach network perimeters.

Casinos and gaming companies also have a much larger and more diverse attack surface. Where banks perform specific financial functions and have locked down access to these network connections, casinos have a host of Internet-connected devices designed to enhance the customer experience but that may also empower attackers. IoT-enabled fish tanks are one example, but gaming businesses also use technologies like always-connected light and temperature sensors, IoT-enabled slot machines, and large-scale WiFi networks to keep customers coming back.

In practice, this combination of connected experience and disparate technologies creates a situation that sees IT teams grow arithmetically while attacks grow geometrically. This creates a challenge: No matter how quickly companies scale up the number of staff on their teams, attackers are ahead.

Not only are malicious actors willing to share data about what works and what doesn’t when it comes to breaching casino cybersecurity, but they’re constantly trying new approaches and techniques to streamline attack efforts. IT teams, meanwhile, don’t have the time or resources to experiment.

The Top Four Cybersecurity Issues Facing Casino and Gaming Companies

When it comes to keeping customer and business data secure, gaming and casino companies face four big issues.

  1. IoT Connections
    While IoT devices such as connected thermostats, refrigerators, and even fish tanks are becoming commonplace, robust security remains rare. Factory firmware often contains critical vulnerabilities that aren’t easily detected or mitigated by IT staff, in turn creating security holes that are hard to see and even more difficult to eliminate.
  2. Ransomware Attacks
    Ransomware continues to plague companies; recent survey data found that 49 percent of executives and employees interviewed said their company had been the victim of ransomware attacks. This vector is especially worrisome for casinos and gaming companies given both the volume and variety of personal and financial data they collect and store. Successful encryption of data could shut companies down for days or weeks and leave them with a difficult choice: Pay up or risk massive market fallout.
  3. Exfiltration Issues
    Collected casino and gaming data is also valuable to attackers as a source of income through Dark Web sales. By quietly collecting and exfiltrating data, hackers can generate sustained profit in the background of casino operations while laying the groundwork for identity theft or credit card fraud.
  4. Compliance Concerns
    If casinos are breached, they may face compliance challenges on multiple fronts. For example, breached credit card data could lead to PCI DSS audits, and if businesses are found to be out of compliance, the results could range from substantial fines to a suspension of payment processing privileges. Compromised personal data, meanwhile, could put companies at risk of not meeting regulatory obligations under evolving privacy laws such as the California Consumer Privacy Act (CCPA).

Betting on Better Security

Once attackers have access to casino networks, they’ve got options. They could encrypt data using ransomware and demand payment for release — which they may or may not provide, even if payment is made — or they could quietly exfiltrate customer data and then sell this information online. They could also simply keep quiet and conduct reconnaissance of new systems and technologies being deployed, then use this information to compromise key access points or sell it to the highest bidder.

The result? When it comes to protecting against cyberattacks, businesses are best served by stopping attacks before they happen rather than trying to pick up the pieces after the fact. For networks as complex and interconnected as those of casinos, achieving this goal demands complete visibility.

This starts with an identification of all devices across network architecture, from familiar systems such as servers and storage to staff mobile devices and IoT-connected technologies. By identifying both known and unknown devices, companies can get a picture of what their network actually looks like — rather than what they expect it to be.

RedSeal can help casinos achieve real-time visibility by creating a digital twin of existing networks, both to identify key assets and assess key risks by discovering the impact of network changes. For example, casinos could choose to run a port and protocol simulation to determine the risk of opening or closing specific ports — without actually making these changes on live networks. RedSeal can also help segregate key data storage buckets to mitigate the impact of attacks if systems are compromised.
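
As a simplified sketch of the kind of what-if simulation described above (not RedSeal’s implementation), a model of the network can be queried before and after a proposed rule change to see which assets become newly reachable. The zones, ports, and topology below are hypothetical.

```python
from collections import deque

# A toy "digital twin": each edge lists the ports currently permitted between two zones.
MODEL = {
    ("internet", "guest-wifi"): {443},
    ("guest-wifi", "iot-vlan"): {8883},
    ("iot-vlan", "player-db"): set(),          # currently closed
}

def reachable(model, start="internet"):
    """Return every node an attacker could reach from `start` over any permitted port."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for (src, dst), ports in model.items():
            if src == node and ports and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

before = reachable(MODEL)
what_if = dict(MODEL)
what_if[("iot-vlan", "player-db")] = {1433}    # simulate opening SQL Server access
print(reachable(what_if) - before)             # {'player-db'}: the asset newly exposed
```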

Helping the House Win

Attackers are trying to tip the odds in their favor by compromising connected devices and leveraging unknown vulnerabilities. RedSeal can help the house come out ahead by delivering real-time visibility into casino and gaming networks that help IT teams make informed decisions and stay ahead of emerging cybersecurity challenges.

Ready to tip the odds in your favor? Start with RedSeal.

HIMSS Roundup: What’s Worrying Healthcare Organizations?

Held from March 14 to 18 in Orlando, Florida, the HIMSS 22 Global Health Conference and Exhibition took aim at some of the biggest opportunities and challenges facing healthcare organizations this year.

While businesses are taking their own paths to post-pandemic operations, both the content of sessions and conversations with attendees revealed three common sources of concern: compliance operations, the Internet of Healthcare Things (IoHT), and patient access portals.

Top-of-Mind Issues in Healthcare Security

For the past few years, effective healthcare security has been inextricably tied to ransomware risk reduction and remediation. It makes sense: According to Josh Corman, head of the Cybersecurity and Infrastructure Security Agency (CISA) COVID-19 task force, “Hospitals’ systems were already fragile before the pandemic. Then the ransomware attacks became more varied, more aggressive, and with higher payment demands.” As a result, ransomware has become a top priority for healthcare organizations looking to protect patient data and limit operational impacts.

Conversations with healthcare and IT professionals at HIMSS 22, however, made it clear that what worries organizations is changing. To ensure effective security, responses must evolve as well.

Top Issue #1: Compliance with Evolving Government Regulations and Security Mandates

Not surprisingly, many HIMSS attendees expressed concern about evolving government regulations and security mandates.

Attendees spoke to issues around familiar mandates such as the Health Insurance Portability and Accountability Act (HIPAA) and the Payment Card Industry Data Security Standard (PCI DSS). Many were worried about their ability to understand the full scope of software and services on their networks, along with the number and nature of connections across those networks. Mergers and acquisitions (M&A) were also mentioned as potential failure points for compliance. As healthcare markets begin to stabilize, M&A volumes are increasing, in turn leading to challenges with IT systems integration that can create complex and cumbersome overlaps or, more worrisome, gaps in security.

When it comes to security mandates, meanwhile, many organizations understand the need for improved policies and procedures to help mitigate risk but struggle to make the shift from theory to action. Consider a recent survey which found that 74 percent of US healthcare organizations still lack comprehensive software supply chain risk management policies, despite directives such as President Biden’s May 2021 executive order on improving national cybersecurity in part through the use of zero trust frameworks, multi-factor authentication policies, and software bill of materials (SBOM) implementation.

The result is a growing concern for healthcare organizations. If regular audits conducted by regulatory bodies identify non-compliance, companies could face fines or sanctions. Consider the failure of a PCI DSS audit. If it’s determined that organizations aren’t effectively safeguarding patients’ financial data, they could lose the ability to process credit cards until the problem is addressed.

Top Issue #2: The Internet of Healthcare Things (IoHT)

IoHT adoption is on the rise. These connected devices, which include everything from patient wearables to hospital beds to lights and sensors, provide a steady stream of actionable information that can help organizations make better decisions and deliver improved care. But more devices mean more potential access points for attackers, in turn putting patient data at risk.

Effectively managing the growing IoHT landscape requires isolation and segmentation—the ability to pinpoint potential device risks and take action before attackers can exploit vulnerabilities. There’s also a growing need to understand the “blast radius” associated with IoHT if attackers are able to compromise a digitally-connected device and move laterally across healthcare networks to access patient, staff, or operational information. From data held for ransom to information exfiltrated and sold to the highest bidder, IoHT networks that lack visibility significantly increase the chance of compromise.

The Internet of Healthcare Things also introduces the challenge of incident detection. As noted by HIPAA Journal,  while the average time to detect a healthcare breach has been steadily falling over the past few years, it still takes organizations 132 days on average to discover they’ve been compromised.

Top Issue #3: Patient Access Portals

Patient access portals are a key component in the “next normal” of healthcare. Along with telehealth initiatives, these portals make it possible for patients to access medical information on-demand, anywhere, and anytime. They also allow medical staff to find key patient data, enter new information, and identify patterns in symptoms or behavior that could help inform a diagnosis.

But these portals also represent a growing security concern: unauthorized access. If the wrong person gains access to patient records, healthcare companies could find themselves exposed to both legal and regulatory risks. In part, this access risk stems from the overlap of legacy and cloud-based technologies. Many organizations still leverage outdated servers or on-premises systems while simultaneously adopting the cloud for new workloads. The result is a patchwork of overlapping and sometimes conflicting access policies, which can frustrate legitimate users and create avenues of compromise for attackers.

Addressing Today’s Pressing Healthcare Security Concerns

While meeting regulatory obligations, managing IoHT devices, and monitoring patient portals all come with unique security concerns, effectively managing all three starts with a common thread: visibility.

If healthcare organizations can’t see what’s happening on their network, they can’t make informed decisions when it comes to improving overall security. Consider IoHT. As the number of connected devices grows, so does the overall attack surface. With more devices on the network, attackers have more potential points of access to exploit, in turn increasing total risk. Complete visibility helps reduce this risk.

By deploying solutions that make it possible to view healthcare networks as a comprehensive, dynamic visualization, it’s possible for companies to validate network and device inventories, ensure critical resources aren’t exposed to public-facing connections, and prioritize detected vulnerabilities based on their network location and potential access risk. Additional tools can then be layered onto existing security frameworks to address specific concerns or eliminate critical vulnerabilities, in turn providing greater control over healthcare networks at scale.

The automation of key tasks—such as regular, internal IT audits—is also critical to improving healthcare security. Given the sheer number of devices and connections across healthcare networks, even experienced IT teams aren’t able to keep pace with changing conditions. Tools capable of automating alert capture and performing rudimentary analysis to determine if alerts are false positives or must be escalated for remediation can significantly reduce complexity while increasing overall security.
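
A toy example of the kind of rudimentary triage described above might look like the following. The alert fields, allowlists, and queue names are hypothetical; a real deployment would pull them from a SIEM and an approved-change database rather than hard-coded lists.

```python
from dataclasses import dataclass

@dataclass
class Alert:                                  # hypothetical alert shape
    source_ip: str
    destination: str
    signature: str
    severity: str                             # "low", "medium", "high", "critical"

KNOWN_SCANNERS = {"10.20.0.5"}                # authorized vulnerability scanners
BENIGN_SIGNATURES = {"policy-test-ping"}      # produced by routine health checks

def triage(alert: Alert) -> str:
    """Return 'suppress' for likely false positives, otherwise an escalation queue."""
    if alert.source_ip in KNOWN_SCANNERS or alert.signature in BENIGN_SIGNATURES:
        return "suppress"
    if alert.severity in ("high", "critical"):
        return "escalate-to-analyst"
    return "review-in-daily-queue"

print(triage(Alert("10.20.0.5", "ehr-db-01", "smb-enum", "medium")))   # suppress
print(triage(Alert("203.0.113.7", "ehr-db-01", "smb-enum", "high")))   # escalate-to-analyst
```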

Handling Healthcare Worries

Peace of mind for healthcare organizations is hard to come by—and even harder to maintain. Evolving concerns around compliance, IoHT, and patient portals present new challenges that require new approaches to effectively monitor, manage and mitigate risks.

Thankfully, improving visibility offers a common starting point to help solve these security challenges. Armed with improved knowledge of network operations, healthcare companies are better equipped to pinpoint potential threats, take appropriate action, and reduce their total risk.

See what matters most: Get complete network visibility with RedSeal. 

Zero Trust: Back to Basics

The 2021 Executive Order on Improving the Nation’s Cybersecurity requires agencies to move toward zero trust in a meaningful way as part of modernizing infrastructure. Yet federal agencies typically find zero trust challenging to implement. While fine in theory, the difficulty often lies in legacy systems and on-premises networks whose tendrils reach into multiple locations, many of which are unknown.

Identity management and authentication tools are an important part of network security, but before you can truly implement zero trust, you need an understanding of your entire infrastructure. Zero trust isn’t just about identity. It’s also about connectivity.

Take a quick detour here. Let’s say you’re driving a tractor-trailer hauling an oversized load. You ask Google Maps for the fastest route, and it plots one out for you. However, you find that part of the route is a one-lane dirt road your rig can’t fit down. So you go back to your mapping software and find alternate routes. Depending on how much time you have, the number of alternative pathways to your final destination is nearly endless.

Computer security needs to think this way, too. Even if you’ve blocked the path for threat actors in one connection, how else could they get to their destination? While you may think traffic only flows one way on your network, most organizations find there are multiple pathways they never knew (or even thought) about.

To put effective security controls in place, you need to go back to basics with zero trust. That starts with understanding every device, application, and connection in your infrastructure.

Zero Trust Embodies Fundamental Best-Practice Security Concepts

Zero trust returns to the basics of good cybersecurity by assuming there is no traditional network edge. Whether it’s local, in the cloud, or any combination of hybrid resources across your infrastructure, you need a security framework that requires everyone touching your resources to be authenticated, authorized, and continuously validated.

By providing a balance between security and usability, zero trust makes it more difficult for attackers to compromise your network and access data. While providing users with authorized access to get their work done, zero-trust frameworks prevent unauthorized access and lateral movement.

By properly segmenting your network and requiring authentication at each stage, you can limit the damage even if someone does get inside your network. However, this requires a firm understanding of every device and application that is part of your infrastructure, as well as your users.

Putting Zero Trust to Work

The National Institute of Standards and Technology (NIST) Special Publication 800-207 provides the conceptual framework for zero trust that government agencies need to adopt, and the NIST Risk Management Framework describes a process agencies can use to put it into practice.

The risk management framework has seven steps:

  1. Prepare: map and analyze the network
  2. Categorize: assess risk at each stage and prioritize
  3. Select: determine appropriate controls
  4. Implement: deploy zero trust solutions
  5. Assess: ensure solutions and policies are operating as intended
  6. Authorize: certify systems and workflows are ready for operation
  7. Monitor: provide continuous monitoring of security posture

In NIST’s subsequent draft white paper on planning for a zero-trust architecture, the agency reinforces the crucial first step: mapping the attack surface and identifying the key parts that could be targeted by a threat actor.

Instituting zero trust security requires detailed analysis and information gathering on devices, applications, connectivity, and users. Only when you understand how data moves through your network, and all the different ways it can move, can you implement segmentation and zero trust.

Analysts should identify options to streamline processes, consolidate tools and applications, and sunset any vulnerable devices or access points. This includes defunct user accounts and any non-compliant resources.

Use Advanced Technology to Help You Perform Network Analysis

Trying to map your network manually is nearly impossible. No matter how many people you task to help and how long you have, things will get missed. Every device, appliance, configuration, and connection has to be analyzed. Third parties and connections to outside sources need to be evaluated. At the same time you’re conducting this inventory, things are in a constant state of change which makes it even easier to miss key components.

Yet, this inventory is the foundation for implementing zero trust. If you miss something, you leave security gaps within your infrastructure.

The right network mapping software for government agencies can automate this process by gathering the information for you. Network mapping analysis can calculate every possible pathway through the network, taking into account network address translation (NAT) and load balancing. During this stage, most organizations uncover a surprising number of previously unknown pathways. Each connection point needs to be assessed for whether it is needed and whether it can be closed to reduce the attack surface.
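
As a simplified illustration of exhaustive path calculation (ignoring routing metrics, stateful rules, and most other real-world detail), every simple path through a small network model can be enumerated with a depth-first search, with static NAT resolved before tracing. The topology, addresses, and NAT entry below are assumptions for the example.

```python
# Enumerate every simple path from the network edge to a sensitive host, resolving a
# static NAT entry before tracing. Topology, addresses, and device names are hypothetical.
TOPOLOGY = {
    "edge-fw": ["dmz-lb", "vpn-gw"],
    "dmz-lb": ["web-01", "web-02"],
    "vpn-gw": ["corp-core"],
    "web-01": ["corp-core"],          # a forgotten rule: a DMZ host can reach the core
    "web-02": [],
    "corp-core": ["hr-db"],
    "hr-db": [],
}
STATIC_NAT = {"198.51.100.10": "web-01"}   # public address translated to the DMZ host

def all_paths(src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in TOPOLOGY.get(src, []):
        if nxt not in path:                # avoid loops
            yield from all_paths(nxt, dst, path)

for start in ("edge-fw", STATIC_NAT["198.51.100.10"]):   # perimeter view, then the NAT'd entry point
    for p in all_paths(start, "hr-db"):
        print(" -> ".join(p))
# edge-fw -> dmz-lb -> web-01 -> corp-core -> hr-db
# edge-fw -> vpn-gw -> corp-core -> hr-db
# web-01 -> corp-core -> hr-db
```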

Automated network mapping will also provide an inventory of all the gear on your network and IP space in addition to your cloud and software-defined network (SDN) assets. Zero trust requires you to identify who and what can access your network, and who should have that access.

Once you have conducted this exhaustive inventory, you can then begin to implement the zero-trust policies with confidence.

Since your network is in a constant state of evolution with new users, devices, applications, and connectivity being added, changed, or revised, you also need continuous monitoring of your network infrastructure to ensure changes remain compliant with your security policies.

Back to the Basics

The conversation about zero trust often focuses narrowly on identity. Equally important are device inventory and connectivity. The underlying goal of zero trust is allowing only specific authorized individuals to access specific things on specific devices. Before you can put in place adequate security controls, you need to know about all of the devices and all the connections.

RedSeal provides network mapping, inventory, and mission-critical security and compliance services for government agencies and businesses, and is Common Criteria certified. To implement a zero-trust framework successfully, you need to understand the challenges and strategies involved.

Download our Zero Trust Guide today to get started.

Zero Trust: Shift Back to Need to Know

Cyberattacks on government agencies are unrelenting. Attacks on government, military, and contractors rose by more than 47% in 2021 and will likely continue to climb. Today’s cybercriminals, threat actors, and state-sponsored hackers have become more sophisticated and continue to target government data and resources.

The recent Executive Order on Improving the Nation’s Cybersecurity directs federal agencies to take decisive action and work with the private sector to improve cybersecurity. The EO puts it bluntly:

“The United States faces persistent and increasingly sophisticated malicious cyber campaigns that threaten the public sector, the private sector, and ultimately the American people’s security and privacy. The Federal Government must improve its efforts to identify, deter, protect against, detect, and respond to these actions and actors.”

The Office of Management and Budget (OMB) also issued a memorandum for agencies to improve investigative and remediation capabilities, including:

  • Centralizing access and visibility
  • More defined logging, log retention, and log management
  • Increased information sharing
  • Accelerated incident response efforts
  • More effective defense of information

In light of continued cyber-attacks, the EO requires bold and significant investments to protect and secure systems and data. This represents a cultural shift away from the somewhat relaxed security environment created over time as legacy systems continued to grow and migrate to cloud resources.

Security concerns only grew with the rapid shift to remote work. Agencies had to scramble to redefine infrastructure to accommodate remote workers, which significantly increased the attack surface.

For governmental agencies, hardening security requires a return to “need to know” using zero trust security protocols.

Zero Trust Security: What Is It?

Zero trust is a security framework that requires authentication and authorization for all users on the network. Traditionally, networks have focused on security at the edge, managing access points. However, once someone penetrated that edge, they could access additional network resources. As a result, many attackers were able to escalate privileges and expand the damage they caused.

Zero trust requires users to be re-authorized at every connection to prevent unauthorized and lateral movement for users on the network. This prevents access to resources except for those with a need to know and need to access.

Current Cloud Security Measures Can Fall Short

The rising adoption of cloud services has changed the makeup of most agency infrastructures. Lax cloud security measures can expose organizations to risk and harm, and incremental improvements are not keeping pace.

Factors that leave openings for threat actors include:

  • Gaps in information technology (IT) expertise and challenges in hiring
  • Problems with cloud migration
  • Unsecured application programming interfaces (APIs)
  • Vulnerabilities in third-party providers
  • The complexity of security in multi-cloud and hybrid cloud environments

Zero trust is an important weapon in the battle against cyber threats, yet adoption is far from universal. The recent Cost of a Data Breach report from the Ponemon Institute found that only 35% of organizations employ a zero-trust framework as part of their cybersecurity protocols. This leaves agencies and businesses open to attack.

Besides protecting networks and data, there’s also a significant financial benefit to deploying zero trust. While breaches can still occur even when zero trust is in place, the average cost to mitigate a breach for organizations with a mature zero-trust framework was $1.76 million less than for those without zero trust deployed.

Zero Trust and the Return to Need to Know

Intelligence agencies have employed the practice of “need to know” for years. Sensitive and confidential data is restricted to only those that have a specific need for access. In cybersecurity, zero trust includes the concept of least privilege, which only allows users access to the information and resources they need to do their job.

Contrast zero trust with the practice of edge security, which is in wide use today. Edge security is like putting a security perimeter around the outside of your home or building. Once inside the perimeter, visitors are free to move from room to room. The principle of least privilege gives them access to the rooms—and the things within each room—only if they have a need to know.

With zero trust in place, visitors won’t even be able to see the room unless they are authorized for access.

Building a Zero Trust Architecture

Building a zero-trust architecture requires an understanding of your infrastructure, applications, and users. By mapping your network, you can see how devices and applications connect and pathways where security is needed to prevent unauthorized access.

A zero-trust approach requires organizations to:

  • Verify and authenticate every interaction, including user identity, location, device integrity, workload, and data classification
  • Use the principle of least privilege using just-in-time and just-enough-access (JIT/JEA) with adaptive risk policies
  • Remove implicit trust when devices or applications talk to each other, and institute robust device access controls
  • Assume breach and employ micro-segmentation to prevent lateral movement on a need-to-know basis
  • Implement proactive threat prevention, detection, and mitigation

Mitigating Insider Threats

Zero trust also helps mitigate threats from insiders by restricting access to non-authorized resources and logging activity within the network.

When we think about data breaches, we generally think about threat actors from outside our network, but there’s also a significant threat from insiders. The 2021 Data Breach Investigations Report (DBIR) from Verizon suggests that as many as 22% of all data breaches involve insiders.

According to the Government Accountability Office (GAO), risks to IT systems are increasing, including insider threats from witting and unwitting employees.

Managing Complex Network Environments

As organizations have grown, network environments have become incredibly complex. You need a deep understanding of all of the appliances, applications, devices, public cloud, private cloud, multi-cloud, and on-premises resources and how they are connected.

RedSeal automatically maps your infrastructure and provides a comprehensive, dynamic visualization. With RedSeal, you can identify any exposed resources in the cloud, visualize access across your network, demonstrate network compliance and configuration standards, and prioritize vulnerability for mitigation.

For more information about implementing zero trust for your organization, download the complimentary RedSeal Guide: Tips for Implementing Zero Trust. Learn about the challenges and get insights from the security professionals at RedSeal.

Ransomware Realities: Exploring the Risks to Hybrid Cloud Solutions

Hybrid cloud frameworks offer a way for companies to combine the scalability of public clouds with the security and control of their private counterparts. Pandemic pressures have accelerated hybrid adoption. According to recent survey data, 61 percent of companies currently use or pilot hybrid clouds, while 33 percent have plans to implement hybrid options in the next two years. Meanwhile, research firm Gartner points to growing cloud ubiquity across enterprise environments driven by hybrid, multi-cloud, and edge environments.

Along with increased uptake, however, is a commensurate uptick in ransomware risks. With attackers leveraging the distributed nature of remote work environments to expand their attack impact, organizations must recognize potential challenges and develop frameworks to mitigate ransomware threats effectively.

What Are the Ransomware Risks of a Hybrid Cloud Environment?

Because hybrid clouds rely on a combination of public and private solutions, the overall ransomware attack surface is effectively doubled.

Consider the recent ransomware attack on payroll provider Kronos. As noted by CPO Magazine, after details of the Log4Shell vulnerability in the widely used Java logging library Log4j were made public on December 9th, hundreds of thousands of ransomware attacks were launched worldwide. One likely victim was Kronos, whose private cloud was forced offline after a ransomware attack that led to weeks of remediation. Private clouds are also under threat as attacks shift from outside to inside — even a single disgruntled employee with administrative access could wreak havoc on internal clouds by simply ignoring email protection warnings or clicking through on malicious links.

Public cloud providers, including Amazon Web Services (AWS), Google Cloud, and Azure, have begun publishing articles and offering resources to help mitigate the impact of ransomware in the cloud. While large-scale public cloud services have yet to report a major ransomware attack, it’s a matter of when, not if, these attacks occur.

In practice, successful attacks on public or private clouds can lead to severe consequences.

Systems Downtime

Ransomware attackers encrypt key files and demand payment for their release. As a result, the first line of defense against a spreading attack is shutting down affected systems to focus on remediation. Cybercriminals may also pair ransomware with distributed denial of service (DDoS) attacks, which force systems offline by overloading them with traffic and resource requests even as ransomware is deployed behind network lines.

Depending on the scale and severity of the attack, it could take days or weeks for IT teams to discover the full extent of the damage, remediate the issue and bring systems back online.

Monetary Loss

As noted by Dark Reading, the average ransomware payout hit $570,000 in the first quarter of 2021, up more than $250,000 from the 2020 average of $312,000.

But initial payouts are just the start of the problem. Even if attackers return control of critical files, companies must still spend time and money identifying the vulnerabilities that made ransomware attacks possible in the first place. Then, they must spend even more money remediating these issues and testing their new security frameworks.

There’s also the potential risk of costly data loss if enterprises choose not to pay and instead look to decrypt data using available security tools — or if they pay up and attackers aren’t true to their word. If security solutions aren’t able to remove ransomware before the deadline or criminals can’t (or won’t) decrypt data, companies are left with the daunting and expensive task of building data stores back up from scratch.

Reputation Damage

Eighty-eight percent of customers won’t do business with a brand they don’t trust to handle their data. Ransomware is a red flag when it comes to trust. Even if such attacks are inevitable, customers want to know that companies took every possible precaution to prevent data loss and need the confidence that comes with clear communication about the next steps.

As a result, the loss of data due to ransomware or the inability to articulate how information recovery will occur and how data will be better defended going forward can damage organizations. After a ransomware attack, businesses often face negative impacts on reputation, reduced customer confidence, and revenue losses.

Legal Challenges

Evolving regulations such as the General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), Payment Card Industry Data Security Standard (PCI DSS), and the Health Insurance Portability and Accountability Act (HIPAA) all include provisions around the safe collection, storage, and use of data. Failure to comply with these regulations can lead to fines and legal challenges if ransomware attacks are successful.

Hybrid Cloud Security Measures

While it’s not possible to eliminate ransomware in hybrid cloud environments, there are steps you can take to reduce overall risk.

1. Deploying Offline Backups

If ransomware attacks are successful, malicious code can encrypt any connected devices. These include physically attached devices such as universal serial bus (USB) sticks or hard drives along with any online, cloud-connected drives across both public and private clouds.

To help mitigate this risk, it’s worth deploying secure offline backups that are disconnected from internal hosts and external data sources once backup processes are complete. This applies to private cloud backups as well: to reduce ransomware impact, companies are best served by establishing a data backup schedule that includes provisions for device connection, data transfer, and device disconnection once the backup is complete. By rotating multiple offline devices that are regularly backed up and then disconnected, businesses can ensure that data remains available even if primary systems are compromised by ransomware.
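
A minimal sketch of that connect, transfer, disconnect cycle, assuming a Linux host with rsync installed and a dedicated backup volume at /dev/sdb1 (the device and paths are hypothetical, and the script needs root privileges to mount), might look like this; a real rotation would also verify and encrypt the copies.

```python
import datetime
import subprocess

DEVICE = "/dev/sdb1"                  # hypothetical removable backup volume
MOUNT_POINT = "/mnt/offline-backup"
SOURCE = "/var/data/"                 # hypothetical data directory to protect

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)   # raise immediately if any step fails

def offline_backup() -> None:
    run(["mount", DEVICE, MOUNT_POINT])                                      # 1. connect the volume
    try:
        stamp = datetime.date.today().isoformat()
        run(["rsync", "-a", "--delete", SOURCE, f"{MOUNT_POINT}/{stamp}/"])  # 2. transfer the data
    finally:
        run(["umount", MOUNT_POINT])                                         # 3. disconnect, even on failure

offline_backup()
```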

2. Implementing Two-Factor Authentication

Frustrating attacker efforts to gain network access can significantly reduce the risk of ransomware. Best bet? Start with two-factor authentication (2FA). While it remains relatively easy for attackers to compromise passwords using both social engineering and brute-force attacks, implementing 2FA solutions that leverage one-time text codes or biometric data can help protect networks even if account credentials are breached. What’s more, failed 2FA checks that accompany correct account information can signal to information technology (IT) teams that attack efforts may be underway, in turn allowing them to respond and remediate threats proactively.

Even more protection is available through multi-factor authentication (MFA) strategies that combine text codes and biometrics to frustrate attackers further. It’s also vital to create strong password policies that mandate regular password changes and include rules around required password length and the use of special characters or symbols to increase overall protection. While passwords remain one of the least secure forms of data defense, they’re not going anywhere. As a result, companies must address common password problems before they lead to compromise.
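
For illustration, a server-side check of the kind of one-time text or app code mentioned above can be implemented with nothing but the Python standard library, following RFC 6238. This is a sketch only; production systems typically rely on a vetted authentication service or library, and the secret below is a placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def _hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226 HMAC-based OTP: HMAC-SHA1 over the big-endian counter, then dynamic truncation.
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_totp(secret_b32: str, submitted: str, interval: int = 30, drift: int = 1) -> bool:
    # RFC 6238 TOTP: derive the counter from the current time and allow +/- `drift`
    # intervals of clock skew between the server and the user's device.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    return any(
        hmac.compare_digest(_hotp(key, counter + d), submitted)
        for d in range(-drift, drift + 1)
    )

# The server stores a base32 secret per user at enrollment time (placeholder value below).
print(verify_totp("JBSWY3DPEHPK3PXP", "123456"))
```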

3. Disabling Well-Known Ports

While attackers are constantly developing new methods and leveraging newly-discovered vulnerabilities to distribute ransomware code, they’re also creatures of habit. If specific attack vectors continue to see success, they won’t abandon them simply because something new comes along.

Case in point: well-known service ports such as 137-139 (NetBIOS), 445 (SMB), and 3389 (RDP) are common attack targets. By disabling or blocking these ports where they aren’t needed, businesses can remove some of the most-used ransomware distribution pathways, in turn forcing attackers to take more circuitous routes if they want to compromise and infect public and private cloud systems.
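
A quick way to verify that these ports are actually closed is to audit hosts from inside the network. The sketch below uses a plain TCP connect check against hypothetical internal addresses; note that 137 and 138 are primarily UDP, so a TCP probe on those two is only a rough signal.

```python
import socket

# Ports discussed above: NetBIOS (137-139), SMB (445), and RDP (3389).
RISKY_PORTS = [137, 138, 139, 445, 3389]

def audit_host(host: str, timeout: float = 1.0) -> list[int]:
    """Return the risky TCP ports that accept connections on `host`."""
    open_ports = []
    for port in RISKY_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the TCP handshake succeeded
                open_ports.append(port)
    return open_ports

for host in ["10.0.1.10", "10.0.1.11"]:           # hypothetical internal addresses
    exposed = audit_host(host)
    if exposed:
        print(f"{host}: close or firewall ports {exposed}")
```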

4. Turning off RDP

The remote desktop protocol (RDP) allows users to connect with another computer over a network connection and provides a graphical user interface to help streamline this process. The problem? Attackers can exploit insecure RDP deployments — which typically use transmission control protocol (TCP) port 3389 and UDP port 3389 — to access user desktops and, in turn, move laterally through corporate systems until they find and encrypt critical files.

While it’s possible to protect RDP with increased security measures, the collaborative nature of cloud deployments often makes it simpler to disable RDP up-front to reduce total risk.
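
A simple reachability check can flag hosts that still answer on TCP 3389 so RDP can be disabled or firewalled. The host list below is illustrative, and scans like this should only be run against systems you are authorized to test.

```python
# Minimal sketch: audit a list of hosts for exposed RDP (TCP 3389).
import socket

HOSTS = ["10.0.1.10", "10.0.1.11", "app01.internal.example.com"]  # placeholders
RDP_PORT = 3389

def rdp_listening(host: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to port 3389 succeeds."""
    try:
        with socket.create_connection((host, RDP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    status = "RDP reachable -- disable or restrict it" if rdp_listening(host) else "RDP not reachable"
    print(f"{host}: {status}")
```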

5. Updating to SMB 3.1.1

The Server Message Block (SMB) protocol provides a way for client applications to read and write to files and request server resources. Originally introduced for the disk operating system (DOS) as SMB 1.0, SMB has undergone multiple iterations, with the most current version being 3.1.1. To help protect cloud services from potential ransomware attacks, businesses should upgrade to version 3.1.1 and ensure that version 1.0 is fully disabled. Leaving SMB 1.0 available gives attackers the option of exploiting its known flaws, such as EternalBlue, the vulnerability behind WannaCry, to compromise systems and install ransomware.
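
On Windows servers, the built-in Get-SmbServerConfiguration and Set-SmbServerConfiguration cmdlets can confirm and enforce this. The sketch below drives them from Python as one possible automation approach; it assumes an elevated session on a reasonably modern Windows Server build.

```python
# Minimal sketch for Windows hosts: check whether SMB 1.0 is enabled and
# disable it server-side using the built-in SmbServerConfiguration cmdlets.
import subprocess

def powershell(command: str) -> str:
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Report whether the insecure SMB 1.0 dialect is still enabled.
smb1_state = powershell("(Get-SmbServerConfiguration).EnableSMB1Protocol")
print("SMB 1.0 enabled:", smb1_state)

# Disable SMB 1.0 on the server side (clients may also need cleanup).
if smb1_state.lower() == "true":
    powershell("Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force")
    print("SMB 1.0 disabled; modern clients will negotiate SMB 3.x dialects.")
```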

6. Ensuring Encryption is Used for All Sessions

Encryption helps reduce the risk of compromise by making it harder for attackers to discover and exploit critical resources. Ideally, companies should use transport layer security (TLS) v1.3 for maximum protection. Much like SMB 1.0, older TLS versions should also be disabled. Why? If TLS v1.0 remains enabled, attackers can force your server to negotiate down to it in a downgrade attack, which could, in turn, open the door to known exploits.
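
In application code, enforcing that floor is straightforward. The sketch below shows a Python client context that refuses anything older than TLS 1.3, so a downgrade attempt simply fails; it assumes Python 3.7+ with an OpenSSL build that supports TLS 1.3, and the host name is a placeholder.

```python
# Minimal sketch: a client-side TLS context with a TLS 1.3 minimum version.
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3  # reject downgrade attempts

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Negotiated protocol:", tls.version())  # expect "TLSv1.3"
```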

It's also a good idea to harden remote access by using SSH v2 and disabling Telnet (which sends traffic in cleartext over TCP port 23) to frustrate common attacker pathways.

7. Prohibiting Macro-Enabled Spreadsheets

Macro-enabled Excel spreadsheets have long been a source of ransomware and other malicious code. If attackers can convince users to download and open these spreadsheets, criminals are then able to install malware droppers that in turn connect with command and control (C&C) servers to download ransomware.

In one recent campaign, attackers email unsuspecting users claiming they have been victims of credit card fraud. When victims call the listed number, they are directed to a malicious website and told to download a macro-enabled spreadsheet that installs a ransomware backdoor on their device. To reduce the risk of ransomware, it's a good idea to block macro-enabled spreadsheets, or disable macros outright, across both on-premises Microsoft Office and Office 365 deployments.
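
In most environments this is enforced centrally through Group Policy or Intune, but as an illustration, the sketch below sets the equivalent per-user Excel policy on a single Windows machine. The 16.0 version key and the blockcontentexecutionfrominternet value are assumptions that apply to Office 2016 and later; verify the exact path against your Office build before relying on it.

```python
# Minimal sketch for one Windows machine: set the Office policy that blocks
# macros in Excel files downloaded from the internet. Registry path is an
# assumption for Office 2016+ -- confirm for your deployment.
import winreg

POLICY_KEY = r"Software\Policies\Microsoft\Office\16.0\Excel\Security"

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, POLICY_KEY) as key:
    # 1 = block macro execution in files carrying the internet "Mark of the Web"
    winreg.SetValueEx(key, "blockcontentexecutionfrominternet", 0, winreg.REG_DWORD, 1)

print("Excel policy set: macros from the internet are blocked for this user.")
```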

8. Increasing Total Visibility

Attackers rely on misdirection and obfuscation to install ransomware and encrypt key files. As a result, visibility is critical for security teams. The more they can see, the better they can pinpoint potential weaknesses and identify vulnerabilities.

The challenge? Increasing hybrid cloud adoption naturally reduces visibility. With companies now using multiple private and public clouds to streamline operations, the sheer number of overlapping services and solutions in use makes it difficult to manage and monitor hybrid clouds at scale. To help address this issue, businesses need cloud security tools that deliver comprehensive, dynamic visualization, interpreting access controls across cloud-native and third-party firewalls to continuously validate security compliance.

9. Recognizing the Role of Due Diligence

No matter where your data is stored, you’re ultimately responsible for its protection. This is true regardless of the service you use. While your cloud provider may offer load balancing, availability, or storage services that help protect your data, due diligence around hybrid cloud security rests with data owners.

This means that if your provider suffers a breach, you bear responsibility if key security processes weren't followed. As a result, it's critical to vet any cloud security services provider before signing a service level agreement (SLA) and to ensure robust internal backups exist in case cloud providers are compromised or last-mile connection failures interrupt cloud access.

Controlling Ransomware Risks in Your Hybrid Cloud

Unfortunately, it’s not possible to eliminate ransomware in hybrid clouds. Instead, effective cybersecurity in the cloud needs to focus on controlling the risk that comes with distributed data environments.

This starts with the basics, such as ensuring robust encryption, turning off commonly used ports, and updating SMB and TLS versions. It also requires 2FA and MFA solutions, coupled with staff education, so employees recognize the impact that insecure passwords and risky practices, such as downloading compromised Excel spreadsheets, have on cloud security as a whole.

Finally, companies must recognize that ultimate responsibility for secure handling, storage, and use of data rests with them — and that the right cloud security services provider can make all the difference when it comes to reducing risk and enhancing defense in the hybrid cloud.

Want more info on ransomware? Check out this white paper on digital resilience and ransomware protection strategies.

Keep it Separate, Keep it Safe: How to Implement and Validate Cloud Network Segmentation

The distributed nature of cloud computing makes it a must-have for business, thanks to on-demand resource availability, network connectivity, and compute scalability.

But the cloud also introduces unique security challenges. First is a rapidly expanding attack surface: as the number of connected third-party services powered by open-source code and APIs increases, so does the risk of compromise. According to the 2021 IBM Security X-Force Cloud Threat Landscape Report, more than 1,200 of the 2,500 known cloud vulnerabilities had been found within the preceding 18 months. Additionally, 100 percent of penetration testing efforts by IBM X-Force teams found issues with cloud policies or passwords.

Cloud network segmentation offers a way for companies to reduce the risk of cloud threats. By dividing larger networks into smaller subnets — each of which can be managed individually — businesses can boost protection without sacrificing performance. Here’s how it works.

Why Is Cloud Network Segmentation Valuable to Network Security?

Cloud segmentation is part of larger defense-in-depth (DiD) security practices that look to lower total risk by creating multi-layered frameworks which help protect key data from compromise. DiD is built on the concept that there’s no such thing as a “perfect” security solution — since, with enough time and patience, attackers can compromise any protective process. By layering multiple security measures onto network access points or data storage locations, however, the effort required for compromise increases exponentially, in turn reducing total risk.

And by breaking larger cloud networks down into smaller subnets, the scale of necessary defense decreases, making it possible for teams to differentiate lower-risk subnets from those that need greater protection. Segmentation offers practical benefits for businesses.

Reduced Complexity

Segmenting larger cloud frameworks into smaller cloud networks allows teams to reduce the overall complexity that comes with managing cloud solutions at scale. Instead of trying to find one policy or process that works for cloud networks end-to-end — without introducing security risks to protected data or limiting users’ ease of access — teams can create purpose-built security policies for each network segment.

Increased Granular Control

Segmentation also offers more granular control over network defenses. For example, teams could choose to deploy next-generation firewall tools, such as those capable of discovering and analyzing specific user behaviors, or implement runtime application self-protection (RASP) functions on a case-by-case basis.

Improved Responsiveness

Smaller subnets additionally make it possible for IT professionals to identify and respond to security issues quickly. Here's why: given the geographically disparate nature of cloud services (one provider might house its servers locally, while another might be states or countries away), tracking down the root cause of a detected issue can feel like finding a digital needle in a virtual haystack. It's possible with advanced detection tools and techniques, but it can take days or weeks. Segmentation, meanwhile, allows teams to quickly identify and respond to issues on a segment-by-segment basis.

Enhanced Operations

Network segmentation also helps companies enhance operations by aligning with cloud security best practices such as zero trust. Under a zero trust model, user identity is never assumed; instead, it must be proven and verified through authentication. Segmentation makes it possible to apply zero trust where necessary — such as gaining access to network segments that store personally identifiable information (PII) or intellectual property (IP) — in turn helping streamline cloud access without introducing security risk.

How to Implement Network Segmentation

Network segmentation isn’t a new concept — companies have been leveraging physical segmentation of networks for years to reduce the impacts of a potential breach. As the name implies, this type of segmentation uses physical controls such as firewalls to create separate subnets and control traffic flows.

Cloud segmentation, meanwhile, comes with a bigger challenge: creating network segments across digital environments that may be separated by substantial physical distance. As a result, cloud segmentation was often deemed too complex to be practical, since the sheer number of unique cloud services, solutions, and environments, combined with the dynamic nature of cloud resources, made it seem impossible to effectively portion out and protect these subnets.

With the right strategy, however, it’s possible for businesses to both segment and secure their cloud networks. Here, logical rather than physical segmentation is vital. Using either virtual local area networks (VLANs) or more in-depth network addressing schemes, IT teams can create logical subnetworks across cloud services that behave as if they’re physically separate, in turn increasing overall defense.
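
As a concrete, deliberately simplified example, the sketch below uses boto3 to carve an AWS VPC into a web subnet and a data subnet, with a security group that only allows the web tier to reach the data tier on the database port. The CIDR blocks, names, and port are illustrative, not a recommended design.

```python
# Minimal AWS-flavored segmentation sketch: one VPC, two subnets, and a
# security group rule that only lets the web tier reach the data tier on 5432.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc_id = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]["VpcId"]
web_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.1.0/24")["Subnet"]["SubnetId"]
data_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.2.0/24")["Subnet"]["SubnetId"]

web_sg = ec2.create_security_group(
    GroupName="web-tier", Description="Web tier", VpcId=vpc_id)["GroupId"]
data_sg = ec2.create_security_group(
    GroupName="data-tier", Description="Data tier", VpcId=vpc_id)["GroupId"]

# Only members of the web tier security group may reach the data tier, and
# only on the database port; everything else is implicitly denied.
ec2.authorize_security_group_ingress(
    GroupId=data_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": web_sg}],
    }],
)

print(f"VPC {vpc_id}: web subnet {web_subnet}, data subnet {data_subnet}")
```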

Worth noting? Validation of these virtual networks is critical to ensure protective measures are working as intended. In practice, this means deploying tools and technologies that make it possible to visualize access across all network environments — local or otherwise — to understand network topology and explore traffic paths. Validation also requires the identification and remediation of issues as they arise. Consider a subnet that includes multiple cloud services. If even one of these services contains vulnerabilities to threats such as Log4j, the entire subnetwork could be at risk. Regular vulnerability scanning paired with active threat identification and remediation is critical to ensure segmentation delivers effective security.
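
One lightweight way to back up that validation is a segmentation smoke test: from a host in one segment, confirm that paths which should be blocked really are, and that intentionally allowed paths still work. The targets and expectations below are placeholders.

```python
# Minimal segmentation smoke test: compare actual reachability to intent.
import socket

# (host, port, should_be_reachable)
EXPECTATIONS = [
    ("10.20.2.15", 5432, True),    # web tier -> database: allowed
    ("10.20.2.15", 22,   False),   # web tier -> database SSH: should be blocked
    ("10.20.3.8",  445,  False),   # web tier -> finance file share: should be blocked
]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port, expected in EXPECTATIONS:
    actual = reachable(host, port)
    verdict = "OK" if actual == expected else "SEGMENTATION GAP"
    print(f"{host}:{port} reachable={actual} expected={expected} -> {verdict}")
```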

Closing the Cloud Security Gap with RedSeal

Cloud solutions offer the benefit of any time, anywhere access coupled with scalable, on-demand resources. But clouds also introduce unique security challenges around user access, data protection, and security threat monitoring.

As a result, protecting data in the cloud requires a defense-in-depth strategy that creates layers of protection rather than relying on a single service or technology to defend against evolving threats. Cloud network segmentation is one key component in this DiD strategy — by logically segmenting cloud services into smaller and more manageable networks, companies can reduce complexity, increase control and improve responsiveness.

But segmentation alone isn’t enough; enterprises also need the ability to visualize multiple micro-networks at scale, identify potential issues and quickly remediate concerns.

Ready to get started? Discover how RedSeal can help visualize, verify and validate your cloud network segmentation. Watch a Demo.

Do You Need a More Intelligent and Secure Network?

By the third quarter of 2021, the number of recorded network breaches had already exceeded the total breach volume of 2020 by 17 percent. What's more, the total cost of breaches continued to rise. Data from IBM and the Ponemon Institute found that the average cost of a data breach topped $4.24 million in 2021, the highest this figure has been in nearly two decades.

What does this mean? Businesses need better ways to react and respond to network security vulnerabilities. While this starts with basic security measures to mitigate the impact of issues as they occur, it also requires the creation of more intelligent networks capable of proactively detecting, identifying, and responding to threats.

Why Security Should Be a Top Priority for Every Organization

Effective security tools are now table stakes for organizations to ensure they meet evolving legislative standards around due diligence and data control. But these straightforward security measures aren't enough to address the evolving nature of information technology (IT) environments, from rapid cloud adoption to mobile-first workforces to the uptake of edge computing. The sheer volume and variety of corporate IT environments create ever-changing challenges for organizations.

Increasing complexity also plays a role in security. Driven by the rapid shift to remote work and underpinned by the unstable nature of return-to-work plans, security teams now face distributed and decentralized environments that naturally frustrate efforts to create consistent security policies.

Consider some of the biggest data breaches of recent years:

  • Android: 100 million records exposed. In May 2021, the records of more than 100 million Android users were exposed as a result of cloud misconfigurations. Personal information, including names, email addresses, dates of birth, location data, payment information, and passwords, was available to anyone who knew where to look.
  • Facebook: 553 million records exposed. Facebook records of more than 553 million users from 106 countries were leaked online. Leaked data included phone numbers and email addresses, which, according to security researcher Alon Gal, “would certainly lead to bad actors taking advantage of the data to perform social-engineering attacks [or] hacking attempts.”
  • LinkedIn: 700 million records exposed. Over 90 percent of LinkedIn members had their data compromised when it appeared for sale online. Information up for grabs included full names, phone numbers, physical addresses, email addresses, and details of linked social media accounts and user names.

Enterprises aren’t the only target for cybercriminals. As noted by Forbes, 43 percent of all cyberattack victims are small and midsize businesses (SMBs). While breaching a large enterprise can be a multimillion-dollar jackpot, SMBs are often easier targets that offer quick gains.

As a result, robust security must be a priority for every organization, regardless of size or industry.

Why Intelligence Matters for Effective Network Defense

While security is a solid starting point, it’s not enough in isolation. To handle evolving threats, companies need intelligent frameworks capable of identifying critical assets, pinpointing key vulnerabilities, and prioritizing security response. This intelligence-led approach is essential to defend IT environments now underpinned by interconnected devices, multiple cloud frameworks, and expanding edge services.

Consider that 92 percent of companies now leverage a multi-cloud approach to maximize efficiency and drive return on investment (ROI). Using multiple clouds offers a way for companies to pinpoint — and pay for — the specific solutions and services they need to achieve business aims. However, ensuring security across multiple cloud touch points rapidly becomes complex, especially as these clouds share and modify data in real-time.

What’s the best-case scenario during an attack? Compromise in one cloud hampers the efficacy of others but poses no substantive risk. And the worst case? Attacks on primary cloud services lead to successive service failures and significant downtime.

To address the challenges of expanding IT environments, companies must take an intelligence-led security approach. In practice, this means deploying tools capable of autonomous action to help detect and report IT threats, combined with robust data collection and analysis to help pinpoint root causes, rather than simply solving for symptoms.

How to Increase Your Network Intelligence and Security

While there’s no one-size-fits-all approach to increasing network intelligence and security, four functional approaches can help reduce total risk and boost your protective potential.

  1. Comprehensive Cloud Asset Identification: As cloud environments become more complex, so does the risk of asset blind spots that allow malicious actors to infiltrate networks without detection. Robust asset identification across all cloud services, from private clouds to public platforms such as AWS, Azure, and Google Cloud, is critical to limit overall risk (a minimal inventory sketch follows this list).
  2. Complete Network Visualization and Access Management: Sight drives better security. If you can see what’s on your network and how it all connects, you can better identify where potential threats may occur. As a result, companies must deploy tools that offer complete visibility across all network environments and provide robust access control to ensure the right people have access to the right resources.
  3. Consistent Network Compliance: Today’s organizations must follow standards such as the Payment Card Industry Data Security Standard (PCI DSS) and the Cybersecurity Maturity Model Certification (CMMC), along with legislation including the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA). Adhering to these standards and mandates is essential to demonstrate due diligence and protect your organization against penalties or legal action if security breaches do occur.
  4. Critical Vulnerability Prioritization: The scope and scale of new attack vectors make security triage a priority. End-to-end assessment of potential network risks based on exposure and access can help your teams prioritize vulnerabilities and design effective response frameworks.
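
To make step 1 concrete, the sketch below inventories EC2 instances across every enabled AWS region with boto3 so nothing sits in a blind spot. Equivalent inventories would be needed for Azure, Google Cloud, and any private clouds; the output fields are illustrative, and pagination is omitted for brevity.

```python
# Minimal asset-identification sketch for a single provider: list EC2
# instances in every enabled AWS region (pagination omitted for brevity).
import boto3

def inventory_ec2():
    regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]
    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        for reservation in ec2.describe_instances()["Reservations"]:
            for instance in reservation["Instances"]:
                yield {
                    "region": region,
                    "id": instance["InstanceId"],
                    "state": instance["State"]["Name"],
                    "private_ip": instance.get("PrivateIpAddress"),
                    "public_ip": instance.get("PublicIpAddress"),
                }

for asset in inventory_ec2():
    print(asset)
```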

Closing the Security Gap

No matter your business size, specialization, or industry, you need a more secure and intelligent network. Aided by increasingly complex IT environments and armed with evolving attack vectors, malicious actors are finding and exploiting new ways to compromise critical functions. An intelligent response is now critical to maintain user confidence, capture key security data, and protect your network.

RedSeal can help you close the security gap with an adaptable and intelligent approach to network security. From cloud security frameworks to robust network compliance solutions, access and visibility tools, and critical vulnerability prioritization, we have the technology tools and expertise to help your team build a reliable and responsive security framework.

Increase intelligence, navigate network security challenges and reduce real-life risks with RedSeal. Let’s get started.