
HIMSS Roundup: What’s Worrying Healthcare Organizations?

Held from March 14 to 18 in Orlando, Florida, the HIMSS 22 Global Health Conference and Exhibition took aim at some of the biggest opportunities and challenges facing healthcare organizations this year.

While businesses are taking their own paths to post-pandemic operations, both the content of sessions and conversations with attendees revealed three common sources of concern: compliance operations, the Internet of Healthcare Things (IoHT), and patient access portals.

Top-of-Mind Issues in Healthcare Security

For the past few years, effective healthcare security has been inextricably tied to ransomware risk reduction and remediation. It makes sense: According to Josh Corman, head of the Cybersecurity and Infrastructure Security Agency (CISA) COVID-19 task force, “Hospitals’ systems were already fragile before the pandemic. Then the ransomware attacks became more varied, more aggressive, and with higher payment demands.” As a result, ransomware has become a top priority for healthcare organizations looking to protect patient data and limit operational impacts.

Conversations with healthcare and IT professionals at HIMSS 22, however, made it clear that what worries organizations is changing. To ensure effective security, responses must evolve as well.

Top Issue #1: Compliance with Evolving Government Regulations and Security Mandates

Not surprisingly, many HIMSS attendees expressed concern about evolving government regulations and security mandates.

Attendees spoke to issues around familiar mandates such as the Health Insurance Portability and Accountability Act (HIPAA) and the Payment Card Industry Data Security Standard (PCI DSS). Many were worried about their ability to understand the full scope of software and services on their networks, along with the number and nature of connections across those networks. Mergers and acquisitions (M&A) were also mentioned as potential failure points for compliance: as healthcare markets begin to stabilize, M&A volumes are increasing, in turn creating IT systems integration challenges that can produce complex, cumbersome overlaps or, more worrisome, gaps in security.

When it comes to security mandates, meanwhile, many organizations understand the need for improved policies and procedures to help mitigate risk but struggle to make the shift from theory to action. Consider a recent survey which found that 74 percent of US healthcare organizations still lack comprehensive software supply chain risk management policies, despite directives such as President Biden’s May 2021 executive order on improving national cybersecurity in part through the use of zero trust frameworks, multi-factor authentication policies, and software bill of materials (SBOM) implementation.

The result is a growing concern for healthcare organizations. If regular audits conducted by regulatory bodies identify non-compliance, companies could face fines or sanctions. Consider the failure of a PCI DSS audit. If it’s determined that organizations aren’t effectively safeguarding patients’ financial data, they could lose the ability to process credit cards until the problem is addressed.

Top Issue #2: The Internet of Healthcare Things (IoHT)

IoHT adoption is on the rise. These connected devices, which include everything from patient wearables to hospital beds to lights and sensors, provide a steady stream of actionable information that can help organizations make better decisions and deliver improved care. But more devices mean more potential access points for attackers, in turn putting patient data at risk.

Effectively managing the growing IoHT landscape requires isolation and segmentation—the ability to pinpoint potential device risks and take action before attackers can exploit vulnerabilities. There’s also a growing need to understand the “blast radius” associated with IoHT if attackers are able to compromise a digitally-connected device and move laterally across healthcare networks to access patient, staff, or operational information. From data held for ransom to information exfiltrated and sold to the highest bidder, IoHT networks that lack visibility significantly increase the chance of compromise.

The Internet of Healthcare Things also introduces the challenge of incident detection. As noted by HIPAA Journal, while the average time to detect a healthcare breach has been steadily falling over the past few years, it still takes organizations 132 days on average to discover they’ve been compromised.

Top Issue #3: Patient Access Portals

Patient access portals are a key component in the “next normal” of healthcare. Along with telehealth initiatives, these portals make it possible for patients to access medical information on-demand, anywhere, and anytime. They also allow medical staff to find key patient data, enter new information, and identify patterns in symptoms or behavior that could help inform a diagnosis.

But these portals also represent a growing security concern: unauthorized access. If the wrong person gains access to patient records, healthcare companies could find themselves exposed to both legal and regulatory risks. In part, this access risk stems from the overlap of legacy and cloud-based technologies. Many organizations still leverage outdated servers or on-premises systems while simultaneously adopting the cloud for new workloads. The result is a patchwork of overlapping and sometimes conflicting access policies, which can frustrate legitimate users and create avenues of compromise for attackers.

Addressing Today’s Pressing Healthcare Security Concerns

While meeting regulatory obligations, managing IoHT devices, and monitoring patient portals all come with unique security concerns, effectively managing all three starts with a common thread: visibility.

If healthcare organizations can’t see what’s happening on their network, they can’t make informed decisions when it comes to improving overall security. Consider IoHT. As the number of connected devices grows, so does the overall attack surface. With more devices on the network, attackers have more potential points of access to exploit, in turn increasing total risk. Complete visibility helps reduce this risk.

By deploying solutions that make it possible to view healthcare networks as a comprehensive, dynamic visualization, it’s possible for companies to validate network and device inventories, ensure critical resources aren’t exposed to public-facing connections, and prioritize detected vulnerabilities based on their network location and potential access risk. Additional tools can then be layered onto existing security frameworks to address specific concerns or eliminate critical vulnerabilities, in turn providing greater control over healthcare networks at scale.

The automation of key tasks—such as regular, internal IT audits—is also critical to improving healthcare security. Given the sheer number of devices and connections across healthcare networks, even experienced IT teams aren’t able to keep pace with changing conditions. Tools capable of automating alert capture and performing rudimentary analysis to determine if alerts are false positives or must be escalated for remediation can significantly reduce complexity while increasing overall security.
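To make the alert-triage idea concrete, here is a minimal Python sketch of rule-based alert classification. The rule names, thresholds, and IP addresses are hypothetical, not taken from any specific product; real triage engines apply far richer context.

```python
# Minimal sketch of automated alert triage (illustrative only; the rules,
# thresholds, and addresses below are hypothetical examples).

KNOWN_BENIGN_SOURCES = {"10.0.0.5", "10.0.0.6"}  # e.g., vetted internal scanners

def triage(alert):
    """Classify an alert as 'suppress' (likely false positive) or 'escalate'."""
    # Alerts generated by vetted internal scanners are routine noise.
    if alert.get("source_ip") in KNOWN_BENIGN_SOURCES:
        return "suppress"
    # Low-severity alerts that match no known asset are likely stale inventory.
    if alert.get("severity", 0) < 3 and not alert.get("asset_known", True):
        return "suppress"
    # Everything else goes to a human analyst.
    return "escalate"

alerts = [
    {"source_ip": "10.0.0.5", "severity": 1},
    {"source_ip": "203.0.113.9", "severity": 8, "asset_known": True},
]
print([triage(a) for a in alerts])  # ['suppress', 'escalate']
```

Even a filter this simple illustrates the payoff: analysts only see the alerts that survive the suppression rules, which is what makes triage tractable at network scale.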

Handling Healthcare Worries

Peace of mind for healthcare organizations is hard to come by—and even harder to maintain. Evolving concerns around compliance, IoHT, and patient portals present new challenges that require new approaches to effectively monitor, manage and mitigate risks.

Thankfully, improving visibility offers a common starting point to help solve these security challenges. Armed with improved knowledge of network operations, healthcare companies are better equipped to pinpoint potential threats, take appropriate action, and reduce their total risk.

See what matters most: Get complete network visibility with RedSeal. 

Zero Trust: Back to Basics

The Executive Order on Improving the Nation’s Cybersecurity in 2021 requires agencies to move towards zero trust in a meaningful way as part of modernizing infrastructure. Yet, federal agencies typically find it challenging to implement zero trust. While fine in theory, the challenge often lies in the legacy systems and on-premises networks that exist with tendrils reaching into multiple locations, including many which are unknown.

Identity management and authentication tools are an important part of network security, but before you can truly implement zero trust, you need an understanding of your entire infrastructure. Zero trust isn’t just about identity. It’s also about connectivity.

Take a quick detour here. Let’s say you’re driving a tractor-trailer hauling an oversized load. You ask Google Maps for the fastest route and it plots one out for you. However, you find that the route includes a one-lane dirt road your rig can’t fit down. So you go back to the mapping software and look for alternates. Depending on how much time you have, the number of alternative pathways to your final destination is nearly endless.

Computer security needs to think this way, too. Even if you’ve blocked the path for threat actors in one connection, how else could they get to their destination? While you may think traffic only flows one way on your network, most organizations find there are multiple pathways they never knew (or even thought) about.

To put in efficient security controls, you need to go back to basics with zero trust. That starts with understanding every device, application, and connection on your infrastructure.

Zero Trust Embodies Fundamental Best-Practice Security Concepts

Zero trust returns to the basics of good cybersecurity by assuming there is no traditional network edge. Whether it’s local, in the cloud, or any combination of hybrid resources across your infrastructure, you need a security framework that requires everyone touching your resources to be authenticated, authorized, and continuously validated.

By providing a balance between security and usability, zero trust makes it more difficult for attackers to compromise your network and access data. While providing users with authorized access to get their work done, zero-trust frameworks prevent unauthorized access and lateral movement.

By properly segmenting your network and requiring authentication at each stage, you can limit the damage even if someone does get inside your network. However, this requires a firm understanding of every device and application that are part of your infrastructure as well as your users.

Putting Zero Trust to Work

The National Institute of Standards and Technology (NIST) Special Publication 800-207, Zero Trust Architecture, provides the conceptual framework for zero trust that government agencies need to adopt, while NIST’s Risk Management Framework (SP 800-37) lays out the process for getting there.

The risk management framework has seven steps:

  1. Prepare: mapping and analyzing the network
  2. Categorize: assess risk at each stage and prioritize
  3. Select: determine appropriate controls
  4. Implement: deploy zero trust solutions
  5. Assess: ensure solutions and policies are operating as intended
  6. Authorize: certify systems and workflow are ready for operation
  7. Monitor: provide continuous monitoring of security posture

NIST’s subsequent draft white paper on planning for a zero-trust architecture reinforces the crucial first step: mapping the attack surface and identifying the key parts that could be targeted by a threat actor.

Instituting zero trust security requires detailed analysis and information gathering on devices, applications, connectivity, and users. Only when you understand how data moves through your network and all the different ways it can move through your network can you implement segmentation and zero trust.

Analysts should identify options to streamline processes, consolidate tools and applications, and sunset any vulnerable devices or access points. This includes defunct user accounts and any non-compliant resources.

Use Advanced Technology to Help You Perform Network Analysis

Trying to map your network manually is nearly impossible. No matter how many people you task to help and how long you have, things will get missed. Every device, appliance, configuration, and connection has to be analyzed. Third parties and connections to outside sources need to be evaluated. And while you’re conducting this inventory, the network is in a constant state of change, which makes it even easier to miss key components.

Yet, this inventory is the foundation for implementing zero trust. If you miss something, you leave security gaps within your infrastructure.

The right network mapping software for government agencies can automate this process by going out and gathering the information for you. Network mapping analysis can calculate every possible pathway through the network, taking into account network address translation (NAT) and load balancing. During this stage, most organizations uncover a surprising number of previously unknown pathways. Each connection point needs to be assessed for need and whether it can be closed to reduce attack surfaces.
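The “every possible pathway” idea can be sketched with a few lines of Python: enumerate all loop-free paths between two hosts in a toy network graph. The topology below is invented for illustration; real tools derive reachability from actual device configurations rather than a hand-written adjacency list.

```python
# Sketch: enumerate all simple (loop-free) paths between two hosts in a
# toy network graph. Topology is hypothetical, for illustration only.

def all_paths(graph, start, end, path=None):
    """Depth-first enumeration of every simple path from start to end."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # never revisit a node, so no loops
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

network = {
    "internet": ["firewall"],
    "firewall": ["dmz", "vpn"],   # the VPN leg is the "road you forgot about"
    "dmz": ["app-server"],
    "vpn": ["app-server"],
    "app-server": ["database"],
}

for p in all_paths(network, "internet", "database"):
    print(" -> ".join(p))
```

Even in this five-node example there are two distinct routes to the database; on a production network with thousands of nodes, the count of unexpected pathways is what typically surprises security teams.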

Automated network mapping will also provide an inventory of all the gear on your network and IP space in addition to your cloud and software-defined network (SDN) assets. Zero trust requires you to identify who and what can access your network, and who should have that access.

Once you have conducted this exhaustive inventory, you can then begin to implement the zero-trust policies with confidence.

Since your network is in a constant state of evolution with new users, devices, applications, and connectivity being added, changed, or revised, you also need continuous monitoring of your network infrastructure to ensure changes remain compliant with your security policies.

Back to the Basics

The conversation about zero trust often focuses narrowly on identity. Equally important are device inventory and connectivity. The underlying goal of zero trust is allowing only specific authorized individuals to access specific things on specific devices. Before you can put in place adequate security controls, you need to know about all of the devices and all the connections.

RedSeal provides network mapping, inventory, and mission-critical security and compliance services for government agencies and businesses, and is Common Criteria certified. To implement a zero-trust framework successfully, you need to understand the challenges and the strategies that address them.

Download our Zero Trust Guide today to get started.

Zero Trust: Shift Back to Need to Know

Cyberattacks on government agencies are unrelenting. Attacks on government, military, and contractors rose by more than 47% in 2021 and could continue to climb. Today’s cybercriminals, threat actors, and state-sponsored hackers have become more sophisticated and continue to target government data and resources.

The recent Executive Order on Improving the Nation’s Cybersecurity directs federal agencies to take decisive action and work with the private sector to improve cybersecurity. The EO puts it bluntly:

“The United States faces persistent and increasingly sophisticated malicious cyber campaigns that threaten the public sector, the private sector, and ultimately the American people’s security and privacy. The Federal Government must improve its efforts to identify, deter, protect against, detect, and respond to these actions and actors.”

The Office of Management and Budget (OMB) also issued a memorandum for agencies to improve investigative and remediation capabilities, including:

  • Centralized access and visibility
  • Better-defined logging, log retention, and log management
  • Increased information sharing
  • Accelerated incident response efforts
  • More effective defense of information

In light of continued cyber-attacks, the EO requires bold and significant investments to protect and secure systems and data. This represents a cultural shift from the somewhat relaxed security environment that developed over time as legacy systems grew and were gradually migrated to cloud resources.

Security concerns only grew with the rapid shift to remote work. Agencies had to scramble to redefine infrastructure to accommodate remote workers, which significantly increased the attack surface.

For governmental agencies, hardening security requires a return to “need to know” using zero trust security protocols.

Zero Trust Security: What Is It?

Zero trust is a security framework that requires authentication and authorization for all users on the network. Traditionally, networks have focused on security at the edge, managing access points. Once attackers penetrated that perimeter, however, they could access additional network resources, escalate privileges, and expand the damage they caused.

Zero trust requires users to be re-authorized at every connection, preventing unauthorized access and lateral movement across the network. Resources stay off-limits to everyone except those with a need to know and a need to access.

Current Cloud Security Measures Can Fall Short

The rising adoption of cloud services has changed the makeup of most agency infrastructures. Lax cloud security measures can expose organizations to risk and harm, and incremental improvements are not keeping pace.

Factors that leave openings for threat actors include:

  • Gaps in information technology (IT) expertise and challenges in hiring
  • Problems with cloud migration
  • Unsecured application programming interfaces (APIs)
  • Vulnerabilities in third-party providers
  • The complexity of security in multi-cloud and hybrid cloud environments

Zero trust is an important weapon in the battle against cyber threats, yet there has not been universal adoption. The recent Cost of a Data Breach report from the Ponemon Institute reports that only 35% of organizations employ a zero-trust framework as part of their cybersecurity protocols. This leaves agencies and businesses open to attack.

Besides protecting networks and data, there’s also a significant financial benefit to deploying zero trust. While breaches can still occur even when zero trust is in place, the average cost to mitigate a breach for organizations with a zero-trust framework deployed was $1.76 million less than for those without one.

Zero Trust and the Return to Need to Know

Intelligence agencies have employed the practice of “need to know” for years. Sensitive and confidential data is restricted to only those who have a specific need for access. In cybersecurity, zero trust includes the concept of least privilege, which only allows users access to the information and resources they need to do their jobs.

Contrast zero trust with the edge security practices in wide use today. Edge security is like putting a security perimeter around the outside of your home or building: once inside the perimeter, visitors are free to move from room to room. The principle of least privilege, by contrast, only gives them access to the specific rooms, and the things within each room, that they have a need to know.

With zero trust in place, visitors won’t even be able to see the room unless they are authorized for access.

Building a Zero Trust Architecture

Building a zero-trust architecture requires an understanding of your infrastructure, applications, and users. By mapping your network, you can see how devices and applications connect and pathways where security is needed to prevent unauthorized access.

A zero-trust approach requires organizations to:

  • Verify and authenticate every interaction, including user identity, location, device integrity, workload, and data classification
  • Use the principle of least privilege using just-in-time and just-enough-access (JIT/JEA) with adaptive risk policies
  • Remove implicit trust when devices or applications talk to each other along with instituting robust device access control
  • Assume breach and employ micro-segmentation to prevent lateral movement on a need-to-know basis
  • Implement proactive threat prevention, detection, and mitigation

Mitigating Insider Threats

Zero trust also helps mitigate threats from insiders by restricting access to non-authorized resources and logging activity within the network.

When we think about data breaches, we generally think about threat actors from outside our network, but there’s also a significant threat from insiders. The 2021 Data Breach Investigations Report (DBIR) from Verizon suggests that as many as 22% of all data breaches occur from insiders.

According to the Government Accountability Office (GAO), risks to IT systems are increasing, including insider threats from witting and unwitting employees.

Managing Complex Network Environments

As organizations have grown, network environments have become incredibly complex. You need a deep understanding of all of the appliances, applications, devices, public cloud, private cloud, multi-cloud, and on-premises resources and how they are connected.

RedSeal automatically maps your infrastructure and provides a comprehensive, dynamic visualization. With RedSeal, you can identify any exposed resources in the cloud, visualize access across your network, demonstrate network compliance and configuration standards, and prioritize vulnerability for mitigation.

For more information about implementing zero trust for your organization, download the complimentary RedSeal Guide: Tips for Implementing Zero Trust. Learn about the challenges and get insights from the security professionals at RedSeal.

Ransomware Realities: Exploring the Risks to Hybrid Cloud Solutions

Hybrid cloud frameworks offer a way for companies to combine the scalability of public clouds with the security and control of their private counterparts. Pandemic pressures have accelerated hybrid adoption. According to recent survey data, 61 percent of companies currently use or pilot hybrid clouds, while 33 percent have plans to implement hybrid options in the next two years. Meanwhile, research firm Gartner points to growing cloud ubiquity across enterprise environments driven by hybrid, multi-cloud, and edge environments.

Along with increased uptake, however, is a commensurate uptick in ransomware risks. With attackers leveraging the distributed nature of remote work environments to expand their attack impact, organizations must recognize potential challenges and develop frameworks to mitigate ransomware threats effectively.

What Are the Ransomware Risks of a Hybrid Cloud Environment?

Because hybrid clouds rely on a combination of public and private solutions, the overall ransomware attack surface is effectively doubled.

Consider the recent ransomware attack on payroll provider Kronos. As noted by CPO Magazine, after details of the Log4Shell vulnerability in the Java logging library Log4j were made public on December 9th, hundreds of thousands of attacks were launched worldwide. One likely victim was Kronos, whose private cloud was forced offline by a ransomware attack, leading to weeks of remediation. Private clouds are also under threat as attacks shift from outside to inside: even a single disgruntled employee with administrative access could wreak havoc on internal clouds by simply ignoring email protection warnings or clicking through on malicious links.

Public cloud providers, including Amazon Web Services (AWS), Google Cloud, and Azure, have begun publishing articles and offering resources to help mitigate the impact of ransomware in the cloud. While large-scale public cloud services have yet to report a major ransomware attack, it’s a matter of when, not if, one occurs.

In practice, successful attacks on public or private clouds can lead to severe consequences.

Systems Downtime

Ransomware attackers encrypt key files and demand payment for their release. As a result, the first line of defense against an escalating attack is shutting down affected systems to focus on remediation. Cybercriminals may also pair ransomware with distributed denial of service (DDoS) attacks, which force systems offline by overloading them with traffic and resource requests even as ransomware is deployed behind network lines.

Depending on the scale and severity of the attack, it could take days or weeks for IT teams to discover the full extent of the damage, remediate the issue and bring systems back online.

Monetary Loss

As noted by Dark Reading, the average ransomware payout hit $570,000 in the first quarter of 2021, up more than $250,000 from the 2020 average of $312,000.

But initial payouts are just the start of the problem. Even if attackers return control of critical files, companies must still spend time and money identifying the vulnerabilities that made ransomware attacks possible in the first place. Then, they must spend even more money remediating these issues and testing their new security frameworks.

There’s also the potential risk of costly data loss if enterprises choose not to pay and instead look to decrypt data using available security tools — or if they pay up and attackers aren’t true to their word. If security solutions aren’t able to remove ransomware before the deadline or criminals can’t (or won’t) decrypt data, companies are left with the daunting and expensive task of building data stores back up from scratch.

Reputation Damage

Eighty-eight percent of customers won’t do business with a brand they don’t trust to handle their data. Ransomware is a red flag when it comes to trust. Even if such attacks are inevitable, customers want to know that companies took every possible precaution to prevent data loss and need the confidence that comes with clear communication about the next steps.

As a result, the loss of data due to ransomware or the inability to articulate how information recovery will occur and how data will be better defended going forward can damage organizations. After a ransomware attack, businesses often face negative impacts on reputation, reduced customer confidence, and revenue losses.

Legal Challenges

Evolving regulations such as the General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), Payment Card Industry Data Security Standard (PCI DSS), and the Health Insurance Portability and Accountability Act (HIPAA) all include provisions around the safe collection, storage, and use of data. Failure to comply with these regulations can lead to fines and legal challenges if ransomware attacks are successful.

Hybrid Cloud Security Measures

While it’s not possible to eliminate ransomware in hybrid cloud environments, there are steps you can take to reduce overall risk.

1. Deploying Offline Backups

If ransomware attacks are successful, malicious code can encrypt any connected devices. These include physically attached devices such as universal serial bus (USB) sticks or hard drives along with any online, cloud-connected drives across both public and private clouds.

To help mitigate this risk, it’s worth deploying secure offline backups that are not connected to internal hosts or external data sources once backup processes are complete. Consider a private cloud backup. To reduce ransomware impact, companies are best served by establishing a data backup schedule that includes provisions for device connection, data transfer, and device disconnection once the backup is complete. By utilizing multiple offline devices that are regularly backed up and then disconnected, businesses can ensure that data remains available even if primary systems are compromised by ransomware.
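The archive-verify-disconnect cycle described above can be sketched in Python. This is an illustrative outline only: the paths are placeholders, and the final "disconnect" step is physical or operational, so it appears here only as a comment.

```python
# Sketch of one backup cycle: archive the data, record a checksum for later
# integrity verification, then (operationally) detach the backup medium.
# Paths and naming scheme are hypothetical examples.

import hashlib
import shutil
import time
from pathlib import Path

def backup(source_dir, backup_root):
    """Archive source_dir under backup_root and write a companion checksum file."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(
        str(Path(backup_root) / f"backup-{stamp}"), "gztar", source_dir
    )
    # Record a checksum so a later restore can detect tampering or corruption.
    digest = hashlib.sha256(Path(archive).read_bytes()).hexdigest()
    Path(archive + ".sha256").write_text(digest)
    # Next operational step (not code): unmount/detach the backup device so
    # ransomware on a compromised host cannot reach the archive.
    return Path(archive)
```

The checksum file matters because ransomware increasingly targets backups themselves; verifying the hash before a restore confirms the archive was not altered while connected.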

2. Implementing Two-Factor Authentication

Frustrating attacker efforts to gain network access can significantly reduce the risk of ransomware. Best bet? Start with two-factor authentication (2FA). While it remains relatively easy for attackers to compromise passwords using both social engineering and brute-force attacks, implementing 2FA solutions that leverage one-time text codes or biometric data can help protect networks even if account credentials are breached. What’s more, failed 2FA checks that accompany correct account information can signal to information technology (IT) teams that attack efforts may be underway, in turn allowing them to respond and remediate threats proactively.

Even more protection is available through multi-factor authentication (MFA) strategies that combine text codes and biometrics to frustrate attackers further. It’s also vital to create strong password policies that mandate regular password changes and include rules around required password length and the use of special characters or symbols to increase overall protection. While passwords remain one of the least secure forms of data defense, they’re not going anywhere. As a result, companies must address common password problems before they lead to compromise.
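The one-time text codes mentioned above are usually time-based one-time passwords (TOTP, RFC 6238). A minimal standard-library sketch shows the mechanism; production systems should use a vetted library and per-user provisioned secrets, and the secret below is the well-known RFC test value, not something to deploy.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
# For illustration: real 2FA deployments should use a vetted library.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive the time-based one-time code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test secret "12345678901234567890", base32-encoded:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at=59))  # 287082 (matches the RFC reference value)
```

Because the code changes every 30 seconds and is derived from a shared secret the attacker never sees, a stolen password alone is no longer enough to authenticate.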

3. Disabling Well-Known Ports

While attackers are constantly developing new methods and leveraging newly-discovered vulnerabilities to distribute ransomware code, they’re also creatures of habit. If specific attack vectors continue to see success, they won’t abandon them simply because something new comes along.

Case in point: Ports connected to cloud services, such as ports 137-139, 445, and 3389, are common attack targets. By disabling these ports, businesses can remove some of the most-used ransomware distribution pathways, in turn forcing attackers to take more circuitous routes if they want to compromise and infect public and private cloud systems.
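A quick way to verify those ports are actually closed is a TCP connect check against hosts you administer. The sketch below is a simplified audit, not a full scanner: the host, timeout, and port list are examples, and ports 137-138 are normally UDP (NetBIOS), so a TCP probe only approximates their exposure.

```python
# Sketch: audit the commonly targeted ports named above on hosts you own.
# Host names, timeout, and port list are illustrative examples.

import socket

RANSOMWARE_TARGET_PORTS = [137, 138, 139, 445, 3389]  # NetBIOS, SMB, RDP

def open_ports(host, ports, timeout=0.5):
    """Return the subset of TCP ports on host that accept a connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Example usage (against a host you administer):
# print(open_ports("10.0.0.12", RANSOMWARE_TARGET_PORTS))
```

Any port the audit reports open is a candidate for closure at the firewall or host level, forcing attackers onto less convenient routes.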

4. Turning off RDP

The remote desktop protocol (RDP) allows users to connect with another computer over a network connection and provides a graphical user interface to help streamline this process. The problem? Attackers can exploit insecure RDP deployments — which typically use transmission control protocol (TCP) port 3389 and UDP port 3389 — to access user desktops and, in turn, move laterally through corporate systems until they find and encrypt critical files.

While it’s possible to protect RDP with increased security measures, the collaborative nature of cloud deployments often makes it simpler to disable RDP up-front to reduce total risk.

5. Updating to SMB 3.1.1

The Server Message Block (SMB) protocol provides a way for client applications to read and write files and request server resources. Originally introduced for the disk operating system (DOS) as SMB 1.0, SMB has undergone multiple iterations, with the most current version being 3.1.1. To help protect cloud services from potential ransomware attacks, businesses must upgrade to version 3.1.1 and ensure that version 1.0 is fully disabled. Failure to do so could allow hackers to reactivate version 1.0 and exploit the SMBv1 flaws used by WannaCry to compromise systems and install ransomware.

6. Ensuring Encryption is Used for All Sessions

Encryption helps reduce the risk of compromise by making it harder for attackers to discover and exploit critical resources. Ideally, companies should use transport layer security (TLS) v1.3 for maximum protection. Much like SMB, it’s also important to disable TLS 1.0. Why? Because if TLS v1.0 is enabled, attackers could force your server to negotiate down to TLS v1.0, which could, in turn, allow an attack.

It’s also a good idea to boost encryption efficacy by using SSHv2 in place of Telnet (and disabling Telnet’s port 23 entirely) to frustrate common attacker pathways.
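At the application level, enforcing a TLS floor looks like the Python sketch below. This shows Python's `ssl` module only; web servers such as nginx or IIS enforce the same policy through their own configuration, and TLS 1.2 is used here as the floor on the assumption that some clients cannot yet speak 1.3.

```python
# Sketch: enforce a minimum TLS version for a server-side context so
# downgrade attempts to TLS 1.0/1.1 are refused during the handshake.

import ssl

context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)  # server side
context.minimum_version = ssl.TLSVersion.TLSv1_2  # TLS 1.0 and 1.1 rejected
# Strictest option, once all clients support it:
# context.minimum_version = ssl.TLSVersion.TLSv1_3

print(context.minimum_version)
```

With the floor set, a client (or attacker) proposing TLS 1.0 simply fails the handshake, which closes the negotiate-down path described above.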

7. Prohibiting Macro-Enabled Spreadsheets

Macro-enabled Excel spreadsheets have long been a source of ransomware and other malicious code. If attackers can convince users to download and open these spreadsheets, criminals are then able to install malware droppers that in turn connect with command and control (C&C) servers to download ransomware.

Recent efforts see attackers sending emails to unsuspecting users indicating they’ve been the victims of credit card fraud. Customers call in, are directed to access a malicious website, and then download a macro-enabled spreadsheet that creates a ransomware backdoor on their device. To reduce the risk of ransomware, it’s a good idea to disable the use of macro-enabled spreadsheets across both in-house Microsoft Office and Office 365 deployments.
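A mail-gateway-style attachment filter for macro-enabled Office files can be sketched in a few lines. The extension list is illustrative and incomplete; real gateways also inspect file contents, since an attacker can trivially rename a file.

```python
# Sketch: flag macro-enabled Office attachments by extension.
# The blocklist is an illustrative example, not an exhaustive policy.

MACRO_ENABLED_EXTS = {".xlsm", ".xlsb", ".docm", ".pptm"}

def is_blocked(filename):
    """Return True if the attachment name matches a macro-enabled extension."""
    name = filename.lower()
    return any(name.endswith(ext) for ext in MACRO_ENABLED_EXTS)

print(is_blocked("invoice.xlsm"))  # True: macro-enabled workbook, quarantine it
print(is_blocked("report.xlsx"))   # False: standard workbook passes through
```

Pairing a filter like this with tenant-wide macro-blocking policies in Office deployments removes the dropper stage of the attack chain described above.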

8. Increasing Total Visibility

Attackers rely on misdirection and obfuscation to install ransomware and encrypt key files. As a result, visibility is critical for security teams. The more they can see, the better they can pinpoint potential weaknesses and identify vulnerabilities.

The challenge? Increasing hybrid cloud adoption naturally leads to reduced visibility. With companies now using multiple private and public clouds to streamline operations, the sheer number of overlapping services and solutions in use makes it difficult to manage and monitor hybrid clouds at scale. To help address this issue, businesses need cloud security tools capable of delivering comprehensive and dynamic visualization that continually interprets access controls across cloud-native and third-party firewalls to help continuously validate security compliance.

9. Recognizing the Role of Due Diligence

No matter where your data is stored, you’re ultimately responsible for its protection. This is true regardless of the service you use. While your cloud provider may offer load balancing, availability, or storage services that help protect your data, due diligence around hybrid cloud security rests with data owners.

This means that if your provider suffers a breach, you bear responsibility if key security processes weren’t followed. As a result, it’s critical to vet any cloud security services provider before signing a service level agreement (SLA) and ensure robust internal backups exist if cloud providers are compromised, or last-mile connection failures interrupt cloud access.

Controlling Ransomware Risks in Your Hybrid Cloud

Unfortunately, it’s not possible to eliminate ransomware in hybrid clouds. Instead, effective cybersecurity in the cloud needs to focus on controlling the risk that comes with distributed data environments.

This starts with the basics, such as ensuring robust encryption, turning off commonly used ports, and updating SMB and TLS software. It also requires the use of 2FA and MFA solutions coupled with staff education to ensure employees recognize the impact of insecure passwords and practices — such as downloading compromised Excel spreadsheets — on cloud security as a whole.

Finally, companies must recognize that ultimate responsibility for secure handling, storage, and use of data rests with them — and that the right cloud security services provider can make all the difference when it comes to reducing risk and enhancing defense in the hybrid cloud.

Want more info on ransomware? Check out this white paper on digital resilience and ransomware protection strategies.

Keep it Separate, Keep it Safe: How to Implement and Validate Cloud Network Segmentation

The distributed nature of cloud computing makes it a must-have for business, thanks to on-demand resource availability, network connectivity, and compute scalability.

But the cloud also introduces unique security challenges. First is a rapidly expanding attack surface: As the number of connected third-party services powered by open-source code and APIs increases, so does the risk of compromise. According to the 2021 IBM Security X-Force Cloud Threat Landscape Report, more than 1,200 of the 2,500 known cloud vulnerabilities had been found within the preceding 18 months. Additionally, 100 percent of penetration testing efforts by IBM X-Force teams found issues with cloud policies or passwords.

Cloud network segmentation offers a way for companies to reduce the risk of cloud threats. By dividing larger networks into smaller subnets — each of which can be managed individually — businesses can boost protection without sacrificing performance. Here’s how it works.

Why Is Cloud Network Segmentation Valuable to Network Security?

Cloud segmentation is part of larger defense-in-depth (DiD) security practices that look to lower total risk by creating multi-layered frameworks which help protect key data from compromise. DiD is built on the concept that there’s no such thing as a “perfect” security solution — since, with enough time and patience, attackers can compromise any protective process. By layering multiple security measures onto network access points or data storage locations, however, the effort required for compromise increases exponentially, in turn reducing total risk.

And by breaking larger cloud networks down into smaller subnets, the scale of necessary defense decreases, making it possible for teams to differentiate lower-risk subnets from those that need greater protection. Segmentation offers practical benefits for businesses.

Reduced Complexity

Segmenting larger cloud frameworks into smaller cloud networks allows teams to reduce the overall complexity that comes with managing cloud solutions at scale. Instead of trying to find one policy or process that works for cloud networks end-to-end — without introducing security risks to protected data or limiting users’ ease of access — teams can create purpose-built security policies for each network segment.

Increased Granular Control

Segmentation also offers more granular control over network defenses. For example, teams could choose to deploy next-generation firewall tools, such as those capable of discovering and analyzing specific user behaviors, or implement runtime application self-protection (RASP) functions on a case-by-case basis.

Improved Responsiveness

Smaller subnets additionally make it possible for IT professionals to identify and respond to security issues quickly. Here’s why: Given the geographically disparate nature of cloud services — one provider might house their servers locally, while another might be states or countries away — tracking down the root cause of detected issues becomes like finding a digital needle in a virtual haystack. While it’s possible using advanced detection tools and techniques, it could take days or weeks. Segmentation, meanwhile, allows teams to identify and respond to issues on a segment-by-segment basis quickly.

Enhanced Operations

Network segmentation also helps companies enhance operations by aligning with cloud security best practices such as zero trust. Under a zero trust model, user identity is never assumed; instead, it must be proven and verified through authentication. Segmentation makes it possible to apply zero trust where necessary — such as gaining access to network segments that store personally identifiable information (PII) or intellectual property (IP) — in turn helping streamline cloud access without introducing security risk.
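The idea can be sketched as a per-segment policy table: sensitive segments demand verified identity plus MFA, while lower-risk segments can allow standard authentication. The segment names and policy fields below are illustrative assumptions, not a prescribed schema.

```python
# Illustrative zero-trust policy table keyed by network segment.
# Segment names and fields are hypothetical examples.
SEGMENT_POLICY = {
    "pii-subnet":    {"require_mfa": True,  "max_session_min": 15},
    "ip-subnet":     {"require_mfa": True,  "max_session_min": 30},
    "public-webapp": {"require_mfa": False, "max_session_min": 480},
}

def access_allowed(segment: str, authenticated: bool, mfa_passed: bool) -> bool:
    """Never assume identity: unknown segments and unverified users are denied."""
    policy = SEGMENT_POLICY.get(segment)
    if policy is None or not authenticated:
        return False
    return mfa_passed or not policy["require_mfa"]

assert access_allowed("pii-subnet", authenticated=True, mfa_passed=True)
assert not access_allowed("pii-subnet", authenticated=True, mfa_passed=False)
```

In practice this logic would live in an identity-aware proxy or access broker, but the principle is the same: the protection level follows the segment, not the network as a whole.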

How to Implement Network Segmentation

Network segmentation isn’t a new concept — companies have been leveraging physical segmentation of networks for years to reduce the impacts of a potential breach. As the name implies, this type of segmentation uses physical controls such as firewalls to create separate subnets and control traffic flows.

Cloud segmentation, meanwhile, comes with a bigger challenge: creating network segments across digital environments that may be separated by substantial physical distance. As a result, cloud segmentation was often deemed too complex to work; the sheer number of unique cloud services, solutions, and environments, combined with the dynamic nature of cloud resources, made it seemingly impossible to effectively portion out and protect these subnets.

With the right strategy, however, it’s possible for businesses to both segment and secure their cloud networks. Here, logical rather than physical segmentation is vital. Using either virtual local area networks (VLANs) or more in-depth network addressing schemes, IT teams can create logical subnetworks across cloud services that behave as if they’re physically separate, in turn increasing overall defense.
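Python's standard `ipaddress` module shows how a single cloud address block can be carved into logical per-workload subnets; the address ranges below are illustrative.

```python
import ipaddress

# Carve one cloud VPC address block into per-workload /24 subnets.
vpc = ipaddress.ip_network("10.10.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))  # 256 logical segments

web, app, db = subnets[0], subnets[1], subnets[2]
print(web, app, db)  # 10.10.0.0/24 10.10.1.0/24 10.10.2.0/24

# Hosts in different subnets are logically separate even though they
# share the same underlying cloud fabric.
assert ipaddress.ip_address("10.10.2.7") in db
```

The addressing scheme alone doesn't enforce anything; it becomes segmentation once security groups, route tables, or firewall rules are bound to each subnet.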

Worth noting? Validation of these virtual networks is critical to ensure protective measures are working as intended. In practice, this means deploying tools and technologies that make it possible to visualize access across all network environments — local or otherwise — to understand network topology and explore traffic paths. Validation also requires the identification and remediation of issues as they arise. Consider a subnet that includes multiple cloud services. If even one of these services contains vulnerabilities to threats such as Log4j, the entire subnetwork could be at risk. Regular vulnerability scanning paired with active threat identification and remediation is critical to ensure segmentation delivers effective security.
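A simple version of that validation can be automated: walk the exported access rules and flag any that reach a restricted subnet from an untrusted source. The rule format, subnet ranges, and trust list below are assumptions for illustration.

```python
import ipaddress

# Hypothetical rule export: (source CIDR, destination CIDR, port).
RULES = [
    ("10.10.0.0/24", "10.10.1.0/24", 443),   # web tier -> app tier
    ("10.10.1.0/24", "10.10.2.0/24", 5432),  # app tier -> database
    ("0.0.0.0/0",    "10.10.2.0/24", 22),    # misconfiguration: internet -> database
]

RESTRICTED = ipaddress.ip_network("10.10.2.0/24")
TRUSTED_SOURCES = [ipaddress.ip_network("10.10.1.0/24")]

def violations(rules):
    """Return rules that reach the restricted subnet from an untrusted source."""
    bad = []
    for src, dst, port in rules:
        src_net = ipaddress.ip_network(src)
        dst_net = ipaddress.ip_network(dst)
        if dst_net.subnet_of(RESTRICTED) and \
           not any(src_net.subnet_of(t) for t in TRUSTED_SOURCES):
            bad.append((src, dst, port))
    return bad

print(violations(RULES))  # [('0.0.0.0/0', '10.10.2.0/24', 22)]
```

Real segmentation validation must also account for transitive paths (A reaches B, B reaches C), which is why topology-aware tooling goes well beyond a flat rule scan like this one.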

Closing the Cloud Security Gap with RedSeal

Cloud solutions offer the benefit of any time, anywhere access coupled with scalable, on-demand resources. But clouds also introduce unique security challenges around user access, data protection, and security threat monitoring.

As a result, protecting data in the cloud requires a defense-in-depth strategy that creates layers of protection rather than relying on a single service or technology to defend against evolving threats. Cloud network segmentation is one key component in this DiD strategy — by logically segmenting cloud services into smaller and more manageable networks, companies can reduce complexity, increase control and improve responsiveness.

But segmentation alone isn’t enough; enterprises also need the ability to visualize multiple micro-networks at scale, identify potential issues and quickly remediate concerns.

Ready to get started? Discover how RedSeal can help visualize, verify and validate your cloud network segmentation. Watch a Demo.

Do You Need a More Intelligent and Secure Network?

By the third quarter of 2021, the number of recorded network breaches already exceeded the total breach volume of 2020 by 17 percent. What’s more, the total cost of breaches continued to rise. Data from IBM and the Ponemon Institute found that the average cost of a data breach topped $4.24 million in 2021, the highest this value has been in nearly two decades.

What does this mean? Businesses need better ways to react and respond to network security vulnerabilities. While this starts with basic security measures to mitigate the impact of issues as they occur, it also requires the creation of more intelligent networks capable of proactively detecting, identifying, and responding to threats.

Why Security Should Be a Top Priority for Every Organization

Effective security tools are now table stakes for organizations to ensure they meet evolving legislative standards around due diligence and data control. But these straightforward security measures aren’t enough to address the evolving nature of information technology (IT) environments — from rapid cloud adoption to mobile-first operations to the uptake of edge computing. The sheer volume and variety of corporate IT environments create ever-changing challenges for organizations.

Increasing complexity also plays a role in security. Driven by the rapid shift to remote work and underpinned by the unstable nature of return-to-work plans, security teams now face the challenge of distributed and decentralized security environments which naturally frustrate efforts to create consistent security policies.

Consider some of the biggest data breaches of recent years:

  • Android: 100 million records exposed. In May 2021, the records of more than 100 million Android users were exposed as a result of cloud misconfigurations. Personal information, including names, email addresses, dates of birth, location data, payment information, and passwords, was available to anyone who knew where to look.
  • Facebook: 553 million records exposed. Facebook records of more than 553 million users from 106 countries were leaked online. Leaked data included phone numbers and email addresses, which according to security researcher Alon Gal, “would certainly lead to bad actors taking advantage of the data to perform social-engineering attacks [or] hacking attempts.”
  • LinkedIn: 700 million records exposed. Over 90 percent of LinkedIn members had their data compromised when it appeared for sale online. Information up for grabs included full names, phone numbers, physical addresses, email addresses, and details of linked social media accounts and user names.

Enterprises aren’t the only target for cybercriminals. As noted by Forbes, 43 percent of all cyberattack victims are small and midsize businesses (SMBs). While breaching a large enterprise can be a multimillion-dollar jackpot, SMBs are often easier targets that offer quick gains.

As a result, robust security must be a priority for every organization, regardless of size or industry.

Why Intelligence Matters for Effective Network Defense

While security is a solid starting point, it’s not enough in isolation. To handle evolving threats, companies need intelligent frameworks capable of identifying critical assets, pinpointing key vulnerabilities, and prioritizing security response. This intelligence-led approach is essential to defend IT environments now underpinned by interconnected devices, multiple cloud frameworks, and expanding edge services.

Consider that 92 percent of companies now leverage a multi-cloud approach to maximize efficiency and drive return on investment (ROI). Using multiple clouds offers a way for companies to pinpoint — and pay for — the specific solutions and services they need to achieve business aims. However, ensuring security across multiple cloud touch points rapidly becomes complex, especially as these clouds share and modify data in real-time.

What’s the best-case scenario during an attack? Compromise in one cloud hampers the efficacy of others but poses no substantive risk. And the worst case? Attacks on primary cloud services lead to successive service failures and significant downtime.

To address the challenges of expanding IT environments, companies must take an intelligence-led security approach. In practice, this means deploying tools capable of autonomous action to help detect and report IT threats, combined with robust data collection and analysis to help pinpoint root causes, rather than simply solving for symptoms.

How to Increase Your Network Intelligence and Security

While there’s no one-size-fits-all approach to increasing network intelligence and security, four functional approaches can help reduce total risk and boost your protective potential.

  1. Comprehensive Cloud Asset Identification: As cloud environments become more complex, the risk of asset blind spots that allow malicious actors to infiltrate networks without detection increases. Robust asset identification across all cloud services — from private clouds to public services such as AWS, Azure, and Google — is critical to limit overall risk.
  2. Complete Network Visualization and Access Management: Sight drives better security. If you can see what’s on your network and how it all connects, you can better identify where potential threats may occur. As a result, companies must deploy tools that offer complete visibility across all network environments and provide robust access control to ensure the right people have access to the right resources.
  3. Consistent Network Compliance: Today’s organizations must follow standards such as the Payment Card Industry Data Security Standard (PCI DSS) and the Cybersecurity Maturity Model Certification (CMMC), along with legislation including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Adhering to these standards and mandates is essential to demonstrate due diligence and protect your organization against penalties or legal action if security breaches do occur.
  4. Critical Vulnerability Prioritization: The scope and scale of new attack vectors make security triage a priority. End-to-end assessment of potential network risks based on exposure and access can help your teams prioritize vulnerabilities and design effective response frameworks.
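One way to sketch that prioritization is a score that weights raw severity by exposure, so an internet-facing asset outranks an isolated one with a similar CVSS score. The fields and weighting below are illustrative assumptions, not any particular vendor's formula.

```python
# Illustrative triage: rank findings by severity weighted by exposure.
# Records and the 2x exposure multiplier are hypothetical examples.
findings = [
    {"host": "db-01",  "cvss": 9.8, "internet_exposed": False},
    {"host": "web-03", "cvss": 7.5, "internet_exposed": True},
    {"host": "dev-12", "cvss": 9.1, "internet_exposed": False},
]

def priority(finding):
    """Internet exposure doubles the effective urgency of a finding."""
    return finding["cvss"] * (2.0 if finding["internet_exposed"] else 1.0)

for f in sorted(findings, key=priority, reverse=True):
    print(f["host"], round(priority(f), 1))
```

Even this toy model surfaces the key insight: `web-03` jumps to the top of the queue despite having the lowest raw CVSS score, because access matters as much as severity.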

Closing the Security Gap

No matter your business size, specialization, or industry, you need a more secure and intelligent network. Enabled by increasingly complex IT environments and driven by evolving attack vectors, malicious actors are finding — and exploiting — new ways to compromise critical functions. An intelligent response is now critical to increase user confidence, capture key data, and protect your network.

RedSeal can help you close the security gap with an adaptable and intelligent approach to network security. From cloud security frameworks to robust network compliance solutions, access and visibility tools, and critical vulnerability prioritization, we have the technology tools and expertise to help your team build a reliable and responsive security framework.

Increase intelligence, navigate network security challenges and reduce real-life risks with RedSeal. Let’s get started.

How Security Vulnerabilities Expose Thousands of Cloud Users to Attacks

Cloud computing has revolutionized data storage and access. It’s led the charge for digital transformation and allowed the increased adoption of remote work. At the same time, however, cloud computing has also increased security risks.

As networks have grown and cloud resources have become more entrenched in everyday workflows, cloud computing has created larger potential attack surfaces. To safeguard their mission-critical data and operations, organizations need to know the chief cloud cyber risks and how to combat them.

Why Cloud Users Are at Risk

Cloud platforms are multi-tenant environments. They share infrastructure and resources across thousands of customers. While a cloud provider acts to safeguard its infrastructure, that doesn’t address every cloud user’s security needs.

Cybersecurity in the cloud requires a more robust solution to prevent exposure. Instead of assuming that service providers will protect their data, customers must carefully define security controls for workloads and resources. Even if you’re working with the largest cloud service providers, new security vulnerabilities emerge every day.

For example, Microsoft says it invests about $1 billion in cybersecurity annually, but vulnerabilities still surface. Case in point: The technology giant warned thousands of cloud customers that threat actors might be able to read, change, or delete their main databases. Intruders could uncover database access keys and use them to grab administrative privileges. While fixing the problem, Microsoft admitted it could not change the database access keys itself; the fix required customers to create new ones. The burden was on customers to take action, and those that didn’t remained vulnerable to cyberattacks.

What Type of Vulnerabilities Affect Cloud Customers?

Despite the security protections cloud providers employ, cloud customers must use best practices to manage their cyberattack protection.

Without a solid security plan, multiple vulnerabilities can exist, including:

1. Misconfigurations

Misconfigurations continue to be one of the biggest threats for cloud users. A few examples:

  • A breach at Prestige Software due to a misconfiguration using Amazon S3 services caused widespread data compromise. This single event exposed a decade’s worth of customer data from popular travel sites, such as Expedia, Hotels.com, and Booking.com.
  • A misconfigured firewall at Capital One put the personal data of 100 million customers at risk.
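A first line of defense against misconfigurations like these is a routine audit of exported storage settings. The bucket records and checks below are a minimal illustrative sketch, not a real provider's API.

```python
# Minimal config-audit sketch over exported storage-bucket settings.
# Bucket names and fields are made up for illustration.
buckets = [
    {"name": "customer-exports", "public_read": True,  "encrypted": False},
    {"name": "app-logs",         "public_read": False, "encrypted": True},
]

def audit(bucket_list):
    """Flag buckets that are publicly readable or unencrypted at rest."""
    issues = []
    for b in bucket_list:
        if b["public_read"]:
            issues.append((b["name"], "bucket is publicly readable"))
        if not b["encrypted"]:
            issues.append((b["name"], "encryption at rest disabled"))
    return issues

for name, problem in audit(buckets):
    print(f"{name}: {problem}")
```

In a real environment the input would come from the provider's configuration APIs, and the checks would run continuously rather than as a one-off script.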

2. Access Control

Poor access control allows intruders to bypass weak authentication methods. Once inside the network, many organizations do not adequately restrict lateral movement or access to resources. For example, security vulnerabilities in Amazon Web Services (AWS) put up to 90% of S3 buckets at risk for identity compromise and ransomware. The problem? Businesses failed to remove permissions that allowed users to escalate privileges to admin status.

3. Insecure APIs

APIs require access to business data but can also provide vectors for threat actors. Organizations may have hundreds or even thousands of public APIs tied to microservices, leading to a large attack surface. Insecure APIs are cited as the cause of the infamous Equifax breach, which exposed nearly 150 million consumers’ records, along with security lapses at Geico, Facebook, Peloton, and Experian.

4. Lack of Shared Responsibility

Cloud providers manage the security of the cloud, but customers are responsible for handling the security of the data stored in the cloud. Yet, many users fail to keep up their end of this shared responsibility. According to Gartner, 99% of cloud security failures are due to customer errors.

5. Vendors or Third-Party Software

Third-party cloud components mean your networks are only as secure as your vendor’s security protocols. If they are compromised, it may provide a pathway for attackers into your network.

More than half of businesses have seen a data breach caused by a third party. That’s what happened to Audi, Volkswagen, and dozens of others. The infamous REvil ransomware group exploited a vulnerability in Kaseya, a remote monitoring platform, and used it to attack managed service providers (MSPs) to gain access to thousands of customers.

How Can Cloud Users Protect Themselves?

With the acceleration of remote workers and hybrid cloud and multicloud environments, attack surfaces have increased greatly over the past few years. At the same time, hackers have become more sophisticated in their methods.

Since most security tools work in only one environment, stitching several together can create a complex web that becomes difficult to manage.

Figuring out how to prevent cyberattacks requires a multi-pronged approach, but it starts with understanding how all of your security tools work together across on-prem, public clouds, and private clouds. You need strategies to monitor all of your networks, including ways to:

  • Interpret access controls across both cloud-native and third-party firewalls (service chaining)
  • Continuously validate and ensure security compliance
  • Manage network segmentation policies and regulations

Security teams must be able to answer these concerns:

  • What resources do we have across our cloud and on-premises environments?
  • What access is possible?
  • Are resources exposed to the public internet?
  • Do our cloud deployments meet best practices for cybersecurity?
  • Do we validate cloud network segmentation policies?

Without a comprehensive cybersecurity solution that evaluates and identifies potential risks, it will be challenging to mitigate vulnerabilities and identify the downstream impacts from security lapses. Even if you believe you have every security measure you need in place across all of your cloud resources, you need a way to visualize resources, identify potential risks, and prioritize threat mitigation.

A Comprehensive Cloud Security Posture Management Solution

Solving a problem starts with identifying it. You need a way to visualize potential vulnerabilities across your networks and cloud resources.

A Cloud Security Posture Management (CSPM) solution will identify vulnerabilities, such as misconfigurations, unprotected APIs, inadequate access controls, and flag changes to security policies. This helps you better understand exposure risks, create more robust cloud segmentation policies, and evaluate all of your cloud vulnerabilities.

Many CSPM solutions, however, only present their findings in static, tabular form. It can be challenging to understand relationships and gain full awareness of the interconnectivity between cloud resources. Beyond just monitoring traffic, security teams also need to see how instances reach the cloud, which security points they pass through, and which ports and protocols apply.

RedSeal Classic identifies what’s on your network environments and how it’s all connected. This helps you validate security policies and prioritize potential vulnerabilities. RedSeal Classic can evaluate AWS, Azure, Google Cloud, and Oracle Cloud environments along with Layers 2, 3, 4, and 7 in your physical networks for application-based policies and endpoint information from multiple sources.

RedSeal Stratus allows users to visualize their AWS cloud and Elastic Kubernetes Service (EKS) inventory. We’re currently offering an Early Adopters program for RedSeal Stratus, our SaaS-based CSPM, including a concierge onboarding service, so you can see the benefits first-hand.

To learn more about how RedSeal can help you see how your environment is connected and what’s at risk, request a demo today.

Doing More with Less: Consolidating Your Security Toolkit

Cyber threats are fast-evolving, and organizations must stay vigilant at all times to protect their business-critical information from prying eyes. One oversight or outdated control could expose your network to different types of cyberattacks, leading to costly breaches.

Information security has become even more challenging in the past year as organizations had to shift their IT budget to tackle the sudden changes brought on by the COVID-19 pandemic. As the dust settles, many security teams are left with a smaller cybersecurity budget. The constraints are affecting staffing decisions and technology adoption. Today, many IT departments are stretched thin, making it even harder to be proactive about their security measures. However, organizations can consolidate their security toolkits and conserve funds while weathering the storm.

The Problem: Tight Budgets, Reduced Staffing, Increased Threats

To cope with new business demands, many organizations had to restructure their IT budgets, leaving less funding and fewer team members. Meanwhile, the number of cyberattacks has increased significantly since the pandemic. Many organizations had to respond quickly to support remote working, leaving security gaps and vulnerabilities in their networks. Additionally, the proliferation of devices used by remote workers increases the attack surface dramatically while making it even harder for security teams to gain a holistic view of their environments.

Furthermore, the fast pace of digital transformation has accelerated cloud adoption. Yet, cloud security is complex and distributed. There’s an exponential growth in misconfigurations of cloud security settings, which leave sensitive data and resources unintentionally exposed to the public internet.

To plug security holes quickly, companies cobbled together multiple point solutions. While this approach may seem reasonable in a pinch, security teams soon realized they have to piece together data from various sources to analyze threats and parse through duplicate alerts to get to the bottom of an issue. Using multiple security tools is time-consuming and labor-intensive and drastically increases response time.

This heavy reliance on digital assets and processes, along with the complexity of cybersecurity and the distributed nature of cloud computing, has created the perfect storm where threat actors can exploit various vulnerabilities to attack organizations and steal their data.

How Organizations Can Weather the Cybersecurity Storm

Companies are under constant pressure to do more with less when it comes to cybersecurity. But piling on more point solutions will only add inefficiency to already overwhelmed IT resources.

To improve performance on a tight budget, you must direct resources to focus on the interaction between technologies, systems, and processes. You can achieve this most effectively by consolidating your existing security tools into a single-pane-of-glass solution, which gives you a holistic view of your environment.

The Benefits of Consolidating Your Security Toolkit

From saving money to improving your security, here are the advantages of consolidating your cybersecurity tools:

  • Reduce vulnerability. Each security system that connects to your network is a potential vulnerability. Using different tools can actually increase your attack surface and make your IT infrastructure less secure.
  • Lower total cost of ownership. The cost of point solutions can add up quickly. By using fewer tools, you can spend less on these products while saving on training, management, and maintenance.
  • Increase IT productivity. Point solutions often have overlapping functionalities and generate duplicate alerts. IT teams have to spend extra time sorting through all the information before taking action.
  • Reduce resource needs. A consolidated toolkit requires fewer resources to operate and monitor. The streamlined workflows also help free up IT resources to respond to critical issues.
  • Shorten response time. A single pane of glass view helps minimize duplicate or missed alerts, allowing security teams to identify issues and respond more quickly.
  • Improve cost-efficiency. Consolidation and automation simplify IT management so you can perform system backup, maintenance, monitoring, and other essential functions more efficiently.
  • Eliminate silos. Tool sprawl can create silos between teams. A consolidated toolkit helps you improve visibility, enhance collaboration, and gain a holistic understanding of your entire IT infrastructure.

How to Consolidate Your Security Toolkit

Start by designing a strategy, conducting a risk assessment, and performing a gap analysis to identify what you need in a consolidated security solution. Apply security frameworks (e.g., NIST-800 and ISO 27001) and refer to compliance standards (e.g., HIPAA, PCI-DSS, DFARS) to determine your cybersecurity requirements.

Then, take stock of all the features you’re using in the current point solutions. Your consolidated toolkit should cover these functionalities without compromising the ability to safeguard your networks, systems, applications, data, and devices.

Use a solution provider that understands your strategy and can help you design a solution that integrates with your existing infrastructure to reduce friction during implementation and migration. Your partner should also help you address the human change elements during the adoption process by providing training guides and ongoing support.

Strengthen Your Cybersecurity Posture Through Consolidation

There are many benefits to consolidating your security toolkit, including better security, improved IT productivity, and higher cost-efficiency. But not all security solutions are created equal.

To cover all your bases, choose a consolidated solution that addresses these critical aspects:

  • Cloud security. Your toolkit should allow you to visualize all your environments, including public cloud, private cloud, and on-premise servers, all in one place.
  • Incident response. Your solution should help you detect network incidents, facilitate investigations, and offer containment options to minimize loss.
  • Compliance monitoring and reporting. Your security tool should automate monitoring and document any changes you implement to help streamline security audits and compliance reporting.
  • Remote workforce support. Your vendor should ensure that your networks and cloud platforms have the appropriate security configurations to ensure secure remote access.
  • Vulnerability management. Your tool should visualize all network assets, so you can understand the context and focus resources on mitigating risks that are of the highest priority.

RedSeal offers comprehensive cybersecurity solutions in today’s business environment where cyber complexity and threats are rapidly escalating. Global 2000 corporations and government agencies trust us to help them secure their networks and assets.

Watch our demo to see how we can help you get all your cybersecurity needs covered.

Future-Proofing Your Security Infrastructure

Cybersecurity is getting more complicated every day. Why is this happening? Organizations are seeing their infrastructure becoming more complex, attack surfaces growing dramatically, and threats from cybercriminals evolving. What’s more, the reliance on public cloud, private cloud, hybrid cloud, and multi-cloud environments — coupled with more remote workers — has expanded the security perimeter for many organizations.

Even before COVID burst onto the scene, cybercrime was on the rise. Instead of a lone hacker sitting in a dark basement, contemporary cyber threat actors are part of organized crime rings.

All these trends underscore the importance of future-proofing your security infrastructure to combat major security threats and protect your mission-critical data.

Cyberattacks Are on the Rise: Data Tells the Tale

From SolarWinds to the Colonial Pipeline attack, cybercriminals have been making headlines in recent years. In addition, statistics reveal that cyberattacks are an ever-growing problem:

Attacks are more prevalent, and they are getting more expensive. The average cost of a data breach now exceeds $4.2 million per incident and can cause recurring problems for years. On average, more than $2.9 million is lost to cybercrime every minute.

Despite increased spending on cybersecurity and the best efforts of chief information security officers (CISOs) and information technology (IT) teams, nearly 80% of senior IT leaders believe their organizations lack sufficient protection against cyberattacks. With the rising threat, every organization needs a strategy to future-proof its infrastructure.

What is Future-Proofing?

Future-proofing your cybersecurity creates a robust foundation that can evolve as your organization grows and new cyber threats emerge. This includes continually assessing your infrastructure for security gaps, proactively identifying threats, and remediating potential weaknesses.

Future-proof planning encompasses the totality of your security efforts. Failure to plan puts your entire organization at risk. You simply cannot afford to be left unprotected against current and future threats.

What Can (and Can’t) Be Future-Proofed within Your Technology Infrastructure?

What makes future-proofing technology challenging is that we don’t know exactly what the IT landscape will look like in the future. A few years ago, who could have predicted the explosion in remote employees, often working on unprotected home networks?

The good news is that the cloud has given us tremendous flexibility and helps us future-proof without overspending right now on capacity we may or may not need. With nearly infinite scalability, cloud applications have allowed organizations to adapt and grow as necessary. However, the cloud has also put more sensitive and proprietary data online than ever before and made IT infrastructure more complex.

To future-proof your infrastructure, you need an approach for visualizing, monitoring, and managing security risks across every platform and connection. This lets you expand your security perimeter as your network grows and proactively identify new exposure as you evolve.

How Can Organizations Prepare for the Future?

Security needs to be part of every company’s DNA. Before you make any business decision, you should run it through security filters to ensure the right safeguards are in place. Future-proofing your organization takes a security culture that extends beyond the IT department.

With data in the cloud, there’s a shared security responsibility. For example, public cloud providers take responsibility for their cloud security, but they are not responsible for your apps, servers, or data security. Too many companies are still relying on cloud providers to protect assets and abdicating their part of the shared security model.

Between multi-cloud, hybrid cloud environments, and a mix of cloud and on-prem applications, it’s become increasingly difficult to track and manage security across every platform. Many security tools only work in one of these environments, so piecing together solutions is also challenging.

For example, do you know the answers to these questions:

  • What resources do we have across all our public cloud and on-premises environments?
  • Are any of these resources unintentionally exposed to the internet?
  • What access is possible within and between cloud and on-premises environments?
  • Do our cloud deployments meet security best practices?
  • How do we validate our cloud network segmentation policies?
  • Are we remediating the riskiest vulnerabilities in the cloud first?

An in-depth visualization of the topology and hierarchy of your infrastructure can uncover vulnerabilities, identify exposure, and provide targeted remediation strategies.

You also need a cloud security solution to identify every resource connected to the internet. Whether you’re using AWS, Microsoft Azure, Google Cloud, Oracle Cloud, or other public cloud resources along with private cloud and on-prem resources, you need a holistic view of security.

Traditional security information and event management (SIEM) systems often produce a large volume of data, making it unwieldy to identify and isolate the highest-priority concerns. You need a network model across all resources to accelerate network incident response and quickly locate any compromised device on the network.
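To make the idea of a network model concrete, here is a minimal sketch (not RedSeal’s actual implementation) of how reachability analysis answers the incident-response question “what can a compromised host reach?”: model allowed connections as a directed graph and walk it with breadth-first search. The hosts and edges below are hypothetical examples.

```python
from collections import deque

def reachable(edges, start):
    """Return the set of nodes reachable from `start` via allowed connections."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, set()).add(dst)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical topology: internet -> web tier -> app tier -> database
edges = [
    ("internet", "web-vm"),
    ("web-vm", "app-vm"),
    ("app-vm", "db-server"),
    ("workstation", "app-vm"),
]

# If web-vm is compromised, everything downstream is at risk:
print(sorted(reachable(edges, "web-vm")))  # ['app-vm', 'db-server', 'web-vm']
```

A real model would attach ports, protocols, and the security controls on each link, but even this toy version shows how a graph of allowed access turns “which device is compromised?” into an answerable blast-radius query.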

Another necessity is continuous penetration tests to measure your state of readiness and re-evaluate your security posture. This helps future-proof your security as you add resources and new threats emerge.

Create a Secure Future for Your Organization

Creating a secure future for your organization is essential. As IT infrastructure and connectivity become more complex, attack surfaces continue to grow, and cybercriminals evolve their tactics, the risks are too great for your company, customers, and career not to build a secure foundation. You need to do more than plan your response to an incident; you must know how to prevent cyberattacks with proactive security measures.

Secure all your network environments — public clouds, private clouds, and on-premises — in one comprehensive, dynamic visualization. That’s RedSeal.

RedSeal — through its cloud security solution and professional services — helps government agencies and Global 2000 companies measurably reduce their cyber risk by showing them what’s in all their network environments and where resources are exposed to the internet. RedSeal verifies that networks align with security best practices, validates network segmentation policies, and continuously monitors compliance with policies and regulations.

Contact RedSeal today to take a test drive.

Mitigating Cloud Security’s Greatest Risk: Exposure

Cloud security is complex and distributed. Implementing security controls across on-premises environments traditionally sits with the information security team, but in the cloud, that responsibility may be distributed across development, DevOps, and InfoSec teams. Because security is not the primary focus of developers and DevOps teams, the result is often an increase in misconfigurations that introduce the risk of breaches.

These security challenges in the cloud have become so prevalent that Gartner has defined cloud security posture management (CSPM) as a new category of security products designed to identify misconfiguration issues and risks in the cloud. CSPM tools today are relied on to provide visibility into cloud infrastructure and compliance, but they still haven’t been able to address this issue at scale for InfoSec teams. These teams require solutions that can deliver risk-based, prioritized remediations in an automated way to handle cloud scale and complexity. To determine which issues to remediate first, InfoSec teams need to identify critical resources with unintended and accidental exposure to the internet and other untrusted parts of their cloud.

Calculating Exposure Considering All Security Controls

Whether they are on-prem or in the cloud, security professionals worry about getting breached. One recent report found that 69% of organizations admit they have experienced at least one cyberattack that started by exploiting an unknown or unmanaged internet-facing asset. Bad actors can now simply scan the perimeter of your cloud, look for exposed assets, and get into your network that way.

Cloud service providers (CSPs) like Amazon Web Services and Microsoft Azure have attempted to solve security by developing their own sets of controls, ranging from security groups and network access control lists (NACLs) to their own native network firewalls.

Cloud-first companies often rely on these native tools from the CSPs, but for others who aren’t as far along on their cloud journey, making the transition from traditional on-prem to cloud workloads means pulling along their network security practitioners with them. These teams, who often aren’t cloud experts, are responding by deploying third-party firewalls and load balancers in the cloud due to their longstanding familiarity with them from the on-prem world.

Furthermore, the rise of application containerization with Kubernetes (and its corresponding flavors from AWS, Azure and Google Cloud) allows additional security controls such as pod security policies and ingress controllers.

These security controls are invaluable tools for security teams scrambling to secure their sprawling cloud environments, and some are under the control of development and DevOps teams. Still, they are largely unaccounted for by current CSPM tools when attempting to assess unintended exposure risk.

Current CSPM Solutions Don’t Accurately Calculate Access

Existing solutions look for misconfigurations at the compute or container level but don’t truly understand end-to-end access from critical resources to an untrusted network. They are essentially calling into the CSPs’ APIs, so if the setting in AWS for a particular subnet equals “public,” the tool believes there is exposure to the internet. That’s not necessarily true: a security team may have other controls in place, like a third-party firewall or a Kubernetes security policy, that successfully prevent access, or the flagged control may not even be in the path to the critical resources.
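The difference between a raw “public subnet” flag and effective reachability can be sketched in a few lines. This is an illustrative toy model with made-up data structures, not any vendor’s algorithm: a resource is only truly exposed if the subnet is public and every control along the network path permits internet traffic.

```python
def effectively_exposed(resource):
    """A resource is exposed only if its subnet is public AND every control
    in the network path (security group, third-party firewall, Kubernetes
    policy, ...) allows internet traffic."""
    if not resource["subnet_public"]:
        return False
    return all(ctrl["allows_internet"] for ctrl in resource["path_controls"])

# Hypothetical database that a naive subnet check would flag as exposed:
db = {
    "name": "customer-db",
    "subnet_public": True,  # the CSP API reports the subnet as "public"
    "path_controls": [
        {"name": "security-group", "allows_internet": True},
        {"name": "third-party-firewall", "allows_internet": False},  # blocks access
    ],
}

print(effectively_exposed(db))  # False: the in-path firewall prevents exposure
```

A tool that stops at the subnet flag would raise an alert here; a tool that evaluates the full path, as described above, would not. Real exposure analysis must also account for ports, protocols, and rule ordering, which is exactly where flag-based checks fall short.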

The result is that already short-staffed security teams spend their days chasing security issues that are not the ones with the greatest impact on the organization. The question to ask of today’s CSPM products is whether they simply repeat data from the CSPs’ settings or accurately calculate effective reachability to critical resources (and through which specific controls). Security teams need accurate and complete information to inform their remediation options, down to the CSP-native security groups and the specific ports and protocols controlling the access that may allow exposure to occur.

Increasing cloud complexity is making security as challenging as ever. The ability to quickly identify at-risk resources would go a long way toward preventing many potential data breaches. Still, the approach current tools take is incomplete and disregards much of what security teams are already doing to address the problem. Tools need to account for all security controls in place if security teams are to have truly accurate information on which to act.

For more information on RedSeal Stratus, our new CSPM solution, check out our website or sign up for our Early Adopters program.