Tag Archive for: Network Security

Zero Trust: Back to Basics

The Executive Order on Improving the Nation’s Cybersecurity, issued in 2021, requires agencies to move toward zero trust in a meaningful way as part of modernizing infrastructure. Yet federal agencies typically find zero trust challenging to implement. The concept is sound in theory, but the challenge often lies in legacy systems and on-premises networks whose tendrils reach into multiple locations, many of them unknown.

Identity management and authentication tools are an important part of network security, but before you can truly implement zero trust, you need an understanding of your entire infrastructure. Zero trust isn’t just about identity. It’s also about connectivity.

Take a quick detour here. Let’s say you’re driving a tractor-trailer hauling an oversized load. You ask Google Maps for the fastest route and it plots one out for you. However, you find that part of the route is a one-lane dirt road where your rig can’t fit. So, you go back to your mapping software and find alternate routes. Depending on how much time you have, the number of alternative pathways to your final destination is nearly endless.

Computer security needs to think this way, too. Even if you’ve blocked the path for threat actors in one connection, how else could they get to their destination? While you may think traffic only flows one way on your network, most organizations find there are multiple pathways they never knew about (or even thought of).

To put in efficient security controls, you need to go back to basics with zero trust. That starts with understanding every device, application, and connection on your infrastructure.

Zero Trust Embodies Fundamental Best-Practice Security Concepts

Zero trust returns to the basics of good cybersecurity by assuming there is no traditional network edge. Whether it’s local, in the cloud, or any combination of hybrid resources across your infrastructure, you need a security framework that requires everyone touching your resources to be authenticated, authorized, and continuously validated.

By providing a balance between security and usability, zero trust makes it more difficult for attackers to compromise your network and access data. While providing users with authorized access to get their work done, zero-trust frameworks prevent unauthorized access and lateral movement.

By properly segmenting your network and requiring authentication at each stage, you can limit the damage even if someone does get inside your network. However, this requires a firm understanding of every device and application that is part of your infrastructure, as well as your users.

Putting Zero Trust to Work

The National Institute of Standards and Technology (NIST) Special Publication 800-207 provides the conceptual framework for zero trust that government agencies need to adopt, and it builds on NIST’s Risk Management Framework.

The risk management framework has seven steps:

  1. Prepare: map and analyze the network
  2. Categorize: assess risk at each stage and prioritize
  3. Select: determine appropriate controls
  4. Implement: deploy zero trust solutions
  5. Assess: ensure solutions and policies are operating as intended
  6. Authorize: certify that systems and workflows are ready for operation
  7. Monitor: provide continuous monitoring of security posture

NIST’s subsequent draft white paper on planning for a zero-trust architecture reinforces the crucial first step: mapping the attack surface and identifying the key parts that a threat actor could target.

Instituting zero trust security requires detailed analysis and information gathering on devices, applications, connectivity, and users. Only when you understand how data moves through your network and all the different ways it can move through your network can you implement segmentation and zero trust.

Analysts should identify options to streamline processes, consolidate tools and applications, and sunset any vulnerable devices or access points. This includes defunct user accounts and any non-compliant resources.

Use Advanced Technology to Help You Perform Network Analysis

Trying to map your network manually is nearly impossible. No matter how many people you assign or how much time you have, things will get missed. Every device, appliance, configuration, and connection has to be analyzed. Third parties and connections to outside sources need to be evaluated. And while you’re conducting this inventory, the network is in a constant state of change, which makes it even easier to miss key components.

Yet, this inventory is the foundation for implementing zero trust. If you miss something, you leave security gaps within your infrastructure.

The right network mapping software for government agencies can automate this process by gathering the information for you. Network mapping analysis can calculate every possible pathway through the network, taking into account NAT (network address translation) and load balancing. During this stage, most organizations uncover a surprising number of previously unknown pathways. Each connection point needs to be assessed for need and whether it can be closed to reduce the attack surface.
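
To see why exhaustive path calculation matters, here is a minimal pure-Python sketch (the topology and node names are hypothetical) that enumerates every simple path between two points. Even this tiny five-node network has three distinct routes from the firewall to the database:

```python
from collections import defaultdict

def all_paths(edges, src, dst):
    """Enumerate every simple path from src to dst in a directed graph."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    paths = []

    def walk(node, seen, path):
        if node == dst:
            paths.append(path)
            return
        for nxt in graph[node]:
            if nxt not in seen:
                walk(nxt, seen | {nxt}, path + [nxt])

    walk(src, {src}, [src])
    return paths

# Hypothetical topology: firewall, two DMZ hosts, a load balancer, a database.
edges = [("fw", "dmz1"), ("fw", "dmz2"), ("dmz1", "lb"),
         ("dmz2", "lb"), ("dmz1", "db"), ("lb", "db")]
print(all_paths(edges, "fw", "db"))
```

Real tools must also model NAT rewrites and load-balancer behavior along each hop, but the combinatorial growth of paths is already visible at this scale.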

Automated network mapping will also provide an inventory of all the gear on your network and IP space in addition to your cloud and software-defined network (SDN) assets. Zero trust requires you to identify who and what can access your network, and who should have that access.

Once you have conducted this exhaustive inventory, you can then begin to implement the zero-trust policies with confidence.

Since your network is in a constant state of evolution with new users, devices, applications, and connectivity being added, changed, or revised, you also need continuous monitoring of your network infrastructure to ensure changes remain compliant with your security policies.

Back to the Basics

The conversation about zero trust often focuses narrowly on identity. Equally important are device inventory and connectivity. The underlying goal of zero trust is allowing only specific authorized individuals to access specific things on specific devices. Before you can put in place adequate security controls, you need to know about all of the devices and all the connections.

RedSeal provides network mapping, inventory, and mission-critical security and compliance services for government agencies and businesses, and is Common Criteria certified. To implement a zero-trust framework successfully, you need to understand the challenges and strategies involved.

Download our Zero Trust Guide today to get started.

Keep it Separate, Keep it Safe: How to Implement and Validate Cloud Network Segmentation

The distributed nature of cloud computing makes it a must-have for business, thanks to on-demand resource availability, network connectivity, and compute scalability.

But the cloud also introduces unique security challenges. First is a rapidly expanding attack surface: As the number of connected third-party services powered by open-source code and APIs increases, so does the risk of compromise. According to the 2021 IBM Security X-Force Cloud Threat Landscape Report, more than 1,200 of the 2,500 known cloud vulnerabilities had been found within the preceding 18 months. Additionally, 100 percent of penetration testing efforts by IBM X-Force teams found issues with cloud policies or passwords.

Cloud network segmentation offers a way for companies to reduce the risk of cloud threats. By dividing larger networks into smaller subnets — each of which can be managed individually — businesses can boost protection without sacrificing performance. Here’s how it works.

Why Is Cloud Network Segmentation Valuable to Network Security?

Cloud segmentation is part of larger defense-in-depth (DiD) security practices that look to lower total risk by creating multi-layered frameworks which help protect key data from compromise. DiD is built on the concept that there’s no such thing as a “perfect” security solution — since, with enough time and patience, attackers can compromise any protective process. By layering multiple security measures onto network access points or data storage locations, however, the effort required for compromise increases exponentially, in turn reducing total risk.

And by breaking larger cloud networks down into smaller subnets, the scale of necessary defense decreases, making it possible for teams to differentiate lower-risk subnets from those that need greater protection. Segmentation offers practical benefits for businesses.

Reduced Complexity

Segmenting larger cloud frameworks into smaller cloud networks allows teams to reduce the overall complexity that comes with managing cloud solutions at scale. Instead of trying to find one policy or process that works for cloud networks end-to-end — without introducing security risks to protected data or limiting users’ ease of access — teams can create purpose-built security policies for each network segment.

Increased Granular Control

Segmentation also offers more granular control over network defenses. For example, teams could choose to deploy next-generation firewall tools, such as those capable of discovering and analyzing specific user behaviors, or implement runtime application self-protection (RASP) functions on a case-by-case basis.

Improved Responsiveness

Smaller subnets additionally make it possible for IT professionals to identify and respond to security issues quickly. Here’s why: Given the geographically disparate nature of cloud services (one provider might house their servers locally, while another might be states or countries away), tracking down the root cause of detected issues becomes like finding a digital needle in a virtual haystack. While it’s possible with advanced detection tools and techniques, it could take days or weeks. Segmentation, meanwhile, allows teams to identify and respond to issues quickly on a segment-by-segment basis.

Enhanced Operations

Network segmentation also helps companies enhance operations by aligning with cloud security best practices such as zero trust. Under a zero trust model, user identity is never assumed; instead, it must be proven and verified through authentication. Segmentation makes it possible to apply zero trust where necessary — such as gaining access to network segments that store personally identifiable information (PII) or intellectual property (IP) — in turn helping streamline cloud access without introducing security risk.

How to Implement Network Segmentation

Network segmentation isn’t a new concept — companies have been leveraging physical segmentation of networks for years to reduce the impacts of a potential breach. As the name implies, this type of segmentation uses physical controls such as firewalls to create separate subnets and control traffic flows.

Cloud segmentation, meanwhile, comes with a bigger challenge: creating network segments across digital environments that may be separated by substantial physical distance. As a result, cloud segmentation has often been deemed too complex to work, since the sheer number of unique cloud services, solutions, and environments, combined with the dynamic nature of cloud resources, made it seem impossible to effectively portion out and protect these subnets.

With the right strategy, however, it’s possible for businesses to both segment and secure their cloud networks. Here, logical rather than physical segmentation is vital. Using either virtual local area networks (VLANs) or more in-depth network addressing schemes, IT teams can create logical subnetworks across cloud services that behave as if they’re physically separate, in turn increasing overall defense.
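
As a sketch of what a logical addressing scheme looks like in practice, the example below uses Python’s standard `ipaddress` module to carve a hypothetical cloud address block into purpose-specific subnets, each of which could then carry its own security policy:

```python
import ipaddress

# Hypothetical addressing plan: split one VPC block into /24 segments.
vpc = ipaddress.ip_network("10.20.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

segments = {
    "web":  subnets[0],   # 10.20.0.0/24 -- public-facing tier
    "app":  subnets[1],   # 10.20.1.0/24 -- application tier
    "data": subnets[2],   # 10.20.2.0/24 -- PII storage, strictest policy
}

def segment_of(ip):
    """Return the logical segment an address belongs to, if any."""
    addr = ipaddress.ip_address(ip)
    for name, net in segments.items():
        if addr in net:
            return name
    return None

print(segment_of("10.20.2.17"))  # data
```

The segment names and ranges here are illustrative; the point is that segment membership becomes a computable property, which is what makes per-segment policy enforcement possible.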

Worth noting? Validation of these virtual networks is critical to ensure protective measures are working as intended. In practice, this means deploying tools and technologies that make it possible to visualize access across all network environments — local or otherwise — to understand network topology and explore traffic paths. Validation also requires the identification and remediation of issues as they arise. Consider a subnet that includes multiple cloud services. If even one of these services contains vulnerabilities to threats such as Log4j, the entire subnetwork could be at risk. Regular vulnerability scanning paired with active threat identification and remediation is critical to ensure segmentation delivers effective security.
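The blast-radius logic described above can be sketched in a few lines: given a (hypothetical) inventory of which services live in which subnet and which services carry known unpatched CVEs, flag every segment that contains at least one vulnerable service:

```python
# Hypothetical inventory and vulnerability data.
inventory = {
    "subnet-a": ["auth-svc", "report-svc"],
    "subnet-b": ["search-svc"],
}
vulnerable = {"report-svc": ["CVE-2021-44228"]}  # e.g. a Log4j finding

# One vulnerable service puts its whole segment at risk.
at_risk = {net for net, svcs in inventory.items()
           if any(s in vulnerable for s in svcs)}
print(sorted(at_risk))  # ['subnet-a']
```

A real validation pipeline would feed this from scanner output rather than literals, but the rule is the same: risk is assessed per segment, not per host.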

Closing the Cloud Security Gap with RedSeal

Cloud solutions offer the benefit of any time, anywhere access coupled with scalable, on-demand resources. But clouds also introduce unique security challenges around user access, data protection, and security threat monitoring.

As a result, protecting data in the cloud requires a defense-in-depth strategy that creates layers of protection rather than relying on a single service or technology to defend against evolving threats. Cloud network segmentation is one key component in this DiD strategy — by logically segmenting cloud services into smaller and more manageable networks, companies can reduce complexity, increase control and improve responsiveness.

But segmentation alone isn’t enough; enterprises also need the ability to visualize multiple micro-networks at scale, identify potential issues and quickly remediate concerns.

Ready to get started? Discover how RedSeal can help visualize, verify and validate your cloud network segmentation. Watch a Demo.

How Security Vulnerabilities Expose Thousands of Cloud Users to Attacks

Cloud computing has revolutionized data storage and access. It’s led the charge for digital transformation and allowed the increased adoption of remote work. At the same time, however, cloud computing has also increased security risks.

As networks have grown and cloud resources have become more entrenched in workflows, cloud computing has created a larger potential attack surface. To safeguard their mission-critical data and operations, organizations need to know the chief cloud cyber risks and how to combat them.

Why Cloud Users Are at Risk

Cloud platforms are multi-tenant environments. They share infrastructure and resources across thousands of customers. While a cloud provider acts to safeguard its infrastructure, that doesn’t address every cloud user’s security needs.

Cybersecurity in the cloud requires a more robust solution to prevent exposure. Instead of assuming that service providers will protect their data, customers must carefully define security controls for workloads and resources. Even if you’re working with the largest cloud service providers, new security vulnerabilities emerge every day.

For example, Microsoft says it invests about $1 billion in cybersecurity annually, but vulnerabilities still surface. Case in point: the technology giant warned thousands of cloud customers that threat actors might be able to read, change, or delete their main databases. Intruders could uncover database access keys and use them to gain administrative privileges. While fixing the problem, Microsoft admitted it could not change the database access keys itself; the fix required customers to create new ones. The burden was on customers to take action, and those that didn’t remained vulnerable to cyberattacks.

What Type of Vulnerabilities Affect Cloud Customers?

Despite the security protections cloud providers employ, cloud customers must use best practices to manage their cyberattack protection.

Without a solid security plan, multiple vulnerabilities can exist, including:

1. Misconfigurations

Misconfigurations continue to be one of the biggest threats for cloud users. A few examples:

  • A misconfigured Amazon S3 bucket at Prestige Software caused widespread data compromise. This single event exposed a decade’s worth of customer data from popular travel sites such as Expedia, Hotels.com, and Booking.com.
  • A misconfigured firewall at Capital One put the personal data of 100 million customers at risk.

2. Access Control

Poor access control allows intruders to bypass weak authentication methods. Once inside the network, many organizations do not adequately restrict lateral movement or access to resources. For example, security vulnerabilities in Amazon Web Services (AWS) put up to 90% of S3 buckets at risk for identity compromise and ransomware. The problem? Businesses failed to remove permissions that allowed users to escalate privileges to admin status.
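
The privilege-escalation pattern described above can be caught by auditing policy documents. Below is a minimal sketch that scans hypothetical IAM-style policies (the document shape follows AWS conventions, but the policies themselves are invented) for actions commonly abused to gain admin status:

```python
# Actions that commonly enable escalation to administrative privileges.
ESCALATION_ACTIONS = {
    "iam:PutUserPolicy", "iam:AttachUserPolicy",
    "iam:CreateAccessKey", "sts:AssumeRole",
}

def escalation_risks(policy):
    """Return the allowed actions in a policy that enable escalation."""
    risky = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        for action in actions:
            if action == "*" or action in ESCALATION_ACTIONS:
                risky.append(action)
    return risky

# Hypothetical policy: one harmless grant, one escalation path.
policy = {"Statement": [
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "*"},
    {"Effect": "Allow", "Action": "iam:PutUserPolicy", "Resource": "*"},
]}
print(escalation_risks(policy))  # ['iam:PutUserPolicy']
```

A production audit would pull real policies via the cloud provider’s APIs and cover far more action combinations; this only illustrates the shape of the check.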

3. Insecure APIs

APIs require access to business data but can also provide vectors for threat actors. Organizations may have hundreds or even thousands of public APIs tied to microservices, leading to a large attack surface. Insecure APIs are cited as the cause of the infamous Equifax breach, which exposed nearly 150 million consumers’ records, along with security lapses at Geico, Facebook, Peloton, and Experian.

4. Lack of Shared Responsibility

Cloud providers manage the security of the cloud, but customers are responsible for handling the security of the data stored in the cloud. Yet, many users fail to keep up their end of this shared responsibility. According to Gartner, 99% of cloud security failures are due to customer errors.

5. Vendors or Third-Party Software

Third-party cloud components mean your networks are only as secure as your vendor’s security protocols. If they are compromised, it may provide a pathway for attackers into your network.

More than half of businesses have seen a data breach caused by a third party. That’s what happened to Audi, Volkswagen, and dozens of others. The infamous REvil ransomware group exploited a vulnerability in Kaseya, a remote monitoring platform, and used it to attack managed service providers (MSPs) to gain access to thousands of customers.

How Can Cloud Users Protect Themselves?

With the acceleration of remote workers and hybrid cloud and multicloud environments, attack surfaces have increased greatly over the past few years. At the same time, hackers have become more sophisticated in their methods.

Since most security tools work in only one environment, organizations end up with a complex web of tools that becomes difficult to manage.

Figuring out how to prevent cyberattacks requires a multi-pronged approach, but it starts with understanding how all of your security tools work together across on-prem, public clouds, and private clouds. You need strategies to monitor all of your networks, including ways to:

  • Interpret access controls across both cloud-native and third-party firewalls (service chaining)
  • Continuously validate and ensure security compliance
  • Manage network segmentation policies and regulations

Security teams must be able to answer these concerns:

  • What resources do we have across our cloud and on-premises environments?
  • What access is possible?
  • Are resources exposed to the public internet?
  • Do our cloud deployments meet best practices for cybersecurity?
  • Do we validate cloud network segmentation policies?
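
The question “Are resources exposed to the public internet?” can be answered mechanically. Here is a sketch that scans firewall or security-group rules (in a simplified, hypothetical format) for anything open to the whole internet on sensitive ports:

```python
# Ports that should rarely, if ever, face the open internet.
SENSITIVE_PORTS = {22, 3389, 3306, 5432}

def internet_exposed(rules):
    """Return rules that open a sensitive port to 0.0.0.0/0."""
    return [r for r in rules
            if r["cidr"] == "0.0.0.0/0" and r["port"] in SENSITIVE_PORTS]

# Hypothetical rule set: HTTPS is fine publicly; SSH is not.
rules = [
    {"name": "web", "port": 443,  "cidr": "0.0.0.0/0"},
    {"name": "ssh", "port": 22,   "cidr": "0.0.0.0/0"},   # exposed!
    {"name": "db",  "port": 5432, "cidr": "10.0.0.0/8"},
]
print([r["name"] for r in internet_exposed(rules)])  # ['ssh']
```

Answering the question for real requires full path analysis (a rule may be shadowed or chained through other devices), which is exactly where automated tooling earns its keep.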

Without a comprehensive cybersecurity solution that evaluates and identifies potential risks, it will be challenging to mitigate vulnerabilities and identify the downstream impacts from security lapses. Even if you believe you have every security measure you need in place across all of your cloud resources, you need a way to visualize resources, identify potential risks, and prioritize threat mitigation.

A Comprehensive Cloud Security Posture Management Solution

Solving a problem starts with identifying it. You need a way to visualize potential vulnerabilities across your networks and cloud resources.

A Cloud Security Posture Management (CSPM) solution will identify vulnerabilities such as misconfigurations, unprotected APIs, and inadequate access controls, and will flag changes to security policies. This helps you better understand exposure risks, create more robust cloud segmentation policies, and evaluate all of your cloud vulnerabilities.

Many CSPM solutions, however, only present their findings in static, tabular form. It can be challenging to understand relationships and gain full awareness of the interconnectivity between cloud resources. Beyond just monitoring traffic, security teams also need to see how instances reach the cloud, which security points they pass through, and which ports and protocols apply.

RedSeal Classic identifies what’s on your network environments and how it’s all connected. This helps you validate security policies and prioritize potential vulnerabilities. RedSeal Classic can evaluate AWS, Azure, Google Cloud, and Oracle Cloud environments along with Layers 2, 3, 4, and 7 in your physical networks for application-based policies and endpoint information from multiple sources.

RedSeal Stratus allows users to visualize their AWS cloud and Elastic Kubernetes Service (EKS) inventory. We’re currently offering an Early Adopters program for RedSeal Stratus, our SaaS-based CSPM, including a concierge onboarding service, so you can see the benefits first-hand.

To learn more about how RedSeal can help you see how your environment is connected and what’s at risk, request a demo today.

The Eyes Have It: Six Commonly Overlooked Cybersecurity Threats

It’s been a banner year for cybersecurity threats. According to the Identity Theft Resource Center (ITRC), the number of breaches reported as of September 30, 2021, already exceeds the total for all of 2020. And while rapid shifts to remote and hybrid work are partly responsible for this increase, attackers are also taking the opportunity to expand their efforts and find new ways to confuse security tools, confound infosec defenders, and compromise critical services.

The result? Even with a focus on security, businesses often overlook cybersecurity threats that could cause substantial harm. Here’s a look at six commonly overlooked concerns and what companies can do to mitigate the risk.

The State of Cybersecurity in 2021

In many respects, 2021 has marked a return to form for attackers — threats such as phishing and ransomware are on the rise, as are the use of advanced persistent threats (APTs) to conduct reconnaissance and collect data. The result is a familiar landscape for information security professionals: Teams need to establish and maintain defensive systems capable of detecting, identifying, and removing common threats.

But there’s also an evolution of attacker efforts. Not only are they broadening their horizons, but they’re also selecting new targets: Small and midsize businesses now account for more than 70 percent of all attacks. With many of these businesses now storing valuable personal and financial data but often lacking specialized IT teams and robust infrastructure, attackers are more likely to get in — and get out — without being noticed.

The result is a changing security landscape that requires both active observation and robust response from IT teams. Unfortunately, continual monitoring for common threats often shifts the focus to the growing forest of technology threats — and leaves companies struggling to see the trees.

Six Overlooked Security Threats

Despite best efforts, it’s easy for teams to overlook cybersecurity vulnerabilities. Six of the most commonly neglected threats include:

1. Ineffective Encryption

Encryption remains a front-line defense against both familiar and overlooked security threats. If attackers can’t use data they steal, its value to them is significantly reduced. The challenge? Many businesses still rely on outdated encryption models that are easily circumvented or fail to consider the continuous movement of data across internal networks and external connections.
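
One concrete way outdated encryption creeps in is a permissive transport-layer configuration. As a minimal sketch using Python’s standard `ssl` module, pinning a modern protocol floor ensures legacy TLS 1.0/1.1 connections are refused:

```python
import ssl

# Build a client context with secure defaults, then explicitly refuse
# legacy protocol versions (TLS 1.0 and 1.1 are long deprecated).
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

This addresses only data in transit; a full answer to the problem described above also covers at-rest encryption and key rotation, which are outside this snippet’s scope.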

2. Open Source Solutions

Open source tools and application programming interfaces (APIs) are great ways for companies to reduce the work required to build new apps and services. But there is a caveat. These open solutions may contain critical vulnerabilities that could be exploited to compromise critical data.

3. Phishing 2.0

While phishing efforts remain popular, attackers now realize the need for innovation as businesses become more security-savvy. As a result, the quality of phishing emails has increased substantially over the past few years. Gone are the obvious grammar and spelling mistakes. Instead, they’ve been replaced with socially-engineered data and details designed to fool even experienced team members.

4. IoT Interconnection

The Internet of Things (IoT) offers a way to connect mobile devices, sensors, and monitoring to help streamline operations. But this same interconnection creates an increased attack surface that provides malicious actors multiple points of compromise.

5. Malvertisements

Malvertising — the process of using online ads to spread malware — is once again on the rise. By injecting malicious ads into legitimate ad networks, attackers can compromise even well-defended networks to capture user behavior and log keystrokes.

6. Invisible Assets

What you don’t see can hurt you. This is especially problematic as companies expand into multiple cloud networks. More devices and apps mean less visibility, which in turn increases the chance of a successful attack.

Potential Harms of Unseen Threats

The potential harms of unseen threats vary with their nature, depth, and scale. In general, however, businesses face three broad harms if attacks are successful.

Operational Impacts

First up are operational impacts. Consider the SolarWinds attack reported in late 2020. Attackers had compromised the company’s systems much earlier, giving them time to conduct significant reconnaissance and eventually exploit SolarWinds’ IT management platform, which more than 33,000 companies use. As a result, more than 18,000 companies were rendered vulnerable to cyberattacks and had to interrupt operations temporarily to get systems back on track.

Compromised Compliance

The next potential harm of unseen threats is compromised compliance. If companies don’t have processes and procedures to detect and mitigate attacks ASAP, they may fail to meet security due diligence obligations as outlined in compliance regulations. Sanctions or fines can result.

Reputation Damage

Finally, unseen threats can lead to severe reputation damage. While customers are now willing to share their personal and financial data if businesses can offer increased personalization and improved service, they also have no patience for companies that lose or misuse this information. If attacks go undetected and consumer data is compromised, your business reputation may be irreparably damaged.

Four Steps to Mitigate Risk

While it’s impossible to predict every potential threat to your network — or account for the evolution of attack vectors — there are four steps companies can take to mitigate cybersecurity risk.

1. Discover your assets. What services and software are on your network? How do these solutions connect and interact with other operations? Locally? At scale? Complete asset analysis helps you discover what you have so you can protect what matters.

2. Conduct a vulnerability assessment. Next, you need to determine where your assets are vulnerable with an in-depth scan of all interconnected resources. This provides both increased visibility of detected assets and can also help uncover “blind spots” that need attention.

3. Triage your findings. Prioritization is the third step in this risk mitigation process. By considering potential severity and asset value along with upstream and downstream access requirements, your teams can prioritize defensive efforts.

4. Remediate your issues. Finally, you need a plan to remediate and mitigate overlooked issues. In practice, this includes the identification of precise access paths and devices that require updating or adjustment to isolate, contain and eliminate potential threats.
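
Step 3 above (triage) can be sketched as a simple scoring pass: rank each finding by severity weighted by asset value. The assets and the 1-10 scores below are hypothetical:

```python
# Hypothetical findings from a vulnerability assessment.
findings = [
    {"asset": "hr-db",     "severity": 9, "value": 10},
    {"asset": "test-vm",   "severity": 9, "value": 2},
    {"asset": "web-front", "severity": 6, "value": 8},
]

# Priority = severity x asset value; same severity, very different urgency.
for f in findings:
    f["priority"] = f["severity"] * f["value"]

ranked = sorted(findings, key=lambda f: f["priority"], reverse=True)
print([f["asset"] for f in ranked])  # ['hr-db', 'web-front', 'test-vm']
```

Real triage models also factor in exploitability and the upstream/downstream access paths mentioned above, but even this crude product shows why a critical finding on a throwaway VM can rank below a moderate one on the HR database.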

Keeping Your Eyes on the Prize

The goal of any infosec effort? To defend networks, services, and people from harm. Unfortunately, traditional tools can’t keep up with the volume and variety of cyberattacks in today’s environment. To maximize protection and stay ahead of potential threats, organizations need to boost visibility with vulnerability best practices that help teams zero in on overlooked cybersecurity threats.

See more to secure more: Learn more about Network Vulnerability Best Practices with RedSeal.

Where is the new “Security Stack” hiding?

Security challenges resulting from migrating the security stack to the cloud

The days of the traditional security stack are numbered, brought on by the maturity of shared resource computing and the rapid migration to the public cloud due to the COVID-19 pandemic. This blog will explore a brief history of fortification, its impact on the early internet security architectures, and today’s challenges. I’ll conclude with a few suggestions that every security professional should consider.

From the beginning, humans used cave dwellings to protect what they valued, and they have long considered, planned, and implemented various methods of fortification. City walls built around valuable, trusted assets are commonplace throughout early history. Fortification walls protected individuals, tribes, and countries, and could be made more secure by adding layers. These extra layers of defense increased protection through what is known as “defense in depth,” whereby a compromise of one layer would still hinder the attacker’s further advance or retreat.

Fast forward to the late 20th century: many of the Requests for Comments (RFCs) drafted then outlined the internet’s foundation by focusing on moving datagrams from point A to point B. The primary concerns were redundancy, resiliency, and reliable delivery of information. However, in the last few years of the 20th century, three essential security concepts were explored: confidentiality, integrity, and availability, known as the “CIA Triad.” Think of CIA as security that attempts to ensure information from the sender can:

  1. be read only by the intended receiver
  2. arrive without being changed or tampered with in transit
  3. reach the intended audience when needed
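
The integrity leg of the triad (point 2) has a compact illustration in Python’s standard library: a keyed MAC lets the receiver detect any in-transit tampering, assuming sender and receiver share a secret key (the key and message below are placeholders):

```python
import hashlib
import hmac

key = b"shared-secret"  # placeholder; real keys come from a key exchange

def sign(message):
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

msg = b"transfer $100 to account 42"
tag = sign(msg)

# Receiver recomputes the tag; compare_digest avoids timing side channels.
print(hmac.compare_digest(tag, sign(msg)))                           # True
print(hmac.compare_digest(tag, sign(b"transfer $9999 to account")))  # False
```

Confidentiality (point 1) needs encryption on top of this, and availability (point 3) is an operational property; no single primitive covers all three, which is the triad’s whole point.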

The 21st century brought a flurry of security tools and technologies modeled on those ancient fortified city walls. These defense-in-depth architectures often made the incorrect assumption that data inherited implicit trust based on location. For instance, data inside a corporate network was not scrutinized as closely as data outside it. These initial security tools, the “Security Stack,” were often placed at the ingress/egress points of the network to inspect, analyze, prioritize, route, and scan for nefarious activities or threats from outside the network perimeter.

The problem with relying on perimeter-based security alone is people. People have always been migratory, traveling beyond the city walls. Speaking for myself, I have worked remotely, assisting companies with network security for 20+ years. As a “road warrior,” my network connections have come from hotels, public hotspots, and client networks, all traversing untrusted networks. To prevent unauthorized access, my company has had to apply additional security controls to allow me to connect successfully behind the “security stack.”

Between 2006 and 2010, the concept of shared computing resources took hold, and the promise of more computing power for less cost fueled a steady adoption rate over the next decade. Cloud service providers (CSPs) like Amazon, Microsoft, Google, and Oracle saw a steady, predictable increase in the use of shared resources located within a CSP’s network, a.k.a. the “public cloud.” However, with the advent of cloud computing, the lines between trusted and untrusted networks were further obscured, and the need for visibility into and across disparate networks became more evident.

2020 brought with it a pandemic that forced hundreds of millions of employees to work remotely and connect from untrusted sources, in many cases bypassing the traditional security stacks intended to provide defense in depth. Corporations faced an unforeseen lack of visibility, and conventional tools failed. This rapid migration of corporate workloads (applications) to cloud computing, combined with the disintegration of the traditional security stack, has resulted in an environment of ever-increasing attacks and ransomware.

Post-pandemic, the traditional security stack has dispersed. Some components still reside in on-premises networks, some in public or private clouds, some at the network perimeter edge, and some on the endpoint device. The critical lesson is that the "edge" is no longer a boundary of location; it is now a boundary of information. Data is the new edge.

To achieve security in modern networks, visibility is more critical than ever. Complex architectures built on IaaS, PaaS, SaaS, and on-premises resources, combined with new wide-area transport systems like SD-WAN and a myriad of security filters in the form of cloud regions, accounts, VPCs/VNets, network ACLs, security groups, SASE (Secure Access Service Edge), and transit gateways, are indeed the new, modern "Security Stack." To secure this modern-day infrastructure, a corporation needs unparalleled visibility, awareness of where vulnerabilities exist, and an understanding of connectivity across all clouds and on-premises networks.

Finally, here is a message for the CISO or security professional searching for solutions. Ask yourself the following questions and seek answers for any you are unsure of.

  1. How well do your security teams understand cloud inventory?
  2. How do you check to see if resources are unintentionally exposed to the internet?
  3. How do you validate cloud segmentation policies and remediate violations?
  4. How do you prioritize vulnerabilities in a public cloud environment?

For tips on how to “Safeguard Your Cloud Journey with a Comprehensive Security Solution” download our data sheet.

Podcast: How network modeling helps operations and security teams mitigate risk

CyberScoop Radio | June 17, 2019

The Network Dimension in Vulnerability Management

By Kes Jecius, RedSeal Senior Consulting Engineer

The Center for Internet Security’s (CIS) third control for implementing a cybersecurity program is to practice continuous vulnerability management. Organizations that identify and remediate vulnerabilities on an ongoing basis significantly reduce the window of opportunity for attackers. This third control assumes you’ve implemented the first two CIS framework controls: understanding both the hardware that makes up your infrastructure and the software that runs on that infrastructure.

The first two controls are important to your vulnerability management program. When you know what hardware assets you have, you can validate that you’re scanning all of them for vulnerabilities. As you update your IT inventory, you can include new assets in the scanning cycle and remove assets that no longer need to be scanned. And when you know what software runs on your infrastructure, you can understand which assets are more important. An asset’s importance is key to identifying what should be remediated first.

Most vulnerability scanning platforms allow you to rank the importance of systems being scanned. They prioritize vulnerabilities by applying the CVSS (Common Vulnerability Scoring System) score for each vulnerability on an asset and couple it with the asset’s importance to develop a risk score.
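A minimal sketch of that scoring model, assuming a simple multiplicative weighting (the asset names, importance ranks, and formula are illustrative assumptions, not any particular scanner's actual algorithm):

```python
# Illustrative sketch: combine a CVSS base score with an asset
# importance rank into a risk score. The multiplicative weighting
# is an assumption, not a specific vendor's formula.

def risk_score(cvss: float, importance: int) -> float:
    """CVSS base score (0.0-10.0) times importance rank (1=low, 5=critical)."""
    return cvss * importance

findings = [
    {"asset": "payroll-db", "cvss": 9.8, "importance": 5},
    {"asset": "test-vm",    "cvss": 9.8, "importance": 1},
]

# Same vulnerability, very different priority once importance is applied.
ranked = sorted(findings,
                key=lambda f: risk_score(f["cvss"], f["importance"]),
                reverse=True)
for f in ranked:
    print(f["asset"], risk_score(f["cvss"], f["importance"]))
```

The point of the sketch is that two identical CVSS 9.8 findings end up far apart once asset importance is factored in.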

The dimension missing from this risk scoring process is whether attackers can actually reach the asset to compromise it. You can remediate vulnerabilities diligently and still be exposed if the ones you fix first aren’t accessible to an attacker in the first place, while reachable ones wait in the queue; an asset may already be protected by firewalls and other network security measures. Knowledge of the network security controls already deployed allows a vulnerability management program to improve its prioritization and focus on high-value assets with exposed vulnerabilities that can be reached from an attacker’s location.

Other vulnerability scanning and risk rating platforms use threat management data to augment their vulnerability risk scoring process. While threat management data (exploits actively in use across the world) adds value, it doesn’t incorporate the network accessibility dimension into evaluating that risk.

As you work to improve your vulnerability management program, it’s best to use all the information available to focus remediation efforts. Beyond CVSS scores, the following elements can improve most programs:

  • Information from network teams on new and removed subnets (IP address spaces), to make sure all areas of the infrastructure are being scanned.
  • Information from systems teams on which systems are most important to your organization.
  • Network information in the risk scoring process, to determine whether these systems are open to compromise.
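Putting those elements together, a prioritization pass might look like this sketch; the reachability flags are invented inputs standing in for what network modeling would supply, and the assets and scores are hypothetical:

```python
# Hypothetical sketch: fold network reachability into vulnerability
# prioritization. Reachability flags are invented stand-ins for data
# a network modeling tool would provide.

def prioritized(findings):
    """Sort findings so reachable, high-risk assets come first."""
    return sorted(
        findings,
        key=lambda f: (f["reachable"], f["cvss"] * f["importance"]),
        reverse=True,
    )

findings = [
    {"asset": "dmz-web", "cvss": 7.5, "importance": 3, "reachable": True},
    {"asset": "core-db", "cvss": 9.8, "importance": 5, "reachable": False},
    {"asset": "vpn-gw",  "cvss": 8.1, "importance": 4, "reachable": True},
]

for f in prioritized(findings):
    print(f["asset"])
```

Note how the unreachable database, despite the highest raw risk score, drops below the two assets an attacker can actually reach.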

Although no single product can be the solution for implementing and managing all CIS controls, look for products that provide value in more than one area and integrate with your other security solutions. RedSeal, for example, is a foundational solution that provides significant value for meeting your vulnerability management goals by providing network context to existing vulnerability scanning information. Additionally, RedSeal provides pre-built integrations with many security products and easy integration with others via its REST API interface.

Download the RedSeal CIS Controls Solution Brief to find out more about how RedSeal can help you implement your program using the CIS Controls.

Visibility of IT Assets for Your Cybersecurity Program

By Kes Jecius, RedSeal Senior Consulting Engineer

The Center for Internet Security’s (CIS) first control for implementing a cybersecurity program is to understand and manage the hardware assets that make up your IT infrastructure. These hardware assets consist of network devices, servers, workstations, and other computing platforms. This is a difficult goal to achieve, further complicated by the increasing use of virtualized assets, such as public and/or private cloud, Software as a Service (SaaS), and virtualized servers.

In the past, inventorying these assets was relatively simple. When it came in the door, a physical device was given an inventory tag and entered into an asset management system. The asset management system was controlled by the finance group, primarily so assets could be depreciated for accounting records. As the IT world matured, we saw the advent of virtualized systems, where a single box could be partitioned into multiple systems or devices. Further evolution in IT technology brought us cloud-based technologies, where a company no longer has a physical box to inventory: network services are configured and servers are created dynamically. Hence the daunting task of creating and managing the IT inventory of any company.

CIS recognizes this and recommends using both active and passive discovery tools to assist. Since no human can keep up with this inventory of physical and virtual devices, discovery tools can help present an accurate picture of IT assets.

Active discovery tools leverage the network infrastructure to identify devices through some form of communication with each device. Network teams are generally opposed to these tools because they introduce extra network traffic. Tools that attempt to “ping” every possible IP address are inefficient, and they are often flagged as potential security risks, since this is the same behavior hackers use. Newer discovery strategies have evolved that are significantly more network-friendly yet do a good job of identifying the devices in your IT infrastructure. These newer, active discovery strategies target specific network IP addresses to gather information about a single device; when that information is processed, it can reveal information about other devices in the network.
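That targeted approach can be sketched minimally: probe only known addresses on a couple of common service ports rather than sweeping the whole address space. The addresses below use the reserved documentation range and, like the port list, are placeholders:

```python
# Minimal sketch of targeted active discovery: probe only known
# addresses on a few common ports instead of pinging every possible IP.
# Addresses (documentation range) and ports are placeholder assumptions.
import socket

TARGETS = ["192.0.2.10", "192.0.2.20"]  # placeholder addresses
PORTS = [22, 443]                       # SSH, HTTPS

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Per-host list of responsive ports; empty list = nothing answered.
inventory = {host: [p for p in PORTS if probe(host, p)] for host in TARGETS}
```

A real deployment would feed the discovered inventory back into the asset management system rather than printing it, and would likely use vendor-specific protocols (SNMP, WMI, APIs) instead of bare TCP probes.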

Passive discovery tools are placed on the network to listen and parse traffic to identify all devices. Passive discovery tools do not add significantly to network traffic, but they need to be placed correctly to capture data. Some computing devices may never be identified because they are infrequently used, or their traffic never passes by a passive discovery tool. Newer passive discovery tools can integrate information with active discovery tools.

Most organizations need a combination of discovery tools. Active discovery tools should minimize their impact on the network and on the devices they communicate with. Passive discovery tools can uncover unknown devices. IT groups can then perform a gap analysis between the two tools’ results to assess what is under management and what isn’t (frequently referred to as Shadow IT). This combined approach provides the best strategy for understanding and managing all the assets that make up an IT infrastructure.
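The gap analysis described above boils down to set comparisons between the two inventories; the device names here are invented for illustration:

```python
# Sketch of active/passive gap analysis as set differences.
# Device names are illustrative placeholders.
active_inventory  = {"router-1", "server-a", "server-b"}
passive_inventory = {"router-1", "server-b", "printer-x", "iot-cam-3"}

shadow_it = passive_inventory - active_inventory  # seen on the wire, not managed
silent    = active_inventory - passive_inventory  # managed, but no traffic observed
managed   = active_inventory & passive_inventory  # confirmed by both methods

print(sorted(shadow_it))  # ['iot-cam-3', 'printer-x']
```

Devices in `shadow_it` are candidates for onboarding into management; devices in `silent` may need a passive sensor placed closer to their traffic.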

Without this first step, having visibility into what these IT assets are and how they are connected, the remaining CIS controls can only be partially effective in maturing your cybersecurity strategy.

Although no single product can be the solution for implementing and managing all CIS controls, look for products that provide value in more than one area and integrate with your other security solutions. RedSeal, for example, is a foundational solution that provides significant value for meeting the first control, while providing benefit to implementing many of the other controls that make up the CIS Control framework. Additionally, RedSeal provides pre-built integrations with many security products and easy integration with others via its REST API interface.

Download the RedSeal CIS Controls Solution Brief to find out more about how RedSeal can help you implement your program using the CIS Controls.

Using the CIS Top 20 Controls to Implement Your Cybersecurity Program

By Kes Jecius, Senior Consulting Engineer

I have the privilege of working with security groups at many different enterprise companies. Each of them is being bombarded by many different vendors who offer security solutions. No surprise, the common estimate is that there are approximately 2,000 vendors offering different products and services to these companies.

Each of these companies struggles with determining how to implement an effective cybersecurity program. This is made more difficult by vendors’ differing views on what is most important. On top of this, companies are dealing with internal and external requirements, such as PCI, SOX, HIPAA and GDPR.

The Center for Internet Security (www.cisecurity.org) offers a potential solution in the form of a framework for implementing an effective cybersecurity program. CIS defines 20 controls that organizations should implement when establishing a cybersecurity program. These controls fall into three categories:

  • Basic – Six basic controls that every organization should address first. Implementation of solutions in these 6 areas forms the foundation of every cybersecurity program.
  • Foundational – Ten additional controls that build upon the foundational elements. Think of these as secondary initiatives once your organization has established a good foundation.
  • Organizational – Four additional controls that address organizational processes around your cybersecurity program.

Most organizations have implemented elements from some controls in the form of point security products. But many don’t recognize the importance of implementing the basic controls before moving on to the foundational controls – and their cybersecurity programs suffer. By organizing your efforts using CIS’s framework, you can significantly improve your company’s cyber defenses, while making intelligent decisions on the next area for review and improvement.

Although no single product can be the solution for implementing and managing all CIS controls, look for products that provide value in more than one area and integrate with your other security solutions. RedSeal, for example, is a platform solution that provides significant value in 7 of the 20 control areas and supporting benefit for an additional 10 controls. Additionally, RedSeal has pre-built integrations with many security products and easy integration with others via its REST API interface.

Download the RedSeal CIS Controls Solution Brief to find out more about how RedSeal can help you implement your program using the CIS Controls.

 

The Bleed Goes On

Some people are surprised that Heartbleed is still out there, 3 years on, as you can read here. What this illustrates is two important truths of security, depending on whether you see the glass half full or half empty.

One perspective is that, once again, we know what to do, but failed to do it. Heartbleed is well understood and directly patchable. Why haven’t we eradicated it by now? The problem is that the Internet is big. Calling the Internet an “organization” would be a stretch; it’s larger, more diverse, and harder to control than any one organization. But if you’ve tried to manage vulnerabilities at any normal organization, even a global-scale one, you have a pretty good idea how hard it gets to eliminate any one thing.

It’s like Zeno’s paradox: when you try to eradicate any one problem, you can fix half the instances in a short period of time. The trouble is that it takes about that long again to fix the next half of what remains, and that long again for the half after that. Once you’ve dealt with the easy stuff (well-known machines, with well-documented purpose, and a friendly owner in IT) it starts to get hard fast, for an array of reasons from the political to the technical. You can reduce the prevalence of a problem really quickly, but eradicating it takes near-infinite time. And the problem, of course, is that attackers will find whatever you miss; they can use automation to track down every defect. (That’s how researchers found there is still a lot of Heartbleed out there.) Any one instance you miss might open up access to far more important parts of your organization. It’s a chilling prospect, and it’s fundamental to the unfair fight in security: attackers only need one way in, while defenders need to cover all possible paths.
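The halving dynamic is easy to see numerically. This toy loop assumes a hypothetical starting count and that each remediation cycle fixes roughly half of what remains:

```python
# Toy model of remediation-by-halving: starting count is hypothetical.
# Each cycle fixes roughly half of the remaining instances, so the
# count decays geometrically but eradication takes a long tail.
instances = 10_000  # hypothetical starting count of vulnerable hosts
cycle = 0
while instances >= 1:
    instances //= 2  # each cycle remediates about half of what remains
    cycle += 1
print(cycle)  # 14 cycles just to drive the count below one
```

Two cycles clear 75% of the problem, yet it takes fourteen to finish; that tail is where attackers hunt.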

To flip to the positive perspective, perhaps the remaining Heartbleed instances are not important – that is, it’s possible that we prioritized well as a community, and only left the unimportant instances dangling for all this time.  I know first-hand that major banks and critical infrastructure companies scrambled to stamp out Heartbleed from their critical servers as fast as they could – it was impressive.  So perhaps we fixed the most important gaps first, and left until later any assets that are too hard to reach, or better yet, have no useful access to anything else after they are compromised.  This would be great if it were true.  The question is, how would we know?

The answer is obvious – we’d need to assess each instance, in context, to understand which instances must get fixed, and which can be deferred until later, or perhaps until after we move on to the next fire drill, and the fire drill after that. The security game is a never-ending arms race, and so we always have to be responsive and dynamic as the rules of the game change.  So how would we ever know if the stuff we deferred from last quarter’s crises is more important or less important than this quarter’s?  Only automated prioritization of all your defensive gaps can tell you.