
Exploring the Implications of the New National Cyber Strategy: Insights from Security Experts

In March 2023, the Biden Administration announced the National Cybersecurity Strategy, which takes a more collaborative and proactive approach.

RedSeal teamed up with cybersecurity experts Richard Clarke, founder and CEO of Good Harbor Security Risk Management, and Admiral Mark Montgomery (ret.), senior director of the Center on Cyber and Technology Innovation, to discuss the latest strategy. Both have helped develop previous national cybersecurity strategies, so we couldn’t be more privileged to hear their take on the newest strategy’s impact on cybersecurity regulations. This blog covers the importance of harmonizing the rules, trends in resilience planning, the role of cyber insurance, the transfer of liability, and the need to keep pace with AI and quantum computing. Keep reading to learn more, or click here to listen in.

Expanding Cybersecurity Regulations

Although this is the first time the administration has given a clear and intentional nod to cybersecurity regulations, the federal government has regulated every other major sector for over 20 years, so this step makes sense. Clarke points out that sectors with heavy cyber regulation have fared better over the past two decades than those without. Montgomery predicts that most changes will happen in areas where regulations are lagging, such as water, oil pipelines, and railroads.

But many agencies don’t have the resources for effective enforcement. The government must therefore use a combination of regulations, incentives, and collaboration to achieve meaningful outcomes.

The Importance of Harmonizing the Rules

The new strategy aims to “expand the use of minimum cybersecurity requirements in critical sectors to ensure national security and public safety and harmonize regulations to reduce the burden of compliance.” But the expansion of cybersecurity regulations must come hand in hand with better coordination.

Clarke observes that today’s regulations aren’t well coordinated. Agencies must share lessons learned and align their approaches. The private sector will benefit from standardization across regulations, which streamlines compliance, reduces cybersecurity complexity, and lowers costs.

However, coordination and standardization don’t mean a one-size-fits-all solution. Agencies must tailor their regulations to each specific sector. The good news is that the same network security technologies apply to any industry, and knowledge-sharing across verticals should be encouraged. For instance, we can take the high standards of the defense industry and apply them to healthcare and transportation without reinventing the wheel.

A Focus on Resilience Planning

The cybersecurity definition of resilience has evolved as the world has become more digital. We will get hacked; it is a certainty. Instead of only looking to protect systems from attacks, regulatory mandates must also focus on prompt recovery. The government should also hire industry experts to assess digital resilience plans and stress-test them.

Cyber resilience must be applied to national security as well as private business. Transportation infrastructure must be able to operate without extended interruption. The economy (e.g., the power grid and financial systems) is our greatest weapon, and must keep functioning during conflicts and crises. Lastly, we must have the tools to quickly and effectively battle disinformation, a new frontier in the fight against nation-state threats.

The Impact of the Internet of Things (IoT)

Regulations must also cover IoT devices, but focus on the networks instead of the thousands of individual endpoints. Clarke suggests that organizations install sensors on their networks and conduct regular vulnerability scans. Montgomery adds to this, emphasizing the need for certification and labeling regimes as part of a long-term plan to make vendors responsible for their products’ performance and security.

Shifting Liability to Vendors

Speaking of making vendors responsible for their products’ performance and security, the new strategy intends to transfer liability to software vendors to promote secure development practices, shift the consequences of poor cybersecurity away from the most vulnerable, and make our digital ecosystem more trustworthy overall.

Clarke agrees that this approach is necessary but holds that the current regulatory framework can’t support the legal implementation. IT lobbyists, some of the most well-funded and influential players on Capitol Hill, will make enforcement of such a shift an uphill battle. Clarke believes that, unfortunately, this hard but necessary shift may not happen until a tragedy shakes the nation and leaves it as the only way forward.

Keeping Pace with AI and Quantum Computing

We, as a nation, have many issues to consider around AI, beyond security alone. Clarke points out that we must establish rules about transparency: What’s the decision-making process? How did the AI reach a conclusion? Is it searching an erroneous database? Is the outcome biased? Large language models (LLMs) are constantly learning, and adversaries can poison them to influence our decision-making.

While AI is the big problem of the moment, we can’t afford to continue ignoring quantum encryption challenges, cautions Montgomery. We have already fallen behind and must spend a substantial sum today to prepare for what’s in store in 10 years. We must start building quantum security into our systems instead of attempting to jury-rig something later, adds Clarke.

The Rise of Cyber Insurance and Real-time Monitoring

Montgomery predicts that, if run properly, the cyber insurance market can bring these pieces together. Insurance companies may, for instance, encourage proactive measures by reducing premiums for organizations that invest in cybersecurity upfront and establish a track record of reliability and resiliency.

But organizations must prove they’re continuously protected instead of merely showing “point in time” compliance to take advantage of lower premiums. Real-time monitoring will play a critical role in lowering premiums and maintaining cybersecurity.

A Step in the Right Direction

The new National Cyber Strategy introduces timely and much-needed shifts. We must harmonize regulations to maximize the benefits without overburdening the private and public sectors.

In anticipation of the impending changes, organizations must approach their cybersecurity strategies proactively and implement the right tools and services to stay compliant. These include a comprehensive network security solution for complete visibility and ongoing monitoring, cloud security tools to protect all IT assets, and professional services to ensure airtight implementation and continuous compliance.

RedSeal has extensive expertise and experience in delivering government cybersecurity and compliance solutions. Get in touch to see how we can help you stay ahead in today’s fast-evolving digital environment.

Accidental Cloud Exposure – A Real Challenge

The recent disclosure that Toyota accidentally left customer data exposed for a decade is pretty startling, but it can serve as a wake-up call about how cloud problems can hide in plain sight.

It’s not news that humans make mistakes – security has always been bedeviled by users and the often foolish choices they make. Administrators are human too, of course, and so mistakes creep into our networks and applications. This too is a perennial problem. What’s different in the cloud is the way such problems are hard to see, and easy to live with until something bad happens.

Cloud isn’t just “someone else’s computer”, as the old joke goes – it’s also all virtual infrastructure. If you’ve never seen how cloud infrastructure is really built and managed, you may not realize how inscrutable it all is – think of it like a computer in an old movie from the 1970s, all blinking lights and switches on the outside, with no way to see what is really happening inside. These days, we are used to visual computers and colorful phones, where we can see what we are doing. Cloud infrastructure is not like that – or at least, not if you use the standard management interfaces, which are frustrating, opaque, and vendor-specific. Are there ways to escape the lock-in to your specific cloud vendor? Sure – inventions like Kubernetes free you up, but the price is even worse visibility as you drive everything through shell scripts, CLI commands, and terminals. The 1970s computer has moved up to the 1980s green screen, but it’s a far cry from anything visual.

I don’t mean to just pick nits with the old-world interfaces of cloud – this isn’t a debate about style, it’s a problem with real world consequences, especially for security. You can’t see through a storm cloud in the sky, and similarly, you really can’t see what’s going on inside most cloud applications today, let alone ensure that everything is configured correctly. Sure, there are compliance checkers that can see how individual settings are configured, but trusting these is like saying a piece of music is enjoyable because every note was tuned exactly – that rather misses the big picture of what makes music good, or what makes a cloud application secure.

This is why you need to be able to separate security checking from the CI/CD pipelines used to set up and run cloud infrastructure. The much-hyped idea of DevSecOps has proven to be a myth – embedding security into DevOps teams is no more successful than embedding journalists with platoons of soldiers. The two tribes don’t see the world the same way, don’t have the same objectives, and largely just frustrate each other’s goals.

Central security has to be able to build the big picture, and needs to check the ultimate result of what the organization has set up. Ideas like “shift left” are good, but they do not cover the whole picture, as the Toyota exposure makes clear. Every detail of the apps was working, quite likely passing all kinds of rigorous low-level checks. But just like checking whether each note is tuned correctly while never listening to the piece as a whole, Toyota lost track of the big picture, with all the embarrassment that goes with admitting a ten-year pattern of unintended exposure.

Solving this is the motivating mission at RedSeal. We know what it takes to build a big picture view, and then assess exposure at a higher level, rather than getting stuck in implementation details. It’s the only way to make sure the song plays well, or the application is built out sensibly. This is why we build everything starting from a map – you can’t secure what you can’t see. This map is complete, end-to-end, covering what you have in the cloud and what you keep on your premises. We can then visually overlay exposure, so you get an immediate, clear picture of whether you have left open access to things that surprise you. We can give you detailed, hop-by-hop explanation of how that exposure works, so that even people who are not cloud gurus can understand what has been left open. We can then prioritize vulnerabilities based on this exposure, and on lateral movement. And finally, we can boil it all up into a score that senior management can appreciate and track, without getting lost in the details. As Toyota found to their cost, there are an awful lot of details, and it’s all too easy to lose the big picture.

What Is a Cloud-Native Application Protection Platform (CNAPP)? An Extension of CSPM

Modern businesses are increasingly storing data in the cloud, and for good reason — to increase agility and cut costs.

But as more data and applications migrate to the cloud, the risk of data and systems being exposed increases. Conventional security methods aren’t equipped to manage containers and serverless environments, so gaps, silos, and overall security complexity increase.

This is where the Cloud-Native Application Protection Platform (CNAPP), an extension of Cloud Security Posture Management (CSPM), excels. This new cloud platform combines the features of CSPM, Cloud Infrastructure Entitlement Management (CIEM), Cloud Workload Protection Platforms (CWPPs), CI/CD security, and other capabilities into a unified, end-to-end solution to secure cloud-native applications across the full application lifecycle.

Where CNAPP/CSPM Vendors Fall Short

It’s important to point out that many CNAPP vendors focus on providing security measures, such as CIS compliance checks or a basic “connectivity” view and segmentation to protect an organization’s applications and infrastructure in the cloud. These measures help prevent malicious actors from gaining unauthorized access to an organization’s resources, but they don’t necessarily provide visibility into potential exposures that may exist in an application’s design or configuration, thus providing a false sense of security.

Most vendors can correlate resources to compliance or identity violations, but the network context of these solutions is often limited, leading to a lack of visibility into the hidden attack surface. This results in insights that are often irrelevant and unactionable, causing security teams to chase false positives or negatives and reducing their overall effectiveness. Additionally, the shortcomings of these solutions can cause DevOps teams to lose trust in the security measures in place, hindering their confidence in the infrastructure.

The most critical gap is that CNAPP vendors lack the ability to calculate net effective reachability: the network’s overall end-to-end connectivity, including potential points of failure or bottlenecks. In simple terms, they cannot accurately determine whether critical resources are exposed to the Internet. Without this information, security teams cannot identify the root cause of a problem or effectively prioritize potential threats. The result is inefficiency and delay in the security response process, leaving the company vulnerable to attack and flooding DevOps teams with false positives and negatives.
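To make the idea concrete, here is a minimal sketch (not RedSeal’s actual algorithm; all segment names are hypothetical) of why net effective reachability is a transitive property: it must be computed over every permitted hop combined, not read off any single policy in isolation.

```python
from collections import deque

# Hypothetical model: nodes are network segments, edges are the access
# actually permitted once every policy point on the path is combined.
allowed_hops = {
    "internet": ["waf", "vpn-gw"],
    "waf": ["web-tier"],
    "web-tier": ["app-tier"],
    "app-tier": ["db-tier"],   # e.g., an overlooked route or rule
    "vpn-gw": ["mgmt-subnet"],
}

def reachable_from(source, graph):
    """Net effective reachability is transitive: everything reachable
    via permitted hops, not just what the first-hop rules show."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

exposed = reachable_from("internet", allowed_hops)
print("db-tier exposed to internet:", "db-tier" in exposed)  # True
```

A first-hop review of the internet-facing rules alone would show only the WAF and VPN gateway exposed; the transitive computation reveals that the database tier is reachable too.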

To identify exposures, organizations need to conduct assessments that look for end-to-end access from the internet, the kind of access that raises the organization’s exposure to malicious activity exploiting weaknesses such as insufficient authentication or authorization, unvalidated input/output, SQL injection, cross-site scripting (XSS), insecure file uploads, and more.

What Is CNAPP?

CSPM is an automated set of security tools designed to identify security issues and compliance risks in cloud infrastructure.

CNAPPs consolidate the capabilities and functionalities offered by CSPM and CWPPs, providing centralized access to workload and configuration security capabilities. They help teams build, deploy, and run secure cloud-based apps in today’s heavily dynamic public cloud environments.

A CNAPP solution comes with a single control panel with extensive security features such as automation, security orchestration, identity entitlement management, and API identification and protection. In most cases, these capabilities are used to secure Kubernetes workloads.

How Does CNAPP Work?

CNAPP uses a set of technologies, such as runtime protection, network segmentation, and behavioral analytics, to secure cloud-native applications and services. CNAPP provides a holistic view of the security of cloud applications by monitoring and implementing security protocols across the entire cloud application profile.

CNAPP works by identifying the different components that exist in a cloud-native application, such as containers and microservices, and then applying security controls to every component. To do this, it uses runtime protection to monitor the behavior of the application and its components in real time. It leverages methods such as instrumentation to identify vulnerabilities in the application.

Also, CNAPP uses network segmentation to separate different parts of the application and reduce communication between them, thus reducing the attack surface. In addition, CNAPP includes features such as incident response and compliance management to help businesses respond quickly to security incidents, as well as ensure that apps and services comply with industry standards and regulations.

Why Is CNAPP Important?

Cloud-native application environments are quite complex. Teams have to deal with app workloads that continuously move between private and public clouds, built from a mix of open-source and custom-developed code. That code keeps changing as release cycles accelerate, with more features rolled into production and old code replaced with new.

To deal with the challenges of ensuring the security of highly dynamic environments, IT teams often have to put together multiple types of cloud security tools. The problem is that these tools offer a siloed, limited view of the app risk, increasing the company’s exposure to threats. DevSecOps teams often find themselves having a hard time manually interpreting information from multiple, disjointed solutions and responding quickly to them.

CNAPPs help address these challenges by combining the capabilities of different security tools into one platform to provide end-to-end cloud-native protection, allowing security teams to take a holistic approach to mitigate risk and maintain security and compliance posture.

CNAPP with RedSeal

The challenge most enterprises face is that they cannot get clear visibility of their entire network. Most networks are hybrid, with both public and private cloud environments alongside a physical network, which leaves teams with siloed visibility and raises security risks.

When CSPM, CWPPs, CIEM, and CI/CD security work together, companies can quickly get a glimpse of what is happening on their network, allowing IT teams to take immediate action.

RedSeal Cloud, a CNAPP solution, provides organizations with a view of their entire cloud framework to identify where key resources are located and a complete analysis of the system to identify where it’s exposed to attacks. RedSeal maps every path and checkpoint, and calculates the net effective reachability of all aspects of your cloud, enabling you to quickly pinpoint areas that require immediate action. Furthermore, it avoids false positives and negatives, and supports complex deployments with different cloud gateway and third-party firewall vendors.

The Right CNAPP Tool for Reliable Cloud Security Management

Ensuring the security of assets in the cloud has never been more important.

Companies can leverage CNAPP capabilities to secure and protect cloud-based applications, from deployment to integration, including regular maintenance and eventual end-of-life. That said, CNAPP solutions are not one-size-fits-all options but rather a combination of different vendor specialties under a single platform, providing single-pane-of-glass visibility to users.

Companies wanting to adopt CNAPPs should focus on how vendors interpret the underlying cloud networking infrastructure and the per-hop policies at every security policy point, including third-party devices, to identify any unintended exposure, and on how the solution interacts with other services, both on-premises and in the cloud.

In summary, every company should ask potential CNAPP vendors:

  • How do they uncover all attack paths to their critical resources and expose the hidden attack surface?
  • How do they calculate the net effective reachability to the critical resources on those paths?

RedSeal’s CNAPP solution, RedSeal Cloud, lets security teams know if critical cloud resources are exposed to risks, get a complete visualization of their cloud infrastructure, and obtain detailed reports about CIS compliance violations.

Want to know how you can stop unexpected exposure and bring all your cloud infrastructure into a single comprehensive visualization? Book a demo with our team to get started!

Tales from the Trenches: Vol 10 — You Don’t Know What You Don’t Know

Since 2004, RedSeal has helped our customers See and Secure their entire complex network. And while those customers may have understood the value of understanding their environment, how it’s connected, and what’s at risk, there is often an “Aha” moment when the true significance becomes clear. The stories of these moments are lore within the walls of RedSeal. But these tales so clearly illustrate the value of RedSeal beyond just theory that we think they’re worth sharing. In the words of our team in the field, the ones working directly with our customers, this blog series will share the moments where it all gets real.

In this edition of the series Michael Wilson, Senior Network Security Engineer, explains how RedSeal empowers customers to verify their contractors are following security best practices and have their organization’s best interest in mind.

You Don’t Know What You Don’t Know

In my customer’s environment, the network is segmented and managed by both the customer and several contracted partners. It is a difficult task to have visibility into an entire network that is distributed across several different contracted partners, let alone keep track of all of the devices and changes that can occur across a network. The adage of ‘you don’t know what you don’t know’ is very relevant in a situation like this. RedSeal has the ability to provide my customer with a single pane of glass to see all these network segments that are managed by different contracted partners.

The customer’s RedSeal deployment runs daily collection tasks, so the customer can see any changes that occur on their network from day to day. One morning, I logged into RedSeal and started my daily maintenance tasks, which include ensuring that data collections ran correctly and that analysis was performed successfully. I noticed an increase in device count, which was cause for investigation: new devices appearing in RedSeal without any new data collection tasks is a possible indicator of compromise.

I notified the customer and started to investigate. The changes had occurred in the customer’s SDWAN environment. This environment uses clusters to manage edge devices, and the customer has devices spread across many different locations. The environment is managed by one of the customer’s contracted organizations and had previously used 4 clusters to serve all the customer’s edge devices. The devices RedSeal discovered were 20 new clusters, bringing the total from 4 to 24. Once I started to arrange the new clusters on the map, I saw that they were connected in such a way that each served a specific geographic region of the customer’s environment. This indicated the contracted partner was making significant changes to the SDWAN environment and the new devices were likely not an indicator of compromise.

Once I determined that this was likely a planned network change, I asked the customer if they were aware of changes being planned and implemented on the network. They were not. I asked the customer to immediately verify whether the changes were planned, and the customer discovered that not only were the changes planned, but the customer had never been notified of them. This demonstrated a significant lack of communication between the customer and their contracted partners. I was able to use RedSeal not only to discover network changes, but also to expose a fundamental operational flaw in the customer’s entire workflow surrounding network changes. It gave the customer the ability ‘to know what they didn’t know’.

The risk that the customer was unknowingly accepting (and, by default, unable to mitigate or remove) through this lack of communication was that the contracted partner was making changes to a network that carries Payment Card Industry (PCI) data. By making changes without consulting the customer, the contracted partner was potentially exposing the customer to a disastrous breach of customer financial information. The contracted partner does not control the entire customer network, and changes in their segment may unknowingly open security holes in other parts of the network managed either by the customer directly or by another contracted partner. To top it off, the customer would have had no idea of this risk because they were unaware of what was happening on their own network. RedSeal was able to become the stopgap, identify that risk, and provide the information needed to make an informed and educated decision on which risks to accept, mitigate, or remove.

Interested in how RedSeal can help your team? Click here to set up a demo or an introductory call.

Top Reasons State and Local Governments Are Targeted in Cyberattacks

Ransomware attacks affected at least 948 U.S. government entities in 2019, and cyberattacks cost local and state governments over $18 billion in 2020. These agencies are prime targets: their dispersed nature, the complexity of their networks, and the vast amounts of valuable personal data they process and store all attract attackers, while limited budgets prevent them from staying current with the latest best practices.

Strengthening your defense starts with understanding the top reasons why threat actors choose to target state and local governments. Then, implement the latest technologies and best practices to protect your organization from attacks.

Reason 1: The Vast Number of Local and State Government Agencies

There are 89,004 local governments in the U.S., plus numerous special districts and school districts. That translates to 18.83 million state and local government employees, alongside 2.85 million civilian federal employees — each representing a potential target for threat actors.

Since it takes only one person to click on one malicious link or attachment to infect the entire system with ransomware, the large number of people who have access to sensitive data makes government entities prime targets for social engineering attacks.

Moreover, the dispersed nature of these networks makes it extremely challenging for government agencies to gain visibility of all the data and activities. When one agency suffers an attack, there are no procedures or methods to alert others, coordinate incident response plans, or prevent the same attack from happening to other entities.

Reason 2: These Agencies Process Valuable Personal Information

How much personal data have you shared with state and local government agencies? Somewhere in their dispersed systems reside your Social Security number, home address, phone number, driver’s license information, health records, and more. This information is attractive to cybercriminals because they can sell it on the dark web or use it for identity theft.

Many of these agencies also hire contractors and subcontractors to handle their computer systems or process user data. The more people with access to the data, the larger the attack surface — creating more opportunities for supply chain attacks, where criminals target less secure vendors to infiltrate their systems.

Without the know-how or resources to partition their data or implement access control, many government agencies leave their door wide open for criminals to access their entire database. All malicious actors have to do is target one of the many people who can access any part of their systems.

Reason 3: They Can’t Afford Security Experts and Advanced Tools

Almost 50 percent of local governments say their IT policies and procedures don’t align with industry best practices. One major hurdle is that they don’t have the budget to offer wages that compete with the private sector, or a workplace culture that attracts and retains qualified IT and cybersecurity professionals.

Meanwhile, cybercriminals are evolving their attack methods at breakneck speed. Organizations must adopt cutting-edge cybersecurity software to monitor their systems and detect intrusions. Unfortunately, the cost of these advanced tools is out of reach for many government entities due to their limited budgets.

Moreover, political considerations and bureaucracy further hamstring these organizations. The slow speed of many governmental and funding approval processes makes preparing for and responding to fast-changing cybersecurity threats even more challenging.

Reason 4: IoT Adoption Complicates the Picture

From smart building technology and digital signage to trash collection and snow removal, Internet of Things (IoT) tools, mobile devices, and smart technologies play an increasingly vital role in the day-to-day operations of local governments.

While these technologies help promote cost-efficiency and sustainability, they also increase the attack surface and give hackers more opportunities to breach a local government’s systems and networks — if it fails to implement the appropriate security measures.

Unfortunately, many agencies jump into buying new technologies without implementing proper security protocols. Not all agencies require IoT devices to perform their functions, so you should balance the costs and benefits, along with the security implications, to make the right decisions.

How Government Agencies Can Protect Themselves Against Cyberattacks

An ounce of prevention is worth a pound of cure. The most cost-effective way to avoid the high costs of ransomware attacks and data breaches is to follow the latest cybersecurity best practices. Here’s what state and local governments should implement to stay safe:

  • Complete visibility into your entire IT infrastructure provides a comprehensive view of all possible hybrid network access points, so you understand what’s connected to your network and which data and files are most at risk. This way, you can prioritize your data security resources.
  • Intrusion detection and prevention systems (IDS and IPS) protect your wired and wireless networks by identifying and mitigating threats (e.g., malware, spyware, viruses, worms), suspicious activities, and policy violations.
  • A mobile device management (MDM) solution allows administrators to monitor and configure the security settings of all devices connected to your network. Admins can also manage the network from a centralized location to support remote working and the use of mobile and IoT devices.
  • Access control protocols support a zero-trust policy to ensure that only compliant devices and approved personnel can access network assets through consistent authentication and authorization, such as multi-factor authentication (MFA) and digital certificates.
  • Strong spam filters and email security solutions protect end users from phishing messages and authenticate all inbound emails to fence off social engineering scams.
  • Cybersecurity awareness training for all employees and contractors helps build a security-first culture and makes cybersecurity a shared responsibility, which is particularly critical for fending off social engineering and phishing attacks.
  • A backup and disaster recovery plan protects agencies against data loss and ransomware attacks by ensuring operations don’t grind to a halt even if you suffer an attack.

Final Thoughts: Managing the Many Moving Parts of Cybersecurity

Cybersecurity is an ongoing endeavor, and it starts with building a solid foundation and knowing what and who is in your systems.

You must map your networks, take inventory of every device, and know where all your data is (including the cloud) to gain a bird’s-eye view of what your security strategy must address. Next, assess your security posture, evaluate your network against your policies, and prioritize resources to address the highest-risk vulnerabilities. Also, you must continuously monitor network activities and potential attack paths to achieve constant visibility, prioritize your efforts, and meet compliance standards.

State and local governments worldwide trust RedSeal to help them build digital resilience. Request a demo to see how we can help you gain visibility of all network environments to jumpstart your cybersecurity journey.

Tales from the Trenches: Vol 9 — The Law of Unintended Consequences, OR Some Doors Swing Both Ways

Since 2004, RedSeal has helped our customers See and Secure their entire complex network. And while those customers may have understood the value of understanding their environment, how it’s connected, and what’s at risk, there is often an “Aha” moment when the true significance becomes clear. The stories of these moments are lore within the walls of RedSeal. But these tales so clearly illustrate the value of RedSeal beyond just theory that we think they’re worth sharing. In the words of our team in the field, the ones working directly with our customers, this blog series will share the moments where it all gets real.

In this edition of the series Bill Burge, RedSeal Professional Services, explains how RedSeal can show you ALL the access from a network change, not just the one access you are expecting.

The Law of Unintended Consequences, OR Some Doors Swing Both Ways

“The law of unintended consequences” states that the more complex the system, the greater the chance that there is no such thing as a small change.

While working with a customer in the early days of my RedSeal Professional Services tenure, I looked for an opportunity to prove the capability of Zones & Policies. In an unfamiliar environment, the easy starting point is creating a policy that examines the access from “Internet to all internal subnets.”

It is easy to set up and easy to discuss the results, UNLESS the results say that most of the Internet can get to most of the internal network.

I thought “I MUST have done something wrong!” I got the impression that the customer felt the same thing, even though neither of us came right out and said it. So, I tore into it.

Using some ad hoc access queries and Detailed Path queries, we figured out the problem and why.

After digging in, convinced something was amiss, we found that RedSeal was RIGHT. It turned out there had been a pair of firewall rules for DNS requests:
SRC: inside, SRC PORT: any, DST: outside, DST PORT: 53, PROTOCOL: UDP
(and for the responses)
SRC: outside, SRC PORT: 53, DST: inside, DST PORT: any, PROTOCOL: UDP

At some point, because DNS resolutions got large enough that the responses did not fit in a single UDP packet, DNS needed to include TCP. So, someone simply made a small change and added TCP to each of these rules.

The unintended consequence was that you could reach just about any internal system from the Internet IF you initiated your request from port 53.
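A tiny simulation (hypothetical zones and rule format, not the customer’s actual firewall syntax) makes the hole visible: once TCP is added to the stateless “response” rule, any connection initiated from source port 53 matches it.

```python
# Hypothetical recreation of the rule pair after TCP was added.
# Rule fields: (src_zone, src_port, dst_zone, dst_port, protocols)
ANY = None  # wildcard

rules = [
    ("inside",  ANY, "outside", 53,  {"udp", "tcp"}),  # DNS queries out
    ("outside", 53,  "inside",  ANY, {"udp", "tcp"}),  # "responses" back
]

def permitted(src_zone, src_port, dst_zone, dst_port, proto):
    # Stateless match: no notion of who initiated the session.
    for r_src, r_sport, r_dst, r_dport, r_protos in rules:
        if (r_src == src_zone and r_dst == dst_zone
                and proto in r_protos
                and r_sport in (ANY, src_port)
                and r_dport in (ANY, dst_port)):
            return True
    return False

# An attacker binds local port 53 and opens a TCP session to SSH inside:
print(permitted("outside", 53, "inside", 22, "tcp"))  # True: wide open
```

A stateful firewall that only admits packets belonging to sessions initiated from the inside would not need the “response” rule at all, and would not have this property.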

After this was verified by the firewall and networking teams, I might as well have gone home. Everybody disappeared into meetings to discuss how to fix it, whether it could be done immediately or later that night, etc.

A little later, I ALMOST felt guilty pointing out that they had done pretty much the same thing with NTP, on port 123. (Almost…)

Interested in how RedSeal can help your team? Click here to set up a demo or an introductory call.

Tales from the Trenches: Vol 8 — Is that what you are going to say to the Auditor?

Since 2004, RedSeal has helped our customers See and Secure their entire complex network. And while those customers may have understood the value of understanding their environment, how it’s connected, and what’s at risk, there is often an “Aha” moment when the true significance becomes clear. The stories of these moments are lore within the walls of RedSeal. But these tales so clearly illustrate the value of RedSeal beyond just theory that we think they’re worth sharing. In the words of our team in the field, the ones working directly with our customers, this blog series will share the moments where it all gets real.

In this edition of the series Brad Schwab, Senior Security Solutions Consultant addresses a tricky network scanning question and how to verify with RedSeal.

Is that what you are going to say to the Auditor?

One of the biggest elephant-in-the-room questions for Security Operations groups that deal with vulnerability scanners is very simple to state but very, very tricky to answer: “Are you sure you are scanning the entire network?” It sounds like it should be a simple yes-or-no answer. However, for any network of scale, the answer can be almost impossible to verify.

I was in a high-level meeting at a large health organization with the CTO, the head of Network Operations (NetOps), and the head of Security Operations (SecOps), along with other people with different stakes in the performance and security of the network. Since the network was the main instrument supporting the “Money Engine” of the operation, all attendees were laser-focused on answers to any questions.

At a certain point in the meeting, Wendy, the head of SecOps, was talking about the scanning program. More specifically, she was speaking about procedures created to scan the entire network. The entire network!? At this point, I had to ask the question: “How do you know you are scanning the entire network?” She pointed to Bill, the head of NetOps, and said, “Bill said I could…” That is when I looked at Bill and said, “Is that what you are going to put on the audit: ‘Bill said I could’?” Now, Bill and I had a good working relationship, and he knew that I was having a bit of fun at his expense. However, others in the room weren’t going to gloss over the subject and began to pepper both Bill and me with questions. I proceeded to lay out where the difficulties were, with the following questions:

  • Does the scanner have a complete list of all IP space on the network that needs to be scanned?
  • Are there any overlapping subnets? If so, the overlapped portion of a subnet is not visible to the scanner, creating a possible hiding place for a bad actor.
  • Is there any duplicate IP space in the network? Again, this creates blind spots for any scanner.
  • And finally, the hard part: does the scanner have logical access to the entire network? Even if the scanner is trying to scan a subnet, if the network architecture (Access Control Lists and routing) is blocking or simply not granting the access, the scan won’t be complete. On top of that, the scanner gives you no indication that the scan didn’t work.

Beyond the logical access issue, no one had thought of the other problems. I explained how RedSeal automatically looks for subnets that have no scan data (and thus may be missing from the IP list given to the scanner), overlapping subnets, and duplicate IP space; a rough sketch of those first checks appears below. I also explained how a RedSeal Access Query combined with our “show what is missing” feature can produce a list of everything the scanner can’t reach because of network architecture.
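As a toy illustration (hypothetical subnet list; RedSeal performs this analysis against the live network model), Python’s standard ipaddress module is enough to show how overlaps and duplicates are detected:

```python
import ipaddress
from itertools import combinations

# Hypothetical target list handed to the scanner.
scanner_targets = [
    "10.0.0.0/16",
    "10.0.4.0/22",     # overlaps 10.0.0.0/16: a scanner blind spot
    "192.168.1.0/24",
    "192.168.1.0/24",  # duplicate IP space
]

nets = [ipaddress.ip_network(t) for t in scanner_targets]

for a, b in combinations(range(len(nets)), 2):
    if nets[a] == nets[b]:
        print(f"duplicate subnet: {nets[a]}")
    elif nets[a].overlaps(nets[b]):
        print(f"overlapping subnets: {nets[a]} and {nets[b]}")
```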

I ended my explanation with “with these features, you can have comprehensive documentation of complete scanner coverage for your upcoming audit(s)…”

After just a few days of work, we had provided both NetOps and SecOps with a list of the additions and changes required to make their vulnerability program complete.

Interested in how RedSeal can help your team? Click here to set up a demo or an introductory call.

Why Visualizing the Entire Healthcare Attack Surface Is Critical

In recent years, the healthcare sector has been steadily adopting web and cloud-based technologies and shifting towards an internet-enabled system to improve quality of care.

However, along with the internet’s many benefits — sharing information, simplifying operational processes, tracking workflows, enhancing connectivity, and storing and organizing data — comes an increased risk of cyberattacks, data breaches, and other types of fraud. This makes hospitals and healthcare organizations increasingly vulnerable to advanced threats and targeted attacks.

According to recent reports, data breaches in the healthcare sector have been rising at an alarming rate for the last five years. In 2020, during the COVID-19 pandemic, email-based attacks increased by 42%, so it’s no wonder that more and more healthcare organizations are adopting a robust, multi-faceted strategy to improve their security posture. Hospitals’ expanding digital footprint also complicates their network infrastructures, making complete visibility into the entire attack surface essential to managing cyber risks effectively.

Expanding Healthcare Attack Surface Risks

The widespread use of wireless technology is undoubtedly beneficial to the healthcare system. Wireless technology lets healthcare IT infrastructures connect data center servers, medical equipment, tools and applications, and other devices like smartphones, tablets, and USB drives, so organizations stay connected to deliver effective operations and consistently informed care.

These connected devices help in patient monitoring, medication management, workflow administration, and other healthcare needs. However, the increased number of devices connecting to the network also broadens the attack surface — meaning more entry points for unauthorized access and therefore the need for enhanced infrastructure visibility to mitigate risks.

Why Complete Visualization Is Essential

From booking an appointment to setting foot in the doctor’s clinic or hospital, patients go through several processes and interact with different interconnected devices and software systems. While a connected environment ensures a seamless patient experience, the different touch points provide more opportunities for attackers to gain access to sensitive data.

Currently, there are 430 million linked medical devices deployed globally, connected through Wi-Fi, Bluetooth, and radio transmission. The sheer amount of sensitive and personal information healthcare systems capture and process is why their systems are desirable targets. Therefore, it is critical to safeguard the data stored in these systems.

Protected health information (PHI) and personally identifiable information (PII), such as Social Security, credit card, and bank account numbers, are data cybercriminals find particularly alluring. Selling this sensitive information on the dark web is a very profitable business.

Even a small part of the healthcare technology spectrum can harbor serious cybersecurity gaps, allowing criminals to exploit vulnerabilities and gain access to sensitive data. The resulting cybercrimes directly impact organizational productivity and brand reputation.

Here are a few risks that are most detrimental to healthcare businesses’ bottom lines and reputations.

  • Ransomware: Healthcare services are notably vulnerable to ransomware attacks because their day-to-day operations depend on technology to a significant extent. Health records are highly rewarding for criminals because each patient or hospital record can command a hefty price on the underground market.
  • Phishing: Phishing attacks are quite common in healthcare. Attackers target the most vulnerable link in the security chain, i.e., people, to make their jobs easier. Through social engineering, users click on malicious attachments or links, infecting their systems. The repercussions can be disastrous and the losses enormous. For instance, a Georgia diagnostics laboratory recently discovered that an employee’s compromised email account led to a phishing attack impacting 244,850 individuals. The attackers were able to acquire patient information and then attempted to divert invoice payments.
  • Cloud Storage Threats: Many healthcare providers are now switching to cloud-based storage solutions for better connectivity and convenience. Unfortunately, not every cloud-based solution is HIPAA-compliant, making noncompliant ones clear targets for intruders. Healthcare companies must implement access restrictions more carefully and encrypt data properly before transmitting it. Additionally, complete visualization of the attack surface is necessary to prevent data breaches, data leaks, improper access management, and cloud storage misconfiguration.

How to Protect Expanding Healthcare Attack Surfaces

Attack surface analysis can help identify high-risk areas, offering an in-depth view of the entire system. This way, you can better recognize the parts that are more vulnerable to cyber threats and then review, test, and modify the security strategies in place as necessary.

Healthcare IT administrators must secure the network infrastructure using stringent policies and procedures like enforcing strong passwords, properly configuring firewalls, setting up user access permissions, and ensuring authorized access to assets and resources. They must also monitor and properly configure all the devices connected to the network — be it standard healthcare devices or personal devices of patients and workers. In addition, a strong encryption policy can help increase data security, making it difficult for cyber attackers to penetrate the system.

Conducting regular attack surface scans can also mitigate cyberattack risks. This helps ensure security control measures are adequate and that decision-makers have the data they need to make informed decisions regarding the organization’s cybersecurity strategy. Also, all types of software and related updates for medical devices must be tested prior to installation.

Secure Your Entire Healthcare Network with RedSeal

Healthcare organizations often hesitate to invest in cloud security solutions. But the average cost of a healthcare breach is $9.23 million, which is far more than the cost of professional cloud security solutions. Additionally, healthcare institutions deal with extremely sensitive information, and fines for data security noncompliance can be extremely costly. Healthcare security leaders must be able to effectively visualize their entire attack surface to bolster their cybersecurity defenses.

RedSeal offers award-winning cloud security solutions that provide comprehensive, dynamic visualization of all connected devices. We partner with leading network infrastructure suppliers to provide comprehensive network solutions and professional services. This way, you can see and secure your entire network environment.

Contact us to learn how we can help strengthen your network security.

Tales from the Trenches: Vol 7 — You Can’t Always Get What You Want

Since 2004, RedSeal has helped our customers See and Secure their entire complex network. And while those customers may have understood the value of understanding their environment, how it’s connected, and what’s at risk, there is often an “Aha” moment when the true significance becomes clear. The stories of these moments are lore within the walls of RedSeal. But these tales so clearly illustrate the value of RedSeal beyond just theory that we think they’re worth sharing. In the words of our team in the field, the ones working directly with our customers, this blog series will share the moments where it all gets real.

In this edition of the series, Bill Burge, RedSeal Professional Services places customer questions in full network context and reveals an even better solution with RedSeal.

You Can’t Always Get What You Want

While I was working with a large customer with multiple interconnected environments, their greatest fear was that an infection in one environment might cross over into the others.

They had purchased a managed service, which meant I was the primary RedSeal admin. They approached me with a request, and it was obvious they were having a possible “incident”. They clearly didn’t want to provide TOO many details, but I’ve spent enough time on both sides of these topics that I was pretty sure what I was up against.

Their request was simple to say, but that doesn’t mean it was simple to perform: “Can you give us a report of all the firewall rules that control this particular subnet?” In RedSeal, I could run some queries that would do a pretty poor job of that once you factor in the multiple ways to cover a block of addresses in a firewall policy: groups, large masks, even the use of “any”. All of these would have to be detected, expanded, and broken apart; the sketch below shows why. It’s largely a fool’s errand.
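As a rough illustration (hypothetical rules, not the customer’s policy), consider how many differently shaped rule entries can all “control” one subnet:

```python
import ipaddress

target = ipaddress.ip_network("10.20.30.0/24")

# Hypothetical destination entries from four different rules: an exact
# match, a large mask, an object group, and "any" all cover the subnet.
rule_destinations = {
    "rule-17 dst 10.20.30.0/24": ["10.20.30.0/24"],
    "rule-42 dst 10.0.0.0/8": ["10.0.0.0/8"],          # large mask
    "rule-63 dst grp-datacenter": ["10.20.0.0/16", "172.16.0.0/12"],
    "rule-99 dst any": ["0.0.0.0/0"],                  # "any"
}

for rule, blocks in rule_destinations.items():
    if any(target.subnet_of(ipaddress.ip_network(b)) for b in blocks):
        # Every rule here "controls" the subnet, and this still ignores
        # router ACLs and routing along the actual paths.
        print(rule)
```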

So I politely declined. I gave a brief explanation of the dynamics and the fact that firewall policies would also have to be weighed against, and in conjunction with, router ACLs, and even routing. I always say “the firewall rules are only the verb in the sentence of access”. I offered an alternative: “Tell me the IP address that has been compromised, and I’ll tell you all the subnets it might have accessed, and all the vulnerabilities it might have exploited in the process.”

The customer’s response was: “You can do THAT? THAT’S even better! Let’s do it!”

I explained that calculating access is the foundation of RedSeal. As Mick Jagger says “you can’t always get what you want, but you just might find — you get what you need”.

Interested in how RedSeal can help your team? Click here to set up a demo or an introductory call.

Purdue 2.0: Exploring a New Model for IT/OT Management

Developed in 1992 by Theodore J. Williams and the Purdue University Consortium, the Purdue diagram — itself a part of the Purdue Enterprise Reference Architecture (PERA) — was one of the first models used to map data flows in computer-integrated manufacturing (CIM).

By defining six layers that contain both information technology (IT) and operational technology (OT), along with a demilitarized zone (DMZ) separating them, the Purdue diagram made it easier for companies to understand the relationship between IT and OT technologies and establish effective access controls to limit total risk.

As OT technologies have evolved to include network-enabled functions and outward-facing connections, however, it’s time for companies to prioritize a Purdue update that puts security front and center.

The Problem with Purdue 1.0

A recent Forbes piece put it simply: “The Purdue model is dead. Long live, Purdue.”

This paradox is plausible, thanks to the ongoing applicability of Purdue models. Even if they don’t quite match the reality of IT and OT deployments, they provide a reliable point of reference for both IT and OT teams.

The problem with Purdue 1.0 stems from its approach to OT as devices that have MAC addresses but no IP addresses. Consider programmable logic controllers (PLCs), which typically appear with only MAC addresses in Layer 2 of a Purdue diagram. The need for comprehensive visibility across OT and IT networks, however, has led to increased IP address assignment across PLCs, in turn making them network endpoints rather than discrete devices.

There’s also an ongoing disconnect between IT and OT approaches. Where IT teams have spent years looking for ways to bolster both internal and external network security, traditional OT engineers often see security as an IT-only problem. The result is IP address assignment to devices but no follow-up on who can access the devices and for what purpose. In practice, this limits OT infrastructure visibility while creating increased risk and security concerns, especially as companies are transitioning more OT management and monitoring to the cloud.

Adopting a New Approach to Purdue

As noted above, the Purdue diagram isn’t dead, but it does need an update. Standards such as ISA/IEC 62443 offer a solid starting point for computer-integrated manufacturing frameworks, with a risk-based approach that assumes any device can pose a critical security risk and that all classes of devices across all levels must be both monitored and protected. The standard also takes the position that communication between devices and across layers is necessary for companies to ensure CIM performance.

This requires a new approach to the Purdue model that removes the distinction between IT and OT devices. Instead of viewing these devices as separate entities on a larger network, companies need to recognize that the addition of IP addresses in Layer 2 and even Layer 1 devices creates a situation where all devices are equally capable of creating network compromise or operational disruption.

In practice, the first step of Purdue 2.0 is complete network mapping and inventory. This means discovering all devices across all layers, whether they have a MAC address, IP address, or both. This is especially critical for OT devices because, unlike their IT counterparts, they rarely change. In some companies, ICS and SCADA systems have been in place for 5, 10, even 20 years or more, while IT devices are regularly replaced. As a result, once OT inventory is completed, minimal change is necessary. Without this inventory, however, businesses are flying blind.

Inventory assessment also offers the benefit of in-depth metric monitoring and management. By understanding how OT devices are performing and how this integrates into IT efforts, companies can streamline current processes to improve overall efficiency.
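A minimal sketch of what such an inventory record might look like (hypothetical names and addresses; real discovery would be automated) shows the key shift: IT and OT devices modeled uniformly, with any IP-addressable device treated as a network endpoint.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical inventory record: IT and OT devices modeled uniformly,
# whether they expose a MAC address, an IP address, or both.
@dataclass
class Device:
    name: str
    purdue_level: int          # 0-5 per the Purdue reference model
    mac: Optional[str] = None
    ip: Optional[str] = None

inventory = [
    Device("plc-line-3", 1, mac="00:1d:9c:aa:01:02", ip="10.9.1.12"),
    Device("hmi-panel-7", 2, mac="00:1d:9c:aa:01:07"),
    Device("historian", 3, ip="10.9.3.5"),
    Device("erp-app", 4, ip="10.10.4.20"),
]

# Any IP-addressable device is a network endpoint, regardless of level:
# the premise behind treating OT with the same scrutiny as IT.
for d in (d for d in inventory if d.ip is not None):
    print(f"Level {d.purdue_level}: {d.name} -> {d.ip}")
```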

[Figure: Purdue diagram]

Controlling for Potential Compromise

The core concept of evolving IT/OT systems is interconnectivity. Gone are the days of Level 1 and Level 2 devices capable only of internal interactions while those on Levels 3, 4, and 5 connect with networks at large. Bolstered by the adoption of the Industrial Internet of Things (IIoT), continuous connectivity is par for the course.

The challenge? More devices create an expanding attack surface. If attackers can compromise databases or applications, they may be able to move vertically down network levels to attack connected OT devices. Even more worrisome is the fact that since these OT devices have historically been one step removed from internet-facing networks, businesses may not have the tools, technology, or manpower necessary to detect potential vulnerabilities that could pave the way for attacks.

It’s worth noting that these OT vulnerabilities aren’t new — they’ve always existed but were often ignored under the pretense of isolation. Given the lack of outside-facing network access, they often posed minimal risk, but as IIoT becomes standard practice, these vulnerabilities pose very real threats.

And these threats can have far-reaching consequences. Consider two cases: an IT attack and an OT compromise. If IT systems go down, staff can be sent home or assigned other tasks while problems are identified and remediated, but production remains on pace. If OT systems fail, meanwhile, manufacturing operations come to a standstill. Lacking visibility into OT inventories makes it more difficult for teams to discover where the compromise occurred and determine the best way to remediate the issue.

As a result, controlling for compromise is the second step of Purdue 2.0. RedSeal makes it possible to see what you’re missing. By pulling in data from hundreds of connected tools and sensors and then importing this data into scan engines — such as Tenable — RedSeal can both identify vulnerabilities and provide context for these weak points. Equipped with data about devices themselves, including manufacturing and vendor information, along with metrics that reflect current performance and behavior, companies are better able to discover vulnerabilities and close critical gaps before attackers can exploit OT operations.

Put simply? Companies can’t defend what they can’t see. This means that while the Purdue diagram remains a critical component of CIM success, after 30 years in business, it needs an update. RedSeal can help companies bring OT functions in line with IT frameworks by discovering all devices on the network, pinpointing potential vulnerabilities, and identifying ways to improve OT security.