
Purdue 2.0: Exploring a New Model for IT/OT Management

Developed in 1992 by Theodore J. Williams and the Purdue University Consortium, the Purdue diagram — itself a part of the Purdue Enterprise Reference Architecture (PERA) — was one of the first models used to map data flows in computer-integrated manufacturing (CIM).

By defining six layers that contain both information technology (IT) and operational technology (OT), along with a demilitarized zone (DMZ) separating them, the Purdue diagram made it easier for companies to understand the relationship between IT and OT systems and to establish effective access controls that limit total risk.

As OT technologies have evolved to include network-enabled functions and outward-facing connections, however, it’s time for companies to prioritize a Purdue update that puts security front and center.

The Problem with Purdue 1.0

A recent Forbes piece put it simply: “The Purdue model is dead. Long live, Purdue.”

The paradox holds because the Purdue model remains broadly applicable: even if it doesn’t quite match the reality of modern IT and OT deployments, it provides a reliable point of reference for both IT and OT teams.

The problem with Purdue 1.0 stems from its treatment of OT as devices that have MAC addresses but no IP addresses. Consider programmable logic controllers (PLCs). PLCs have traditionally appeared in Layer 2 of a Purdue diagram, identified only by their MAC addresses. The need for comprehensive visibility across OT and IT networks, however, has led to increasing IP address assignment to PLCs, in turn making them network endpoints rather than discrete devices.

There’s also an ongoing disconnect between IT and OT approaches. Where IT teams have spent years looking for ways to bolster both internal and external network security, traditional OT engineers often see security as an IT-only problem. The result is IP address assignment to devices but no follow-up on who can access the devices and for what purpose. In practice, this limits OT infrastructure visibility while creating increased risk and security concerns, especially as companies are transitioning more OT management and monitoring to the cloud.

Adopting a New Approach to Purdue

As noted above, the Purdue diagram isn’t dead, but it does need an update. Standards such as ISA/IEC 62443 offer a solid starting point for computer-integrated manufacturing frameworks, with a risk-based approach that assumes any device can pose a critical security risk and that all classes of devices across all levels must be both monitored and protected. The standard also takes the position that communication between devices and across layers is necessary for companies to ensure CIM performance.

This requires a new approach to the Purdue model that removes the distinction between IT and OT devices. Instead of viewing these devices as separate entities on a larger network, companies need to recognize that the assignment of IP addresses to Layer 2 and even Layer 1 devices means any device is capable of causing network compromise or operational disruption.

In practice, the first step of Purdue 2.0 is complete network mapping and inventory. This means discovering all devices across all layers, whether they have a MAC address, IP address, or both. This is especially critical for OT devices because, unlike their IT counterparts, they rarely change. In some companies, ICS and SCADA systems have been in place for 5, 10, even 20 years or more, while IT devices are regularly replaced. As a result, once OT inventory is completed, minimal change is necessary. Without this inventory, however, businesses are flying blind.
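
To make this first step concrete, here is a minimal sketch, using hypothetical device names and discovery records (not RedSeal’s actual data model), of how an inventory might merge MAC-only OT gear with IP-enabled endpoints so that a PLC that later gains an IP address updates its existing entry rather than appearing as a new device:

```python
# Minimal sketch: merge discovery records into one IT/OT inventory.
# Device names, Purdue levels, and records are hypothetical examples.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Device:
    name: str
    purdue_level: int            # 0-5, per the Purdue model
    mac: Optional[str] = None    # OT gear may expose only a MAC address
    ip: Optional[str] = None     # IP-enabled devices become network endpoints
    tags: set = field(default_factory=set)

def build_inventory(records):
    """Key devices by MAC (falling back to IP) so repeat scans update entries instead of duplicating them."""
    inventory = {}
    for rec in records:
        key = rec.mac or rec.ip
        if key in inventory:
            existing = inventory[key]
            existing.ip = existing.ip or rec.ip   # record a newly assigned IP
            existing.tags |= rec.tags
        else:
            inventory[key] = rec
    return inventory

records = [
    Device("plc-press-01", purdue_level=1, mac="00:1d:9c:aa:bb:01", tags={"OT"}),
    Device("plc-press-01", purdue_level=1, mac="00:1d:9c:aa:bb:01",
           ip="10.20.1.15", tags={"OT", "ip-enabled"}),
    Device("hmi-line-02", purdue_level=2, ip="10.20.2.30", tags={"OT"}),
    Device("erp-app-01", purdue_level=4, ip="10.40.0.12", tags={"IT"}),
]

for device in build_inventory(records).values():
    print(device)
```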

Inventory assessment also offers the benefit of in-depth metric monitoring and management. By understanding how OT devices are performing and how this integrates into IT efforts, companies can streamline current processes to improve overall efficiency.

Purdue Diagram

Controlling for Potential Compromise

The core concept of evolving IT/OT systems is interconnectivity. Gone are the days when Level 1 and Level 2 devices were capable only of internal interactions while those on Levels 3, 4, and 5 connected with networks at large. Bolstered by the adoption of the Industrial Internet of Things (IIoT), continuous connectivity is now par for the course.

The challenge? More devices create an expanding attack surface. If attackers can compromise databases or applications, they may be able to move vertically down network levels to attack connected OT devices. Even more worrisome is the fact that since these OT devices have historically been one step removed from internet-facing networks, businesses may not have the tools, technology, or manpower necessary to detect potential vulnerabilities that could pave the way for attacks.

It’s worth noting that these OT vulnerabilities aren’t new — they’ve always existed but were often ignored under the pretense of isolation. Given the lack of outside-facing network access, they often posed minimal risk, but as IIoT becomes standard practice, these vulnerabilities pose very real threats.

And these threats can have far-reaching consequences. Consider two cases: an IT attack and an OT compromise. If IT systems are down, staff can be sent home or assigned other tasks while problems are identified and remediated, and production remains on pace. If OT systems fail, manufacturing operations come to a standstill. A lack of visibility into OT inventories makes it more difficult for teams to discover where the compromise occurred and to determine the best way to remediate the issue.

As a result, controlling for compromise is the second step of Purdue 2.0. RedSeal makes it possible to see what you’re missing. By pulling in data from hundreds of connected tools and sensors, and combining it with findings from vulnerability scan engines such as Tenable, RedSeal can both identify vulnerabilities and provide context for these weak points. Equipped with data about the devices themselves, including manufacturer and vendor information, along with metrics that reflect current performance and behavior, companies are better able to discover vulnerabilities and close critical gaps before attackers can exploit OT operations.
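
As a rough illustration of what “context for these weak points” can look like, the sketch below joins scanner findings with device metadata and weights OT devices more heavily when ranking. The data shapes, field names, and weighting are assumptions made for illustration; they are not the RedSeal or Tenable APIs:

```python
# Illustrative only: join scanner findings with device context and rank them.
# The data shapes and OT weighting are assumptions, not the RedSeal or Tenable APIs.
findings = [
    {"ip": "10.20.1.15", "cve": "CVE-2022-0001", "cvss": 9.8},
    {"ip": "10.40.0.12", "cve": "CVE-2021-4444", "cvss": 5.3},
]

device_context = {
    "10.20.1.15": {"name": "plc-press-01", "type": "OT", "vendor": "ExampleVendor", "purdue_level": 1},
    "10.40.0.12": {"name": "erp-app-01", "type": "IT", "vendor": "ExampleVendor", "purdue_level": 4},
}

def prioritize(findings, context):
    """Rank findings by severity, weighting OT devices more heavily because an exploit can halt production."""
    enriched = []
    for finding in findings:
        ctx = context.get(finding["ip"], {})
        weight = 1.5 if ctx.get("type") == "OT" else 1.0
        enriched.append({**finding, **ctx, "priority": finding["cvss"] * weight})
    return sorted(enriched, key=lambda item: item["priority"], reverse=True)

for item in prioritize(findings, device_context):
    print(item["name"], item["cve"], round(item["priority"], 1))
```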

Put simply? Companies can’t defend what they can’t see. The Purdue diagram remains a critical component of CIM success, but after 30 years it needs an update. RedSeal can help companies bring OT functions in line with IT frameworks by discovering all devices on the network, pinpointing potential vulnerabilities, and identifying ways to improve OT security.

Zero Trust: Back to Basics

The 2021 Executive Order on Improving the Nation’s Cybersecurity requires agencies to move toward zero trust in a meaningful way as part of modernizing infrastructure. Yet federal agencies typically find zero trust challenging to implement. The framework is fine in theory, but the difficulty often lies in legacy systems and on-premises networks whose tendrils reach into multiple locations, many of them unknown.

Identity management and authentication tools are an important part of network security, but before you can truly implement zero trust, you need an understanding of your entire infrastructure. Zero trust isn’t just about identity. It’s also about connectivity.

Take a quick detour here. Let’s say you’re driving a tractor-trailer hauling an oversized load. You ask Google Maps for the fastest route, and it plots one out for you. Along the way, however, you find that the route includes a one-lane dirt road your rig can’t fit down. So you go back to your mapping software and look for alternate routes. Depending on how much time you have, the number of alternative pathways to your final destination is nearly endless.

Computer security needs to work the same way. Even if you’ve blocked one path a threat actor could take, how else could they reach their destination? While you may think traffic flows only one way on your network, most organizations find there are multiple pathways they never knew about (or even thought of).

To put effective security controls in place, you need to go back to basics with zero trust. That starts with understanding every device, application, and connection in your infrastructure.

Zero Trust Embodies Fundamental Best-Practice Security Concepts

Zero trust returns to the basics of good cybersecurity by assuming there is no traditional network edge. Whether it’s local, in the cloud, or any combination of hybrid resources across your infrastructure, you need a security framework that requires everyone touching your resources to be authenticated, authorized, and continuously validated.

By providing a balance between security and usability, zero trust makes it more difficult for attackers to compromise your network and access data. While providing users with authorized access to get their work done, zero-trust frameworks prevent unauthorized access and lateral movement.

By properly segmenting your network and requiring authentication at each stage, you can limit the damage even if someone does get inside your network. However, this requires a firm understanding of every device and application that is part of your infrastructure, as well as of your users.

Putting Zero Trust to Work

NIST Special Publication 800-207 provides the conceptual framework for zero trust that government agencies need to adopt, and the NIST Risk Management Framework (SP 800-37) lays out the process for putting it into practice.

The risk management framework has seven steps:

  1. Prepare: map and analyze the network
  2. Categorize: assess risk at each stage and prioritize
  3. Select: determine appropriate controls
  4. Implement: deploy zero trust solutions
  5. Assess: ensure solutions and policies are operating as intended
  6. Authorize: certify systems and workflow are ready for operation
  7. Monitor: provide continuous monitoring of security posture

NIST’s subsequent draft white paper on planning for a zero-trust architecture reinforces the crucial first step: mapping the attack surface and identifying the key parts that could be targeted by a threat actor.

Instituting zero trust security requires detailed analysis and information gathering on devices, applications, connectivity, and users. Only when you understand how data moves through your network, and all the different paths it can take, can you implement segmentation and zero trust.

Analysts should identify options to streamline processes, consolidate tools and applications, and sunset any vulnerable devices or access points. This includes defunct user accounts and any non-compliant resources.

Use Advanced Technology to Help You Perform Network Analysis

Trying to map your network manually is nearly impossible. No matter how many people you assign to help and how long you have, things will get missed. Every device, appliance, configuration, and connection has to be analyzed. Third parties and connections to outside sources need to be evaluated. And while you’re conducting this inventory, everything is in a constant state of change, which makes it even easier to miss key components.

Yet, this inventory is the foundation for implementing zero trust. If you miss something, you leave security gaps within your infrastructure.

The right network mapping software for government agencies can automate this process by gathering the information for you. Network mapping analysis can calculate every possible pathway through the network, taking into account NAT and load balancing. During this stage, most organizations uncover a surprising number of previously unknown pathways. Each connection point needs to be assessed to determine whether it is needed and whether it can be closed to reduce the attack surface.
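
The core idea behind that pathway calculation can be sketched in a few lines. The topology below is hypothetical, and a real analysis would also have to model NAT rules, load balancers, and access control lists, which this deliberately omits:

```python
# Minimal sketch on a hypothetical topology: enumerate every loop-free path
# between two points. Real analysis must also model NAT, load balancing, and ACLs.
def all_paths(graph, src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:          # skip nodes already visited to avoid loops
            yield from all_paths(graph, nxt, dst, path)

# Adjacency list for an assumed network
graph = {
    "internet": ["edge-fw"],
    "edge-fw": ["dmz-lb", "vpn-gw"],
    "dmz-lb": ["app-server"],
    "vpn-gw": ["mgmt-jump"],
    "app-server": ["db-server", "mgmt-jump"],
    "mgmt-jump": ["scada-hmi"],
    "db-server": [],
    "scada-hmi": [],
}

for p in all_paths(graph, "internet", "scada-hmi"):
    print(" -> ".join(p))
# Two routes reach the OT asset here; each is an access point to assess and, if unneeded, close.
```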

Automated network mapping will also provide an inventory of all the gear on your network and IP space in addition to your cloud and software-defined network (SDN) assets. Zero trust requires you to identify who and what can access your network, and who should have that access.

Once you have conducted this exhaustive inventory, you can then begin to implement the zero-trust policies with confidence.

Since your network is in a constant state of evolution with new users, devices, applications, and connectivity being added, changed, or revised, you also need continuous monitoring of your network infrastructure to ensure changes remain compliant with your security policies.

Back to the Basics

The conversation about zero trust often focuses narrowly on identity. Equally important are device inventory and connectivity. The underlying goal of zero trust is allowing only specific authorized individuals to access specific things on specific devices. Before you can put in place adequate security controls, you need to know about all of the devices and all the connections.

RedSeal provides network mapping, inventory, and mission-critical security and compliance services for government agencies and businesses, and is Common Criteria certified. To implement a zero-trust framework, you first need to understand the challenges and the strategies that make implementation succeed.

Download our Zero Trust Guide today to get started.

Zero Trust: Shift Back to Need to Know

Cyberattacks on government agencies are unrelenting. Attacks on government, military, and contractors rose by more than 47% in 2021 and continue to climb. Today’s cybercriminals, threat actors, and state-sponsored hackers have become more sophisticated and continue to target government data and resources.

The recent Executive Order on Improving the Nation’s Cybersecurity directs federal agencies to take decisive action and work with the private sector to improve cybersecurity. The EO puts it bluntly:

“The United States faces persistent and increasingly sophisticated malicious cyber campaigns that threaten the public sector, the private sector, and ultimately the American people’s security and privacy. The Federal Government must improve its efforts to identify, deter, protect against, detect, and respond to these actions and actors.”

The Office of Management and Budget (OMB) also issued a memorandum for agencies to improve investigative and remediation capabilities, including:

  • Centralized access and visibility
  • Better-defined logging, log retention, and log management
  • Increased information sharing
  • Accelerated incident response efforts
  • More effective defense of information

In light of continued cyberattacks, the EO requires bold and significant investments to protect and secure systems and data. This represents a cultural shift from a somewhat relaxed security environment that developed over time as legacy systems continued to grow and agencies migrated them to cloud resources.

Security concerns only grew with the rapid shift to remote work. Agencies had to scramble to redefine infrastructure to accommodate remote workers, which significantly increased the attack surface.

For governmental agencies, hardening security requires a return to “need to know” using zero trust security protocols.

Zero Trust Security: What Is It?

Zero trust is a security framework that requires authentication and authorization for all users on the network. Traditionally, networks have focused on security at the edge, managing access points. Once threat actors penetrated that perimeter, however, they could access additional network resources, escalate privileges, and expand the damage they caused.

Zero trust requires users to be re-authorized at every connection to prevent unauthorized access and lateral movement across the network. Access to resources is denied except for those with a need to know and a need to access.
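
One way to picture that per-connection decision is as a policy check that runs on every request. The sketch below is a simplified illustration with made-up roles and resources; it is not any specific product’s policy engine:

```python
# Simplified sketch of a per-request zero trust check; roles, resources, and
# attributes are made up for illustration, not a specific product's model.
POLICY = {
    # resource -> roles allowed on a need-to-know basis
    "hr-records": {"hr-analyst"},
    "scada-hmi": {"ot-engineer"},
}

def authorize(request):
    """Evaluate every request on its own; prior access confers no trust."""
    checks = [
        request["user_authenticated"],                               # fresh authentication
        request["device_compliant"],                                 # device posture verified
        request["role"] in POLICY.get(request["resource"], set()),   # need to know
    ]
    return all(checks)

request = {
    "user_authenticated": True,
    "device_compliant": True,
    "role": "hr-analyst",
    "resource": "scada-hmi",
}
print(authorize(request))  # False: authenticated, but no need to know for this resource
```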

Current Cloud Security Measures Can Fall Short

The rising adoption of cloud services has changed the makeup of most agency infrastructures. Today, lax cloud security measures can expose organizations to risk and harm, and incremental improvements are not keeping pace.

Factors that leave openings for threat actors include:

  • Gaps in information technology (IT) expertise and challenges in hiring
  • Problems with cloud migration
  • Unsecured application programming interfaces (APIs)
  • Vulnerabilities in third-party providers
  • The complexity of security in multi-cloud and hybrid cloud environments

Zero trust is an important weapon in the battle against cyber threats, yet adoption is far from universal. The recent Cost of a Data Breach report from the Ponemon Institute found that only 35% of organizations employ a zero-trust framework as part of their cybersecurity protocols. That leaves agencies and businesses open to attack.

Besides protecting networks and data, there is also a significant financial benefit to deploying zero trust. While breaches can still occur even when zero trust is in place, the average cost to mitigate a breach was $1.76 million less for organizations with a zero-trust framework in place than for those without one.

Zero Trust and the Return to Need to Know

Intelligence agencies have employed the practice of “need to know” for years: sensitive and confidential data is restricted to only those who have a specific need for access. In cybersecurity, zero trust includes the concept of least privilege, which allows users access only to the information and resources they need to do their jobs.

Contrast zero trust with the practice of edge security, which is in wide use today. Edge security is like putting a security perimeter around the outside of your home or building: once inside the perimeter, visitors are free to move from room to room. The principle of least privilege, by contrast, gives them access only to the rooms (and the things within each room) they have a need to know about.

With zero trust in place, visitors won’t even be able to see the room unless they are authorized for access.

Building a Zero Trust Architecture

Building a zero-trust architecture requires an understanding of your infrastructure, applications, and users. By mapping your network, you can see how devices and applications connect and identify the pathways where security controls are needed to prevent unauthorized access.

A zero-trust approach requires organizations to:

  • Verify and authenticate every interaction, including user identity, location, device integrity, workload, and data classification
  • Apply the principle of least privilege using just-in-time and just-enough-access (JIT/JEA) with adaptive risk policies
  • Remove implicit trust when devices or applications talk to each other, and institute robust device access control
  • Assume breach and employ micro-segmentation to prevent lateral movement on a need-to-know basis (see the sketch after this list)
  • Implement proactive threat prevention, detection, and mitigation
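
For the micro-segmentation point above, a default-deny allow-list between zones is one simple way to picture the idea. The zones, ports, and flows below are illustrative assumptions only:

```python
# Hypothetical micro-segmentation sketch: default deny between zones, with an
# explicit allow-list of flows. Zone names and ports are illustrative only.
ALLOWED_FLOWS = {
    ("user-vlan", "app-zone"): {443},     # users reach applications over HTTPS only
    ("app-zone", "db-zone"): {5432},      # application tier reaches the database
    ("scada-zone", "ot-dmz"): {502},      # OT data leaves via the DMZ over Modbus TCP
}

def is_allowed(src_zone, dst_zone, port):
    """Anything not explicitly allowed is denied, which limits lateral movement."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(is_allowed("user-vlan", "app-zone", 443))   # True
print(is_allowed("user-vlan", "db-zone", 5432))   # False: no direct path to the data tier
```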

Mitigating Insider Threats

Zero trust also helps mitigate threats from insiders by restricting access to non-authorized resources and logging activity within the network.

When we think about data breaches, we generally think about threat actors from outside our network, but there is also a significant threat from insiders. The 2021 Data Breach Investigations Report (DBIR) from Verizon suggests that as many as 22% of all data breaches involve insiders.

According to the Government Accountability Office (GAO), risks to IT systems are increasing, including insider threats from witting and unwitting employees.

Managing Complex Network Environments

As organizations have grown, network environments have become incredibly complex. You need a deep understanding of all of the appliances, applications, devices, public cloud, private cloud, multi-cloud, and on-premises resources and how they are connected.

RedSeal automatically maps your infrastructure and provides a comprehensive, dynamic visualization. With RedSeal, you can identify any exposed resources in the cloud, visualize access across your network, demonstrate compliance with network and configuration standards, and prioritize vulnerabilities for mitigation.

For more information about implementing zero trust for your organization, download the complimentary RedSeal Guide: Tips for Implementing Zero Trust. Learn about the challenges and get insights from the security professionals at RedSeal.