
Expert Insights: Building a World-Class OT Cybersecurity Program

In an age where manufacturing companies are increasingly reliant on digital technologies and interconnected systems, the importance of robust cybersecurity programs cannot be overstated. While attending Manusec in Chicago this week, RedSeal participated in a panel of cybersecurity experts discussing the key features, measures of success, and proactive steps that can lead to a more mature OT (Operational Technology) cybersecurity posture for manufacturing companies. This blog provides insights and recommendations from CISOs and practitioners from Revlon, AdvanSix, Primient, Fortinet, and our own Sean Finn, Senior Global Solution Architect for RedSeal.

Key features of a world-class OT cybersecurity program

The panelists brought decades of experience across manufacturing and related vendors, and the discussion centered on three main themes, all complemented by a set of organizational considerations:

  • Visibility
  • Automation
  • Metrics


The importance of having an accurate understanding of the current network environment.

The panel unanimously agreed: visibility, visibility, visibility is the most critical first step to securing the network. The quality of an organization’s “situational awareness” is a critical element toward both maximizing the availability of OT systems and minimizing the operational frictions related to incident response and change management.

Legacy Element Management Systems may not be designed to provide visibility into everything on the network. The panel identified the importance of a holistic view of the extended OT environment in both proactive and reactive contexts.

The increasingly common direct connectivity between Information Technology (IT) and Operational Technology (OT) environments increases the importance of understanding the full scope of available access – both inbound and outbound.


Automation and integrations are key components for improving both visibility and operational efficiency.  

  • Proactive assessment and automated detection: Implement proactive assessment measures to detect and prevent segmentation violations, enhancing the overall security posture.
  • Automated validation: Protecting legacy technologies and ensuring control over IT-OT access portals are essential. Automated validation of security segmentation helps in protecting critical systems and data.
  • Leveraging system integration and automation: Continue to invest in system integration and automation to streamline security processes and responses.
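As a minimal illustration of the automated segmentation validation the panel described, the sketch below checks a list of observed flows against a declared IT/OT segmentation policy. The zone names, address prefixes, and flow records are invented for illustration; a real deployment would derive zones and flows from network data rather than hard-coded prefixes.

```python
# Hypothetical sketch: validate observed network flows against a
# declared IT/OT segmentation policy. Zone names and flows are invented.

ALLOWED = {
    ("it_corp", "dmz"),         # IT may reach the DMZ
    ("dmz", "ot_supervisory"),  # only the DMZ may reach OT supervisory systems
}

def zone_of(ip: str) -> str:
    """Map an address to a zone by prefix (toy lookup)."""
    if ip.startswith("10.1."):
        return "it_corp"
    if ip.startswith("10.9."):
        return "dmz"
    return "ot_supervisory"

def violations(flows):
    """Return flows whose (source zone, dest zone) pair is not allowed."""
    bad = []
    for src, dst in flows:
        pair = (zone_of(src), zone_of(dst))
        if pair[0] != pair[1] and pair not in ALLOWED:
            bad.append((src, dst, pair))
    return bad

flows = [("10.1.0.5", "10.9.0.2"),   # IT -> DMZ: allowed
         ("10.1.0.5", "10.20.0.7")]  # IT -> OT directly: violation
print(violations(flows))
```

Running a check like this on a schedule, rather than on demand, is what turns segmentation from a one-time design decision into a continuously validated control.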


Measuring and monitoring OT success and the importance of a cybersecurity framework for context. 

One result of the ongoing advancement of technology is that almost anything within an OT environment can be measured.

While there are multiple “cybersecurity frameworks,” the panel strongly agreed that it is important to leverage one to ensure a cohesive view of your environment. By doing so, organizations will be better informed about cybersecurity investments and resource allocation.

It also helps organizations prioritize and focus on the most critical cybersecurity threats and vulnerabilities.

The National Institute of Standards and Technology (NIST) cybersecurity framework was most commonly identified by practitioners on the panel.

Cybersecurity metric audiences and modes 

Different metrics serve very different audiences. Some are valuable for internal awareness and operational considerations; these are separate from the metrics and “KPIs” consumed externally, as part of “evidencing effectiveness northbound.”

There are also different contexts for measurements and monitoring:

  • Proactive metrics/monitoring: This includes maintaining operational hygiene and continuously assessing the state of proactive analytics systems. Why would a hacker want to get in? What is at risk, and why does it matter to the organization? 
  • Reactive metrics/monitoring: Incident detection, response, and resolution times are crucial reactive metrics. Organizations should also regularly assess the state of reactive analytics systems. 
  • Reflective analysis: After incidents occur, conducting incident post-mortems, including low-priority incidents, can help identify systemic gaps and process optimization opportunities. This reflective analysis is crucial for learning from past mistakes and improving security. 
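As one concrete example of the reactive metrics mentioned above, the sketch below computes mean time-to-detect and time-to-resolve from a handful of incident records. The field names, timestamps, and record shape are invented for illustration.

```python
# Hypothetical sketch: compute mean time-to-detect (MTTD) and mean
# time-to-resolve (MTTR) from incident records. All data is invented.
from datetime import datetime

incidents = [
    {"opened":   datetime(2023, 9, 1, 8, 0),
     "detected": datetime(2023, 9, 1, 9, 30),
     "resolved": datetime(2023, 9, 1, 14, 0)},
    {"opened":   datetime(2023, 9, 5, 10, 0),
     "detected": datetime(2023, 9, 5, 10, 30),
     "resolved": datetime(2023, 9, 5, 12, 0)},
]

def mean_hours(records, start, end):
    """Average elapsed hours between two timestamp fields."""
    deltas = [(r[end] - r[start]).total_seconds() / 3600 for r in records]
    return sum(deltas) / len(deltas)

mttd = mean_hours(incidents, "opened", "detected")    # time to detect
mttr = mean_hours(incidents, "detected", "resolved")  # time to resolve
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
```

Trending these numbers over time, rather than reporting them once, is what makes them useful both internally and “northbound.”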

 Organizational Considerations 

  1. Ownership of risk: Cybersecurity risk decisions should be owned by the people who are responsible and accountable for cybersecurity.
  2. Collaboration with IT: OT and IT can no longer operate in isolation. Building a strong working relationship between these two departments is crucial. Cybersecurity decisions should align with broader business goals, and IT and OT teams must collaborate effectively to ensure security.
  3. Employee training and awareness: Invest in ongoing employee training and awareness programs to ensure that every member of the organization understands their role in maintaining cybersecurity.

Establishing a world-class OT cybersecurity program for manufacturing companies is an evolving process that requires collaboration, automation, proactive measures, and continuous improvement. By focusing on visibility, collaboration, and a commitment to learning from incidents, organizations can build a strong foundation for cybersecurity in an increasingly interconnected world.

Contact RedSeal today to discuss your organizational needs and discover how RedSeal can provide unparalleled visibility into your OT / IT environments.

The Shifting Landscape of Cybersecurity: Top Considerations for CISOs

1. AI Is Changing the Game

The increasing use of generative AI tools such as ChatGPT comes with both defensive and offensive impacts. On the defensive side, companies can leverage these solutions to analyze security data in real time and provide recommendations for incident response, and security vendors’ developers can write code faster. As for the offensive impact, attackers may be able to optimize malware coding using these same AI tools, or leverage code released unknowingly by a security vendor’s developer. If malicious actors can hide compromising code in plain sight, AI solutions may not recognize the potential risk. And if hackers ask generative AI to circumvent network defenses using that code, the impact could be significant.

As a result, according to The Wall Street Journal & Forbes, JPMorgan Chase, Amazon, Bank of America, Citigroup, Deutsche Bank, Goldman Sachs and Wells Fargo are limiting employees’ ChatGPT use and we expect to see other companies follow.

2. Market Forces Are Shaping Security and Resilience

The looming economic recession is shaping corporate practices around security and resilience. While many IT teams will see their budgets unchanged or even increased in 2023 compared to 2022, security professionals should also expect greater oversight from C-suite executives, including chief information officers (CIOs), chief information security officers (CISOs), and chief financial officers (CFOs).

Both CIOs and CISOs will expect teams to justify their spending rather than simply handing them a blank check for purchasing, even if the budget is approved. CFOs, meanwhile, want to ensure that every dollar is accounted for and that security solutions are helping drive business return on investment.

Consider network and cloud mapping solutions that help companies understand what’s on their network, where it is, and how it’s all connected. From an information security perspective, these tools have value because they limit the frequency and severity of IT incidents. From a CFO perspective, their value lies in avoiding the costs of detection and remediation and the reputational fallout of compromised customer data, while acting as a force multiplier across multiple teams.

3. Multiple Vendor Architecture Is Everywhere

Firewall options from cloud vendors do not meet the enterprise’s security requirements. Enterprises are deploying traditional firewalls (e.g., Palo Alto Networks, Cisco, or Fortinet) in their clouds, and using cloud workload protection tools from vendors such as CrowdStrike or SentinelOne.

On-premises and cloud deployments cannot be treated as silos. An adversary could get in from anywhere and go anywhere, so the infrastructure has to be treated as one, with proper segmentation. Pure-play cloud companies are also moving to on-premises colocated data centers to rein in rising cloud costs.

4. Public Oversight Impacts Private Operations

The recently announced National Cybersecurity Strategy takes aim at current responsibilities and long-term investments. According to the Strategy, there must be a rebalancing of responsibilities to defend cyberspace that shifts away from individuals and small businesses and “onto the organizations that are the most capable and best-positioned to reduce risks for all of us.” The strategy also recommends that businesses balance short- and long-term security investments to provide sustained defense over time.

To help companies achieve these goals, the Cybersecurity and Infrastructure Security Agency (CISA) recently released version 1.0.1 of its cross-sector cybersecurity performance goals (CPGs). Many of these goals fall under the broader concept of “security hygiene,” basic tasks that all companies should complete regularly but that often slip through the cracks.

For example, CPG 2.F recommends that companies use network segmentation to limit the impact of Indicator of Compromise (IOC) events. CPG 1.A, meanwhile, suggests that companies inventory all IT and OT assets in use, tag them with unique identifiers, and update this list monthly.
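The CPG 1.A practice above can be sketched in a few lines: tag each IT/OT asset with a unique identifier and flag entries that have missed their monthly review. The asset names, dates, and record shape below are invented for illustration.

```python
# Hypothetical sketch of a CPG 1.A-style inventory: unique identifiers
# per asset, with a check for entries overdue for monthly review.
import uuid
from datetime import date

def new_asset(name: str, kind: str, last_reviewed: date) -> dict:
    """Create an inventory record with a unique tag."""
    return {"id": uuid.uuid4().hex, "name": name, "kind": kind,
            "last_reviewed": last_reviewed}

def overdue(inventory, today, max_age_days=30):
    """Names of assets whose review is older than the monthly cadence."""
    return [a["name"] for a in inventory
            if (today - a["last_reviewed"]).days > max_age_days]

inv = [new_asset("plc-line-3", "OT", date(2023, 7, 1)),
       new_asset("hmi-panel-1", "OT", date(2023, 9, 20)),
       new_asset("erp-app-01", "IT", date(2023, 9, 25))]
print(overdue(inv, today=date(2023, 10, 1)))
```

Even a simple check like this surfaces the “hygiene slipping through the cracks” problem the CPGs call out: stale records are found automatically instead of during an audit.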

While no formal announcements have been made, it’s possible that under the new strategy, CISA will shift from providing guidance to enforcing regulatory expectations. For example, the FDA may mandate that pharmaceutical companies demonstrate compliance with CISA’s CPGs.

5. IT and OT Meet in the Middle

RSA 2023 also touched on the continued merger of IT and OT environments. For many companies, this is a challenging shift. While IT solutions have been navigating the public/private divide for years, many OT frameworks are still not designed to handle this level of connectivity.

The result? A rapidly increasing attack surface that offers new pathways of compromise. Consider an industrial control system (ICS) or supervisory control and data acquisition (SCADA) system that was historically air-gapped but now connects to internal IT tools, which in turn connect to public cloud frameworks. If attackers are able to compromise the perimeter and move laterally across IT environments into OT networks, they will be able to encrypt or exfiltrate customers’ personal and financial data. Given the use of trusted credentials to access these systems, it could be weeks or months before companies notice the issue.

To mitigate the risks, businesses are looking for ways to segment IT from OT and to continuously validate that segmentation policies are met. This starts with the discovery and classification of OT devices, along with the development of standards-based security policies for both IT and OT functions. These two networks serve different aims, and any risk of lateral movement between them must be eliminated.

Old, New, and Everything in Between

OT threats are on the horizon, companies need to prioritize basic security hygiene, and economic downturns are impacting IT budgets. These familiar frustrations, however, are met by the evolution of AI tools and the development of new national strategies to combat emerging cyber threats. As we look towards the second half of the year, the lessons learned can help companies better protect what they have and prepare for the next generation of cybersecurity threats. Take on the new cybersecurity landscape with RedSeal. Reach out to see how we can help you. 

Purdue 2.0: Exploring a New Model for IT/OT Management

Developed in 1992 by Theodore J. Williams and the Purdue University Consortium, the Purdue diagram — itself a part of the Purdue Enterprise Reference Architecture (PERA) — was one of the first models used to map data flows in computer-integrated manufacturing (CIM).

By defining six layers that contain both information technology (IT) and operational technology (OT), along with a demilitarized zone (DMZ) separating them, the Purdue diagram made it easier for companies to understand the relationship between IT and OT technologies and establish effective access controls to limit total risk.
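As a quick reference, the levels and the DMZ can be sketched as a simple lookup table. The short descriptions below are the commonly cited characterizations of each level, not text quoted from PERA itself.

```python
# The Purdue reference layers as a lookup table, with the IT/OT DMZ
# separating the enterprise levels from the operations levels.
# Descriptions are the commonly cited ones, paraphrased for brevity.
PURDUE_LEVELS = {
    0: "Physical process (sensors, actuators)",
    1: "Basic control (PLCs, RTUs)",
    2: "Area supervisory control (HMIs, SCADA)",
    3: "Site operations (historians, MES)",
    "DMZ": "IT/OT demilitarized zone",
    4: "Business logistics (ERP, email)",
    5: "Enterprise network",
}

for level, role in PURDUE_LEVELS.items():
    print(level, "-", role)
```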

As OT technologies have evolved to include network-enabled functions and outward-facing connections, however, it’s time for companies to prioritize a Purdue update that puts security front and center.

The Problem with Purdue 1.0

A recent Forbes piece put it simply: “The Purdue model is dead. Long live, Purdue.”

This paradox is plausible, thanks to the ongoing applicability of Purdue models. Even if they don’t quite match the reality of IT and OT deployments, they provide a reliable point of reference for both IT and OT teams.

The problem with Purdue 1.0 stems from its approach to OT as devices that have MAC addresses but no IP addresses. Consider programmable logic controllers (PLCs), which traditionally appear at Layer 2 of a Purdue diagram with MAC addresses only. The need for comprehensive visibility across OT and IT networks, however, has led to increased IP address assignment to PLCs, in turn making them network endpoints rather than discrete devices.

There’s also an ongoing disconnect between IT and OT approaches. Where IT teams have spent years looking for ways to bolster both internal and external network security, traditional OT engineers often see security as an IT-only problem. The result is IP address assignment to devices but no follow-up on who can access the devices and for what purpose. In practice, this limits OT infrastructure visibility while creating increased risk and security concerns, especially as companies are transitioning more OT management and monitoring to the cloud.

Adopting a New Approach to Purdue

As noted above, the Purdue diagram isn’t dead, but it does need an update. Standards such as ISA/IEC 62443 offer a solid starting point for computer-integrated manufacturing frameworks, with a risk-based approach that assumes any device can pose a critical security risk and that all classes of devices across all levels must be both monitored and protected. Finally, it takes the position that communication between devices and across layers is necessary for companies to ensure CIM performance.

This requires a new approach to the Purdue model that removes the distinction between IT and OT devices. Instead of viewing these devices as separate entities on a larger network, companies need to recognize that the addition of IP addresses in Layer 2 and even Layer 1 devices creates a situation where all devices are equally capable of creating network compromise or operational disruption.

In practice, the first step of Purdue 2.0 is complete network mapping and inventory. This means discovering all devices across all layers, whether they have a MAC address, IP address, or both. This is especially critical for OT devices because, unlike their IT counterparts, they rarely change. In some companies, ICS and SCADA systems have been in place for 5, 10, even 20 years or more, while IT devices are regularly replaced. As a result, once OT inventory is completed, minimal change is necessary. Without this inventory, however, businesses are flying blind.
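One way to picture that first step is merging a Layer-2 (MAC-only) OT inventory with a Layer-3 (IP) scan, so that devices seen by either method appear exactly once. All addresses and device names below are invented for illustration.

```python
# Hypothetical sketch: merge a MAC-only OT inventory with an IP scan
# into one record per device. Addresses and names are invented.

l2_seen = {"00:1a:2b:3c:4d:01": "plc-press-1",
           "00:1a:2b:3c:4d:02": "plc-press-2"}
l3_seen = {"00:1a:2b:3c:4d:02": "10.20.0.12",   # a PLC that was given an IP
           "00:1a:2b:3c:4d:99": "10.20.0.40"}   # found only by the IP scan

def merge_inventory(l2, l3):
    """One record per MAC, with IP and name where known."""
    macs = set(l2) | set(l3)
    return {m: {"name": l2.get(m, "unknown"), "ip": l3.get(m)}
            for m in sorted(macs)}

for mac, rec in merge_inventory(l2_seen, l3_seen).items():
    print(mac, rec)
```

The gaps the merge exposes are the interesting part: a device with a name but no IP is still purely Layer 2, while a device with an IP but no name is exactly the unknown endpoint an inventory exists to catch.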

Inventory assessment also offers the benefit of in-depth metric monitoring and management. By understanding how OT devices are performing and how this integrates into IT efforts, companies can streamline current processes to improve overall efficiency.

Purdue Diagram


Controlling for Potential Compromise

The core concept of evolving IT/OT systems is interconnectivity. Gone are the days when Level 1 and 2 devices were capable only of internal interactions while those on Levels 3, 4, and 5 connected with networks at large. Bolstered by the adoption of the Industrial Internet of Things (IIoT), continuous connectivity is par for the course.

The challenge? More devices create an expanding attack surface. If attackers can compromise databases or applications, they may be able to move vertically down network levels to attack connected OT devices. Even more worrisome is the fact that since these OT devices have historically been one step removed from internet-facing networks, businesses may not have the tools, technology, or manpower necessary to detect potential vulnerabilities that could pave the way for attacks.

It’s worth noting that these OT vulnerabilities aren’t new — they’ve always existed but were often ignored under the pretense of isolation. Given the lack of outside-facing network access, they often posed minimal risk, but as IIoT becomes standard practice, these vulnerabilities pose very real threats.

And these threats can have far-reaching consequences. Consider two cases: an IT attack and an OT compromise. If IT systems are down, staff can be sent home or assigned other tasks while problems are identified and remediated, but production remains on pace. If OT systems fail, meanwhile, manufacturing operations come to a standstill. Lacking visibility into OT inventories makes it harder for teams to discover where the compromise occurred and determine the best way to remediate it.

As a result, controlling for compromise is the second step of Purdue 2.0. RedSeal makes it possible to see what you’re missing. By pulling in data from hundreds of connected tools and sensors and then importing this data into scan engines — such as Tenable — RedSeal can both identify vulnerabilities and provide context for these weak points. Equipped with data about devices themselves, including manufacturing and vendor information, along with metrics that reflect current performance and behavior, companies are better able to discover vulnerabilities and close critical gaps before attackers can exploit OT operations.

Put simply? Companies can’t defend what they can’t see. This means that while the Purdue diagram remains a critical component of CIM success, after 30 years in business, it needs an update. RedSeal can help companies bring OT functions in line with IT frameworks by discovering all devices on the network, pinpointing potential vulnerabilities, and identifying ways to improve OT security.

Calling in the security experts – your network engineers

I’ve talked about the need to consider your network as the key to improving cyber defenses. Here’s why.

Today’s attacks are “system-level”, supplanting specific server or host exploitations.  Cybercriminals today develop sophisticated attack strategies by:

  1. Finding PATHWAYS INTO the network through phishing emails, third parties, or other creative ways.
  2. MOVING MALWARE AROUND the network while masquerading as legitimate traffic.
  3. Identifying legitimate PATHWAYS OUT.
  4. Exfiltrating company assets through these pathways.
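The four steps above can be sketched as a graph search over allowed access: model the network as a directed graph and ask whether a path exists from an entry point to an egress point. The node names and edges below are invented for illustration.

```python
# Hypothetical sketch: model allowed access as a directed graph and
# search for an attack pathway. Node names and edges are invented.
from collections import deque

edges = {
    "internet":     ["mail-gateway"],
    "mail-gateway": ["workstation"],            # phishing lands here
    "workstation":  ["file-server", "proxy-out"],
    "file-server":  ["proxy-out"],
    "proxy-out":    ["internet"],               # legitimate pathway out
}

def find_path(graph, start, goal):
    """Breadth-first search; returns one shortest path or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], []):
            if nxt == goal:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Can an attacker who lands on a workstation reach an egress point?
print(find_path(edges, "workstation", "internet"))
```

Every edge in a graph like this is a routing or firewall decision someone on the network team made, which is exactly why they are the right people to reason about these pathways.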

Notice this is all about TRAFFIC and PATHWAYS. And who knows the most about these? Your network team.

They know your network and why it is built the way it is. What is their priority? Performance and uptime. They have a wealth of tools that already help them manage these priorities. So if a security solution gave them additional knowledge about their network that helped manage performance and uptime, they would likely embrace and use it. Although they now work with firewalls and other security devices by necessity, they still focus on performance. They’ve segmented the network for management and performance reasons, but are now expected to segment further for security.

And they care about one other thing: Access. Access to data and applications by their end users.

Access? Pathways? This is EXACTLY what attackers are exploiting.

So your best bet to combat cybercrime? Bring in the experts who know about access in your network, and leverage their knowledge and experience.

Securing Your Network, or Networking for Security?

Every day we hear about another breach, and most of the time the information we get is fairly consistent: the breach started and finished long before it was discovered. It’s not always clear exactly how or where the attackers got access, because they’ve had ample time to cover their tracks. Whatever log or history data we have is massive, and sifting through it to learn anything about the attack is difficult and time-consuming. We don’t quite know what we’re looking for, and much of the evidence has come and gone.

As I survey the cybersecurity market and media coverage, I notice that:

  1. We’ve thrown in the towel: it’s “not if, but when” you’ll be breached.
  2. Many security vendors now talk about analytics, dashboards, and big data instead of prevention.

Notably absent is the acknowledgement that the attack did not happen at a single point or computer, and that the actual theft of data was possible because the data looked like legitimate network traffic using allowed routes through and out of the network.

We hear a lot about not having enough “security expertise.” Is that really the problem? Or is it that the security experts don’t really understand the full complexity of their networks? The network experts do. These attacks happen via network traffic – not on a device, nor with a known signature. And what do networking professionals care about? Traffic, and how it’s flowing. I maintain that there’s a lot more expertise available for breach analysis and prevention than we think – we’re just not asking the right people.

In subsequent posts I’ll talk about why the networking team is becoming vital to security efforts, and why understanding how a network is constructed and performs is the best chance we have of improving our defenses.

Somewhere Over the Spreadsheet

Two years ago I was standing in front of a group of security geeks in Santa Barbara for BSides LA talking about the sophisticated tools that most network engineers use — like “ping” and “traceroute” and even Excel — and about how the broad range of tools available typically went unused in the primordial jungle of our enterprise networks. Recently, Wired concurred, outlining the widespread use of spreadsheets for a broad range of business functions.

It is embarrassingly common for us to find the majority of network management information in spreadsheets. Lists of devices, lists of firewall rules, hierarchies of networks. All laid out in nicely formatted tabs within multiple spreadsheet workbooks, often stored in SharePoint or Google Docs. But always devoid of context and the real meaning of the elements.

This isn’t to say that there isn’t a place for spreadsheets, of course. But I would challenge you to think through how you are using them, and whether they give you the information to know, rather than merely believe, what your network is really doing.

For example, a couple years ago I was visiting a major retailer as they were working through their PCI audit. They presented the auditor with an annotated spreadsheet containing all of the firewall rules within their infrastructure. The auditor, for his part, recognized that evaluating firewall rules out of context masks the reality of the way a network operates, and asked to review the PCI zones using RedSeal. The insights for the organization and the auditor were rapid and clear, and the organization was able to take steps to improve their overall security as a result.

So, although spreadsheets are valuable for building lists of the “stuff” that makes up your environment, they are no substitute for automation that can show you and tell you what you don’t know you don’t know. What do you keep in spreadsheets? What do you wish your spreadsheets could tell you? What’s the strangest experience you’ve had with spreadsheets?