CyberScoop Radio | June 17, 2019
By Kes Jecius, RedSeal Senior Consulting Engineer
The Center for Internet Security’s (CIS) third control for implementing a cybersecurity program is to practice continuous vulnerability management. Organizations that identify and remediate vulnerabilities on an ongoing basis significantly reduce the window of opportunity for attackers. This third control assumes you’ve implemented the first two CIS framework controls — understanding both the hardware that makes up your infrastructure and the software that runs on that infrastructure.
The first two controls are important to your vulnerability management program. When you know what hardware assets you have, you can validate that you’re scanning all of them for vulnerabilities. As you update your IT inventory, you can include new assets in the scanning cycle and remove assets that no longer need to be scanned. And, when you know what software runs on your infrastructure, you can understand which assets are more important. An asset’s importance is key to identifying what should be remediated first.
Most vulnerability scanning platforms allow you to rank the importance of systems being scanned. They prioritize vulnerabilities by taking the CVSS (Common Vulnerability Scoring System) score for each vulnerability on an asset and coupling it with the asset’s importance to develop a risk score.
The dimension missing from this risk scoring process is whether attackers can actually reach the asset to compromise it. Even as you remediate vulnerabilities, you can remain exposed if the ones you’re fixing aren’t the ones an attacker can reach — a vulnerable asset may already be protected by firewalls and other network security measures, while an exposed one goes unpatched. Knowledge of the network security controls already deployed lets a vulnerability management program sharpen its prioritization, focusing on high-value assets with exposed vulnerabilities that can be reached from an attacker’s location.
Other vulnerability scanning and risk rating platforms use threat management data to augment their vulnerability risk scoring process. While threat management data (exploits actively in use across the world) adds value, it doesn’t incorporate the network accessibility dimension into evaluating that risk.
As you work to improve your vulnerability management program, it’s best to use all the information available to focus remediation efforts. Beyond CVSS scores, the following elements can improve most programs:
- Information from network teams on new and removed subnets (IP address spaces) to make sure that all areas of the infrastructure are being scanned.
- Information from systems teams on which systems are most important to your organization.
- Including network information in the risk scoring process to determine if these systems are open to compromise.
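Putting those three elements together, the prioritization logic can be sketched in a few lines of Python. This is a minimal illustration, not a RedSeal or CVSS formula: the asset names, importance weights, reachability flags, and the down-weighting factor for unreachable assets are all hypothetical choices.

```python
# Sketch: rank vulnerabilities by CVSS score, asset importance, and
# network reachability. All values below are hypothetical examples.

def risk_score(cvss, importance, reachable):
    """Weight a CVSS base score by asset importance; sharply discount
    findings that no attack path can currently reach (factor is arbitrary)."""
    return cvss * importance * (1.0 if reachable else 0.1)

findings = [
    {"asset": "payroll-db", "cvss": 7.5, "importance": 3, "reachable": True},
    {"asset": "lab-server", "cvss": 9.8, "importance": 1, "reachable": False},
    {"asset": "web-portal", "cvss": 6.1, "importance": 2, "reachable": True},
]

# Remediate the highest combined risk first. Note that the highest raw
# CVSS score (lab-server, 9.8) ranks last once reachability is considered.
ranked = sorted(
    findings,
    key=lambda f: risk_score(f["cvss"], f["importance"], f["reachable"]),
    reverse=True,
)
for f in ranked:
    score = risk_score(f["cvss"], f["importance"], f["reachable"])
    print(f'{f["asset"]}: {score:.2f}')
```

The point of the sketch is the ordering: a critical-severity vulnerability on an unreachable lab box drops below a medium-severity one on an exposed, important system.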
Although no single product can be the solution for implementing and managing all CIS controls, look for products that provide value in more than one area and integrate with your other security solutions. RedSeal, for example, is a foundational solution that provides significant value for meeting your vulnerability management goals by providing network context to existing vulnerability scanning information. Additionally, RedSeal provides pre-built integrations with many security products and easy integration with others via its REST API interface.
Download the RedSeal CIS Controls Solution Brief to find out more about how RedSeal can help you implement your program using the CIS Controls.
By Kes Jecius, RedSeal Senior Consulting Engineer
The Center for Internet Security’s (CIS) first control for implementing a cybersecurity program is to understand and manage the hardware assets that make up your IT infrastructure. These hardware assets consist of network devices, servers, workstations, and other computing platforms. This is a difficult goal to achieve, further complicated by the increasing use of virtualized assets, such as public and/or private cloud, Software as a Service (SaaS), and virtualized servers.
In the past, inventorying these assets was relatively simple. When it came in the door, the physical device was given an inventory tag and entered into an asset management system. The asset management system was controlled by the finance group, primarily so assets could be depreciated for accounting records. As the IT world matured, we saw the advent of virtualized systems where a single box could be partitioned into multiple systems or devices. Further evolution in IT technology brought us cloud-based technologies, where a company no longer has a physical box to inventory. Network services are configured and servers are created dynamically. Hence the daunting task of trying to create and manage the IT inventory of any company.
CIS recognizes this and recommends using both active and passive discovery tools to assist. Since no human can keep up with this inventory of physical and virtual devices, discovery tools can help present an accurate picture of IT assets.
Active discovery tools leverage network infrastructure to identify devices through some form of communication with the device. Network teams are generally opposed to these tools because they introduce extra network traffic. Tools that attempt to “ping” every possible IP address are not efficient. They are also flagged as potential security risks, since sweeping the address space is the same behavior attackers generally use. Newer discovery strategies have evolved that are significantly more network friendly yet still do a good job identifying the devices in your IT infrastructure. These newer, active discovery strategies target specific network IP addresses to gather information about a single device. When that information is processed, it can reveal information about other devices in the network.
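The difference between a blind sweep and a targeted probe can be sketched as follows. This is a minimal illustration, not a production discovery tool: the candidate addresses come from the reserved documentation IP range (192.0.2.0/24) and the ports are placeholders.

```python
# Sketch of targeted active discovery: probe a short, known list of
# candidate IP/port pairs rather than sweeping every possible address.
# Candidate addresses and ports below are hypothetical placeholders.
import socket

def probe(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Documentation-range IPs (RFC 5737); in practice this list would come
# from your IT inventory or router tables.
candidates = [("192.0.2.1", 22), ("192.0.2.10", 443)]

for host, port in candidates:
    status = "responds" if probe(host, port) else "no answer"
    print(f"{host}:{port} {status}")
```

A sweep touches tens of thousands of addresses; a targeted probe like this touches only the handful your inventory says should exist, which is what makes the newer strategies friendlier to the network.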
Passive discovery tools are placed on the network to listen and parse traffic to identify all devices. Passive discovery tools do not add significantly to network traffic, but they need to be placed correctly to capture data. Some computing devices may never be identified because they are infrequently used, or their traffic never passes by a passive discovery tool. Newer passive discovery tools can integrate information with active discovery tools.
Most organizations need a combination of discovery tools. Active discovery tools should minimize their impact to the network and the devices they communicate with. Passive discovery tools can discover unknown devices. IT groups can do a gap analysis between the two tools to assess what is under management and what isn’t (frequently referred to as Shadow IT). This combined approach will provide the best strategy for understanding and managing all assets that make up an IT infrastructure.
Without this first step, having visibility into what these IT assets are and how they are connected, the remaining CIS controls can only be partially effective in maturing your cybersecurity strategy.
Although no single product can be the solution for implementing and managing all CIS controls, look for products that provide value in more than one area and integrate with your other security solutions. RedSeal, for example, is a foundational solution that provides significant value for meeting the first control, while providing benefit to implementing many of the other controls that make up the CIS Control framework. Additionally, RedSeal provides pre-built integrations with many security products and easy integration with others via its REST API interface.
By Kes Jecius, Senior Consulting Engineer
I have the privilege of working with security groups at many different enterprise companies. Each of them is being bombarded by many different vendors who offer security solutions. No surprise, the common estimate is that there are approximately 2,000 vendors offering different products and services to these companies.
Each of these companies struggles with determining how to implement an effective cybersecurity program. This is made more difficult by vendors’ differing views on what is most important. On top of this, companies are dealing with internal and external requirements, such as PCI, SOX, HIPAA and GDPR.
The Center for Internet Security (www.cisecurity.org) offers a potential solution in the form of a framework for implementing an effective cybersecurity program. CIS defines 20 controls that organizations should implement when establishing a cybersecurity program. These controls fall into three categories:
- Basic – Six basic controls that every organization should address first. Implementing solutions in these six areas forms the foundation of every cybersecurity program.
- Foundational – Ten additional controls that build upon the basics. Think of these as secondary initiatives once your organization has established a good foundation.
- Organizational – Four additional controls that address organizational processes around your cybersecurity program.
Most organizations have implemented elements from some controls in the form of point security products. But many don’t recognize the importance of implementing the basic controls before moving on to the foundational controls – and their cybersecurity programs suffer. By organizing your efforts using CIS’s framework, you can significantly improve your company’s cyber defenses, while making intelligent decisions on the next area for review and improvement.
Although no single product can be the solution for implementing and managing all CIS controls, look for products that provide value in more than one area and integrate with your other security solutions. RedSeal, for example, is a platform solution that provides significant value in 7 of the 20 control areas and supporting benefit for an additional 10 controls. Additionally, RedSeal has pre-built integrations with many security products and easy integration with others via its REST API interface.
Some people are surprised that Heartbleed is still out there, 3 years on, as you can read here. What this illustrates is two important truths of security, depending on whether you see the glass half full or half empty.
One perspective is that, once again, we know what to do, but failed to do it. Heartbleed is well understood, and directly patchable. Why haven’t we eradicated this by now? The problem is that the Internet is big. Calling the Internet an “organization” would be a stretch – it’s larger, more diverse, and harder to control than any one organization. But if you’ve tried to manage vulnerabilities at any normal organization – even a global-scale one – you have a pretty good idea how hard it gets to eliminate any one thing.
It’s like Zeno’s Paradox – when you try to eradicate any one problem you choose, you can fix half the instances in a short period of time. The trouble is that it takes about that long again to fix half of what remains, and that amount again for the half after that. Once you’ve dealt with the easy stuff – well-known machines, with well-documented purpose, and a friendly owner in IT – it starts to get hard fast, for an array of reasons from the political to the technical. You can reduce the prevalence of a problem really quickly, but to eradicate it takes near-infinite time.
And the problem, of course, is that attackers will find whatever you miss – they can use automation to track down every defect. (That’s how researchers found there is still a lot of Heartbleed out there.) Any one instance you miss might open up access to far more important parts of your organization. It’s a chilling prospect, and it’s fundamental to the unfair fight in security – attackers only need one way in, defenders need to cover all possible paths.
To flip to the positive perspective, perhaps the remaining Heartbleed instances are not important – that is, it’s possible that we prioritized well as a community, and only left the unimportant instances dangling for all this time. I know first-hand that major banks and critical infrastructure companies scrambled to stamp out Heartbleed from their critical servers as fast as they could – it was impressive. So perhaps we fixed the most important gaps first, and left until later any assets that are too hard to reach, or better yet, have no useful access to anything else after they are compromised. This would be great if it were true. The question is, how would we know?
The answer is obvious – we’d need to assess each instance, in context, to understand which instances must get fixed, and which can be deferred until later, or perhaps until after we move on to the next fire drill, and the fire drill after that. The security game is a never-ending arms race, and so we always have to be responsive and dynamic as the rules of the game change. So how would we ever know if the stuff we deferred from last quarter’s crises is more important or less important than this quarter’s? Only automated prioritization of all your defensive gaps can tell you.
There’s a lot going on in virtual data centers. In security, we’re hearing many variations of the term “micro-segmentation.” (It originated from VMWare, but has been adopted by other players, some of them adding top-spin or over-spin.)
We know what segmentation is. Every enterprise network practices segmentation between outside and inside, at least. Most aim to have a degree of internal segmentation, but I see a lot more planning than doing — unless an audit is on the line. Many networks have a degree of segmentation around the assets that auditors pay attention to, such as patient records and credit cards. There are organizations further up the security sophistication curve who have a solid zone-based division of their business, can articulate what each zone does and what should go on between them, and have a degree – at least some degree – of enforcement of inter-zone access. But these tend to be large, complex companies, so each zone tends to be quite large. It’s simple math – if you try to track N zones, you have to think about N² different relationships. That number goes up fast. Even well-staffed teams struggle to keep up with just a dozen major zones in a single network. That may not sound like a lot, but the typical access open between any two zones can easily exceed half a million communicating pairs. Auditing even one of those in full depth is a super-human feat.
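The scaling math is easy to verify. With N zones there are on the order of N² zone-to-zone relationships — N(N−1) if you count each direction separately and exclude a zone’s relationship with itself:

```python
# Count ordered zone-to-zone relationships for N zones,
# excluding a zone's relationship with itself.
def relationships(n):
    return n * (n - 1)

for n in (4, 12, 30):
    print(f"{n} zones -> {relationships(n)} relationships")
```

A dozen zones already means 132 distinct directional relationships to reason about; double or triple the zone count and the review burden grows quadratically, which is exactly why finer-grained segmentation strains human teams.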
Now along comes the two horses pulling today’s IT chariot: the virtual data center and the software defined network. These offer more segmentation, with finer control, all the way down to the workload (or even lower, depending on which marketing teams you believe). This sounds great – who wouldn’t want super-fine controls? Nobody believes the perimeter-only model is working out any more, so more control must be better, right? But in practice, if you just throw this technology onto the existing stack without a plan for scaling, it’s not going to work out.
If you start with a hard-to-manage, complex management challenge, and you respond by breaking it into ever smaller pieces, spread out in more places, you can rapidly end up like Mickey Mouse in The Sorcerer’s Apprentice, madly splitting brooms until he’s overrun.
Is it hopeless? Certainly not. The issue is scale. More segmentation, in faster-moving infrastructure, takes a problem that was already tough for human teams and makes it harder. But this happens to be the kind of problem that computers are very good at. The trick is to realize that you need to separate the objective – what you want to allow in your network – from the implementation, whether that’s a legacy firewall or a fancy new GUI for managing policy for virtual workloads. (In the real world, that’s not an either/or – it’s a both, since you have to coordinate your virtual workload protections with your wider network, which stubbornly refuses to go away just because it’s not software defined.)
That is, if you can describe what you want your network to do, you can get a big win. Just separate your goals from the specific implementation – record the intention in general terms, for example, in the zone-to-zone relationships of the units of your business. Then you can use automation software to check that this is actually what the network is set up to do. Computers don’t get tired – they just don’t know enough about your business or your adversaries to write the rules for you. (I wouldn’t trust software to figure out how an organism like a business works, and I certainly wouldn’t expect it to out-fox an adversary. If we can’t even make software to beat a Turing Test, how could an algorithm understand social engineering – still a mainstay of modern malware?)
So I’m not saying micro-segmentation is a bad thing. That’s a bit like asking whether water is a bad thing – used correctly, it’s great, but it’s important not to drown. Here, learning to swim isn’t about the latest silver bullet feature of a competitive security offering – it’s about figuring out how all your infrastructure works together, and whether it’s giving the business what’s needed without exposing too much attack surface.
Most people think about network infrastructure about as much as they think about plumbing – which is to say, not at all, until something really unfortunate happens. That’s what puts the “infra” in the infrastructure – we want it out of sight, out of mind, and ideally mostly below ground. We pay more attention to our computing machinery, because we use them directly to do business, to be sociable, or for entertainment. All of these uses depend critically on the network, but that doesn’t mean most of us want to think about the network, itself.
That’s why SEC Consult’s research into exploitable routers probably won’t get the attention it deserves. That’s a pity – it’s a rich and worthwhile piece of work. It’s also the shape of things to come, as we move into the Internet of Things. (I had a great conversation a little while ago with some fire suppression engineers who are increasingly aware of cyber issues – we were amused by the concept of The Internet of Things That Are on Fire.)
In a nutshell, the good folks at SEC Consult searched the Internet for objects with a particular kind of broken cryptography – specifically, with known private keys. This is equivalent to having nice, shiny locks visible on all your doors, but all of them lacking deadbolts. It sure looks like you’re secure, but there’s nothing stopping someone simply opening the doors up. (At a minimum, the flaw they looked for makes it really easy to snoop on encrypted traffic, but depending on context, can also allow masquerading and logging in to control the device.)
And what did they find when they twisted doorknobs? Well, if you’ve read this far, you won’t be surprised that they uncovered several million objects with easily decrypted cryptography. Interestingly, they were primarily those infrastructure devices we prefer to forget about. Coincidence? Probably not. The more we ignore devices, the messier they tend to get. That’s one of the scarier points about the Internet of Things – once we have millions or billions of online objects, who will take care of patching them? (Can they be updated? Is the manufacturer responsible? What if the manufacturer has gone out of business?)
But what really puts the icing onto the SEC Consult cake is that they tried hard to report, advertise, and publicize everything they found in late 2015. They pushed vendors; they worked with CERT teams; they made noise. All of this, of course, was an attempt to get things to improve. And what did they find when they went back to scan again? A 40% increase in devices with broken crypto! (To put the cherry onto that icing, the most common device type they reported before has indeed tended to disappear. Like cockroaches, if you kill just one, you’re likely to find more when you look again.)
So what are we to conclude? We may wish our infrastructure could be started up and forgotten, but it can’t be. It’s weak, it’s got mistakes in it, and we are continuously finding new vulnerabilities. One key take-away about these router vulnerabilities: we should never expose management interfaces. That sounds too trivial to even mention – who would knowingly do such a thing? But people unknowingly do it, and only find out when the fan gets hit. When researchers look (and it gets ever easier to automate an Internet-wide search), they find millions of items that violate even basic, well-understood practices. How can you tell if your infrastructure has these mistakes? I’m not saying a typical enterprise network is all built out of low-end routers with broken crypto on them. But the lessons from this research very much apply to networks of all sizes. If you don’t harden and control access to your infrastructure, your infrastructure can fail (or be made to fail), and that’s not just smelly – it’s a direct loss of digital resilience. And that’s something we can’t abide.
Last month, Wallace Sann, the Public Sector CTO for ForeScout, and I sat down to chat about the current state of cybersecurity in the federal government. With ForeScout, government security teams can see devices as they join the network, control them, and orchestrate system-wide responses.
Many of our customers deploy both RedSeal and ForeScout side by side. I wanted to take a look at how government security teams were dealing with ongoing threats and the need to integrate different cybersecurity tools into the “cyber stack.”
Our conversation has been lightly edited for clarity.
Wayne: Describe the challenges that ForeScout solves for customers.
Wallace: We help IT organizations identify IT resources and ensure their security posture. There’s always an “ah-ha moment” that occurs during a proof of concept. We see customers who swear by STIG, and will say they only have two versions of Adobe. We’ll show them that there are 6-7 versions running. We tell you what’s on the network and classify it.
Wayne: We often say that RedSeal is analogous to a battlefield map where you have various pieces of data coming in to update the topography map with the current situation. By placing the data into the context of the topography, you can understand where reinforcements are needed, where your critical assets are and more.
RedSeal’s map gives you this contextual information for your entire enterprise network. ForeScout makes the map more accurate, adapting to change in real time. It lets you identify assets in real time and can provide some context around device status at a more granular or tactical level.
Wallace: Many companies I speak to can create policies on the fly, but ensuring that networks and endpoints are deployed properly and that policies can be enforced is a challenge.
Wayne: Without a doubt. We were teaching a class for a group of IT professionals, telling them that RedSeal can identify routes around firewalls. If the networking team puts a route around it, even the most effective firewall won’t work. The class laughed. They had intentionally routed around firewalls, because performance was too slow.
Endpoint compliance typically poses a huge challenge too. RedSeal can tell you what access a device has, but not necessarily when it comes online. Obviously, that’s one of the reasons we’re partnering with ForeScout.
Wallace: ForeScout can provide visibility that the device is online and also provide some context around the endpoint. Perhaps RedSeal has a condition requiring that DLP is running on the endpoint. ForeScout could tell you that DLP is not loaded, and therefore no access is allowed.
Wayne: Inventory what’s there. Make sure it’s managed. If it’s not managed, you may not know you were attacked, where attackers came in, or where they went. If you have that inventory, you can prevent attacks, or at least respond more quickly.
Another important component is assessing risk and knowing what is important to protect. Let’s say we have two hosts of equal value. If Host 1 is compromised, you can’t leapfrog any further. No other systems will be impacted. If Host 2 is compromised, 500 devices can be compromised including two that may have command and control over payroll or some critical systems. Where do you want to put added security and visibility? On the hot spots that open you up to the most risk! We put things into network context and enable companies to be digitally resilient.
Wallace: With so many security concerns to address, prioritization is critical.
Wayne: IoT is obviously a trend that everyone is talking about and is becoming an increasing concern for agency IT Security orgs. How is ForeScout addressing IoT?
Wallace: ForeScout provides visibility, classification and assessment. If it has an IP address, we can detect it. Classification is where we are getting better. We want to be able to tell you what that device is. Is it a security camera? A printer? A thermostat? We can classify most common devices with 75-90% accuracy today. The problem is that many new devices are coming out every day. Many you can’t probe traditionally; it could take the device down. And you can’t put an agent on it. So we’re using other techniques to passively fingerprint a device (via power over Ethernet, deep packet inspection, and more), so we can get to 95% accuracy.
Wayne: Do you see a lot of IoT at customer sites, and are they concerned?
Wallace: Some don’t realize they have an issue. Many don’t know that IoT devices are on their networks. We are seeing more cases where we are asked to assess IoT environments and address what we find. Before, we weren’t asked to take action. We used to be asked how many Windows and Mac devices there were. Now, there is a movement by government agencies to put anything with an IP address (the OT side) under the purview of the CISO.
Wayne: We see a lot of devices – enterprise and consumer – that aren’t coded securely. IoT devices should be isolated, not connected to your mission critical operating environment.
Wallace: I was curious how RedSeal handles IoT?
Wayne: If there is vulnerability scan data, it tells us the OS, the applications running, active ports, host name, MAC address, and so on. Without that data, we can grab some device data, but with ForeScout we can get more context and additional data about the device. ForeScout can tell you the devices are there. RedSeal can ensure each one is segmented the way it should be. We can tell you it’s there and how you can get to it; people then need to make decisions and act. We show IoT devices as a risk.
Wayne: What are some of the trends that you are seeing that need to be addressed at customer sites?
Wallace: From a native cloud perspective, we are working on extending the customer on-premise environment and bringing visibility and control to the cloud. We are also working on making it easier to get security products to work together. People don’t have the resources for integration and ongoing management. We’re working to orchestrate bi-directionally with various toolsets to provide actionable intelligence – advanced threat detection, vulnerability assessment, etc.
We can take intel from other vendors, and ForeScout gives us the who, what, when, where from an endpoint to determine if that device should be on a network.
For example, an ATD vendor can detect malware (find it in their sandbox). They will hand us an indicator of compromise (hash, code, etc.). We’ll look for those IoCs on devices on the network and then quarantine those devices.
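The matching step Wallace describes — comparing file hashes against known IoCs — can be sketched in a few lines. The IoC value and file contents below are stand-ins for illustration, not real indicators:

```python
# Sketch of matching file hashes against a list of indicators of
# compromise (IoCs). The sample IoC and file contents are hypothetical.
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a blob of file contents."""
    return hashlib.sha256(data).hexdigest()

# Placeholder IoC set, as if handed over by an ATD vendor.
ioc_hashes = {sha256_of(b"malicious-payload-example")}

files = {
    "/tmp/update.bin": b"malicious-payload-example",
    "/tmp/report.pdf": b"benign content",
}

for path, contents in files.items():
    if sha256_of(contents) in ioc_hashes:
        print(f"{path}: IoC match -> quarantine device")
    else:
        print(f"{path}: clean")
```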
Wayne: Security vendors need to work together. Customers don’t want to be tied to a single vendor. Thanks for your time today.
When disaster strikes, the Federal Emergency Management Agency (FEMA) enterprise network is expanded to include “temporary” mobile data centers that can last from months to years. In this kind of situation, change control, network maps and configurations can get wildly out of control. The security engineers in FEMA’s Security Operation Center (SOC) wanted network visibility. What’s more, they needed continuous monitoring to be able to measure risk and make decisions about how to deploy their scarce time and resources.
After learning more about RedSeal’s security analytics platform, FEMA’s cybersecurity lead realized that it could fill a major void in the agency’s solution set. RedSeal could help him understand the network, measure resilience, verify compliance, and accelerate response to security incidents and network vulnerabilities.
The FEMA SOC team deployed RedSeal to help manage their change control process — by modeling the data centers as they popped up in near real time. As data centers come online, they use RedSeal to ensure the right access is available. In the coming months, the team is expanding use of RedSeal to support their incident response program.
FEMA’s network team also uses RedSeal to visualize access from disaster sites. Initially, they were shocked by the level of network access sprawl. They had no idea how much gear was on the network at a disaster site or how many security consequences resulted from simple configuration changes.
Now, with RedSeal’s continuously-updated network model, the network team is able to identify everything on the network and rapidly address any configuration changes that cause security, performance, and network uptime issues.
Get a PDF of this article. FEMA: Modeling Network Access
Federal agencies are clamoring for information about best practices for implementing the findings of last year’s cybersecurity “sprint.” This new directive, the Cybersecurity Implementation Plan, is mandatory for all federal civilian government agencies. It addresses five issues intended to shore up agency cybersecurity and ensure network resiliency.
So when agencies are done with their implementation, all their networks and assets will be secure, right?
Most of the time the reality of your network and the official network diagram have little to do with each other. You may think it’s accurate…but it’s not.
Recently, I sat down with Jeremy Conway, Chief Technology Officer at RedSeal partner MAD Security, to talk about this. He works with hundreds of clients and sees this issue constantly. Here’s his perspective.
Wayne: Can you give me an example of a client that, because of bad configuration management, had ineffective security and compliance plans?
Jeremy: Sure I can. A few months back, MAD Security was asked to perform an assessment for an agency with terrible configuration management. With multiple data centers, multiple network topologies, both static and dynamic addressing, and multiple network team members who were supposed to report up the hierarchy, we quickly realized that the main problem was that they didn’t know their own topology. During our penetration test, we began compromising devices and reporting the findings in real time. The compromises were just way too simple and easy. The client disputed several of the results. After some investigation, we figured out that the client had reused private IP space identical to their production network for a staging lab network, something no one but a few engineers knew about. Since we were plugged into the only router that had routes for this staging network, we were compromising all sorts of unhardened and misconfigured devices. Interestingly enough, this staging network had access to the production network, since the ACLs were applied in the opposite direction — a whole other finding. To them and their configuration management solution, everything looked secure and compliant. But in reality, they had some major vulnerabilities in a network only a few folks knew about, vulnerabilities that could have been exploited to compromise the production network.
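The reused-address-space problem Jeremy describes is straightforward to check for once both inventories are in hand. A minimal sketch using Python’s ipaddress module, with hypothetical subnet values standing in for the production and staging address plans:

```python
# Sketch: flag reused/overlapping private IP space between two network
# inventories. The subnet values below are hypothetical examples.
import ipaddress

production = [
    ipaddress.ip_network("10.10.0.0/16"),
    ipaddress.ip_network("10.20.0.0/16"),
]
staging = [
    ipaddress.ip_network("10.10.0.0/16"),   # reused production space
    ipaddress.ip_network("192.168.5.0/24"),
]

# Any pair that overlaps means a router with routes into both
# environments can't distinguish them reliably.
overlaps = [(p, s) for p in production for s in staging if p.overlaps(s)]
for p, s in overlaps:
    print(f"overlap: production {p} vs staging {s}")
```

A check like this catches the exact failure mode from the assessment: identical private space in a staging lab that only a few engineers knew existed.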
The client was making a common mistake — looking at their network situation only from an outside-in perspective, instead of also looking at it from the inside out. They didn’t have enough awareness of what was actually on their network and how it was accessed.
Wayne: That’s a powerful example. How about a situation where an agency’s use of software-defined or virtual infrastructure undermined their access control?
Jeremy: One hundred percent software defined networks are still rare in our world. However, we had a situation where virtual environments were spun up by the apps team, not the network team, which caused all sorts of issues. Since the two teams weren’t communicating well, the network team referenced network diagrams and assumed compliance. In reality, the apps team had set up the virtual environment with virtual switches that allowed unauthorized access to PCI data. Running a network mapping exercise with RedSeal would have identified the issue.
Wayne: I imagine that inaccurate network diagrams cause major issues when incident response teams realize that there hasn’t been any auto discovery and mapping of the network.
Jeremy: Yes, this is a must-have feature, in my opinion. When responding to an incident, you have to perform the network-to-host translations manually. Tracking down a single host behind multiple network segments with nothing but a public IP address can take a long time. In a recent incident with multiple site locations this took the client’s network team two working days — which really doesn’t help when you’re in an emergency incident response situation.
RedSeal makes it easy to find which host has been compromised and which path an intruder has taken almost instantaneously.
Moreover, conducting a security architecture review is much quicker and more comprehensive with RedSeal. This used to be a manual process for our team that typically took 2-4 weeks for the average client. RedSeal has cut that time in half for us. Additionally, with RedSeal the business case for action is stronger and the result is a better overall remediation strategy. How? For one, given an accurate map of the network, HVAs can be prioritized and a triage process can be deployed that allows security teams to focus scarce time and resources on priority recommendations. This visibility into the severity of security issues also allows teams to develop mitigation strategies for patch issues.
Wayne: Jeremy, this has been a great discussion. I hope you’ll come back and do this again.