As cyber threats move faster and attack surfaces expand, resilience increasingly depends on how quickly agencies can see risk, make decisions, and take action. MeriTalk recently sat down with Egon Rinderer, senior vice president of global enterprise and public sector at NinjaOne, to discuss how agency cybersecurity strategies are changing to respond to this reality, how the priorities of the federal cyber strategy intersect, and why process change is essential to achieve unified visibility and full value from new technology.
MeriTalk: Federal cybersecurity priorities continue to evolve as agencies modernize systems and support more distributed, digital operations. What feels most different about the cyber landscape today compared to even a few years ago?
Rinderer: The speed of activity on both the defender and adversary sides is markedly different now than it was a few years ago. Agencies have access to far better tools than they did in the past, and improvements in acquisition and authorization processes are helping new technology get into government environments more quickly.
At the same time, the window between vulnerability discovery and adversary exploitation has narrowed from weeks to days, and in some cases to hours. Soon, the time to discover vulnerabilities will be nearly instantaneous. What used to be acceptable in terms of patching and response times is no longer aligned with the pace of the threat environment. So the challenge is not simply adopting better tools. It is adapting operations to a world where speed is now the norm.
MeriTalk: That brings to mind the new Mythos frontier model from Anthropic. News reports have said it can find and exploit vulnerabilities so quickly that the company doesn’t plan to release it publicly – at least not yet.
Rinderer: New capabilities can be scary, but they can be really powerful tools for the good guys. For example, now the developers building software will have the ability to find really obscure zero-days that used to take nation-state level resources to find. Getting a tool like Mythos into the hands of devs to get “left of bang,” so to speak, will result in dramatically improved outcomes. Use it to find and fix vulnerabilities up front so they’re not released in the first place.
MeriTalk: The March 2026 federal cyber strategy lays out six major priorities. Which of those do you think agencies will feel most immediately in their day-to-day operations?
Rinderer: Promoting common-sense regulation stands out to me. It is going to require some really heavy lifting around process re-engineering.
The federal government has made real progress in making it easier to onboard new technology. FedRAMP is a good example of that. It reduced a lot of drag around bringing modern tools into government. What has not gotten easier is putting that technology into use in a way that drives better outcomes.
That’s because new tools still have to fit into processes that were built decades ago for older technologies and slower operating conditions. For example, in vulnerability management and patching, tools can identify and remediate vulnerabilities almost immediately, but change management processes designed for the previous era can prevent teams from acting quickly. Over time, organizations end up serving the process, instead of the process driving the mission outcome it was originally designed to support. That is why this priority will be felt so directly. Agencies are going to have to do the difficult work of process re-engineering, not just technology modernization.
MeriTalk: One theme running through the strategy is the shift from periodic compliance to continuous visibility and remediation. What does that shift require from agencies in practical terms, and what tends to get in the way?
Rinderer: In practical terms, it requires agencies to stop thinking of continuous as simply ongoing and start treating it as real time. The technology to do that already exists. A unified endpoint management platform, for example, can surface vulnerability context, patch availability, and risk scoring in a single view, eliminating the handoffs that create delay. Agencies can monitor vulnerabilities with very low latency, understand whether known exploits are in play, determine whether patches are available, and assess the risk associated with applying them. Operators can have the context to decide whether to act immediately or pause.
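The act-or-pause decision Rinderer describes can be sketched, purely illustratively, in a few lines of Python. All names here (the `EndpointVuln` record, the thresholds, the `triage` rules) are hypothetical assumptions for the sake of example, not any vendor's actual API or scoring model:

```python
from dataclasses import dataclass

@dataclass
class EndpointVuln:
    """Hypothetical single-view record: one vulnerability with its context."""
    cve_id: str
    exploit_in_the_wild: bool  # is a known exploit in play?
    patch_available: bool
    risk_score: float          # 0-10, CVSS-style severity (assumed scale)
    patch_risk: float          # 0-1, estimated chance the patch disrupts service

def triage(v: EndpointVuln) -> str:
    """Decide whether to act immediately, schedule, or keep monitoring.
    Thresholds are illustrative, not prescriptive."""
    if v.exploit_in_the_wild and v.patch_available:
        return "patch-now"
    if v.patch_available and v.risk_score >= 7.0 and v.patch_risk < 0.2:
        return "patch-now"
    if v.patch_available:
        return "schedule"
    return "monitor"

print(triage(EndpointVuln("CVE-2026-0001", True, True, 9.8, 0.05)))  # patch-now
```

The point of the sketch is the shape of the workflow, not the specific rules: when all of the context lives in one view, the decision reduces to a fast, auditable policy rather than a chain of handoffs between tools and teams.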
What tends to get in the way is the process. Agencies still operate with patching and testing models that assume they need to evaluate every possible effect before taking action. That approach made sense when tools were less capable and the threat environment moved more slowly. The pace of change is simply too fast now to keep giving adversaries that kind of opportunity. This is one area where artificial intelligence – machine learning, in particular – can make a remarkably positive impact in reducing the burden of up-front testing through real-time data evaluation.
MeriTalk: The strategy addresses modernization, critical infrastructure, emerging technology, and workforce capacity. How are those issues coming together for federal cyber leaders?
Rinderer: They are inextricably intertwined. You cannot separate modernization, regulation, infrastructure security, emerging technology, and workforce capacity into neat categories and work to solve them one at a time. The outcomes agencies desire – shaping adversary behavior, building talent and capacity, and sustaining superiority – all depend on the other priorities moving with them.
That is part of what makes implementation so difficult. In most agencies, responsibilities are divided across organizations. To some degree, agencies are going to have to disassemble and reassemble their teams a little bit, so they are not so stovepiped. They will need to put together genuinely cross-functional organizations to tackle these pillars and go after them holistically.
MeriTalk: As agencies work to put those priorities into practice, where can technology partners be most helpful?
Rinderer: It starts with being honest with the customer. For too long, vendors have been able to sell into government by putting on a good show: some theatrics during a proof of concept, a few bold claims, and just enough evidence to suggest they can probably do what they say.
If a vendor demonstrates a capability in a proof of concept or proof of value, that capability should be innate to the product. If there is a large lag between buying the technology and getting value from it, the vendor has probably not been terribly honest about what the solution can really do.
Technology partners also need to engage productively with the partners that agencies rely on for process re-engineering. Beyond selling products, vendors need to make sure agencies get the value they are paying for.
MeriTalk: Thinking about endpoint operations in particular, what should agencies focus on now to build stronger security and resilience over time?
Rinderer: True IT consolidation and process modernization. Those are incredibly difficult to achieve, but not impossible. If you want to manage the totality of your compute ecosystem, you need a single data plane. You need a single view of reality, not seven or eight different views of part of reality that you’re mashing together. That’s never worked.
And look at the processes. Continuing to operate with outdated workflows means organizations risk throwing good money after bad. If agencies buy modern endpoint tools but force them to operate within legacy processes, they will not get the speed, efficiency, or resilience those tools can deliver.
Ultimately, the less time, effort, and money agencies spend on internal systems and process maintenance, the more capacity they have to focus on service delivery and achieving mission outcomes.