Automating the Absurd: How AI Agents Will Scale Bureaucracy, Not Reform It
Without structural reform, the Department of Defense risks embedding inefficiency at scale with fewer humans left to challenge the system.
The promise of AI agents in government, particularly in national security, is seductive: tireless, scalable digital workers that can augment or replace overwhelmed civil servants. The Department of Defense is increasingly looking to AI as a force multiplier across acquisition, logistics, and operations.
But here’s the truth: you can’t automate your way out of a structurally broken system.
Legacy GOTS + AI = Faster Failure
Nowhere is this more apparent than in the DoD’s reliance on legacy Government Off-The-Shelf (GOTS) software. These systems, many built on 1990s architectures, are riddled with overlapping ownership lines. A single logistics application might be “owned” by a PMO, housed on a contractor-managed server, administered by a functional lead, and governed by policy written three administrations ago. No single entity is accountable, but nearly everyone has veto power.
The DoD’s Acquisition & Sustainment office has repeatedly noted this fractured landscape. It has championed software reforms precisely because the status quo enabled stovepiped systems and paralyzed modernization. Its push for “accredit once, use many” through initiatives like the Cybersecurity Maturity Model Certification (CMMC) was as much about breaking through process gridlock as it was about cyber hygiene.
AI agents, deployed in such an environment, don't liberate decision-makers. They just learn to operate within the dysfunction, surfing a sea of approval workflows, governance boards, and PDF checklists. They become overtrained bureaucrats, not digital innovators.
The Software Pathway Is a Step, Not a Cure
The DoD’s new Software Acquisition Pathway (SWP) was created to streamline how code gets built, tested, and deployed, an acknowledgment that traditional acquisition models suffocate agile software delivery. It’s a welcome shift. But even within the SWP, AI agents are being proposed as “digital staffers” to write documentation, monitor metrics, or run programmatic health checks.
That’s helpful, but only if the underlying documentation requirements, metric frameworks, and review cycles are worth preserving. In many cases, they’re not. The risk here is institutional: that AI is used to sustain outdated policy compliance rather than challenge its necessity.
DoD has repeatedly called for “moving at the speed of relevance.” But relevance isn’t about how fast you process outdated rules; it’s about whether those rules need to exist in the first place.
Red Tape at Machine Speed
Over time, DoD workflows have ossified into an architecture of compliance. Oversight that once served a purpose, such as cost control, risk mitigation, and fairness, has become detached from outcomes. Audit trails become more important than insights. Approvals become more about defensibility than decision quality. And each new EO, IG report, or NDAA provision adds another layer to the stack.
Injecting AI into this stack without pruning it first is like pouring jet fuel into a car stuck in reverse.
If the DoD wants to use AI to transform how it works, not just what it builds, it must first bring clarity and ownership to the systems AI will support. Every digital agent introduced into a workflow should come with a clear designation of who is responsible for that system’s performance and outcomes: not a shared PMO or distributed governance structure, but a single accountable steward.
Second, the Department must treat AI deployment as an opportunity to ruthlessly simplify and modernize the policies behind the processes. That means conducting “zero-base” reviews of workflows before automating them, removing redundant compliance steps, eliminating legacy audit mandates, and streamlining approvals that exist only because “they always have.”
And finally, AI must be framed as a partner to human operators, not a substitute. Many of the most consequential decisions in defense, such as strategic procurement, risk tradeoffs, and alliance management, require judgment, context, and political acumen that cannot be encoded in an algorithm. The goal is not to replace human discretion but to augment it with speed, visibility, and insight.
AI won’t save government from inefficiency unless we first save government from itself. Without that first step, the DoD won’t accelerate transformation; it will simply institutionalize the status quo at machine speed.