The Missing Piece in Federal AI Policy: AI Sovereignty Through Government-Owned Rule Layers
How rule ownership turns AI policy into enforceable governance
By: Paul D. Rempfer
Federal AI policy is moving fast, and federal agencies are adopting artificial intelligence (AI) at a pace we have not seen before. What remains unresolved is AI sovereignty: who controls how AI systems behave once they are deployed inside federal missions. That missing piece is not about adoption. It’s about governance after deployment, when AI starts shaping real decisions inside real workflows.
After more than 25 years working at the intersection of cyber, intelligence, and national security, I’ve learned to watch the gap between technology and governance. The failures don’t usually show up on day one. They surface later as trust breaks, legal exposure, and systems that can’t be explained or defended when it matters.
Washington is clearly signaling acceleration. Office of Management and Budget (OMB) memoranda (M-25-21 and M-25-22) emphasize scaling, enabling infrastructure, and streamlining compliance so agencies can move beyond experimentation. The December 2025 Executive Order on national AI policy pushes toward federal primacy and a unified national framework, while rejecting externally imposed behavioral mandates on models.
What those actions don’t settle is the internal question: once an AI system is embedded in a federal workflow, who controls how it behaves? That unresolved issue is where most of the real operational risk lives now.
In my recent white paper, “Rule Ownership is Sovereignty in the Age of Artificial Intelligence”, I lay out a practical way to preserve AI sovereignty through government-owned rule layers, deterministic execution, and independent oversight without slowing adoption or handing authority to vendors.
Before we talk about architectures or oversight, we need to name the operational reality: AI is already inside the workflow.
AI Is Already in the Operational Bloodstream
This is no longer a theoretical debate. AI is already inside federal workflows. Agencies are using large language models (LLMs) to summarize intelligence reporting, support cybersecurity analysis, accelerate research, assist with benefits adjudication, and draft administrative and regulatory materials.
Reporting over the past year shows agencies building secure, internal GPT-style tools tailored for mission use. The Cybersecurity and Infrastructure Security Agency created CISAChat. The Army deployed CamoGPT at scale. U.S. Central Command developed CENTGPT using Air Force code. The National Institutes of Health is using secure versions of Copilot and ChatGPT Enterprise to support research and operations.
Once AI reaches this stage, model performance alone is no longer the central issue; authority is. And authority is the foundation of AI sovereignty.
Where Authority Actually Lives in AI Systems
Most discussions about AI governance still focus on models. Which foundation model is used? Where is it hosted? Does it meet baseline security requirements? In real systems, those questions miss where decisions are actually shaped.
Modern AI systems behave according to rules layered on top of the model: system instructions, refusal logic, escalation thresholds, tool permissions, and routing logic. These are the controls that determine what the system will do when it encounters ambiguity, risk, or real-world consequences.
These rules also define how AI interprets statutes, handles ambiguity, escalates to humans, and accesses data. Two agencies, for example, can use the same underlying model and get very different behavior depending on those rules. If those rules are embedded inside vendor alignment systems, informal prompt files, or undocumented configuration, federal agencies may technically “use AI” without truly controlling how it behaves. This creates a quiet transfer of AI sovereignty away from the government and toward vendors or internal teams operating without enterprise oversight.
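To make that concrete, here is a minimal sketch of what a government-owned rule layer can look like when it is treated as a versioned, inspectable artifact rather than configuration buried inside vendor tooling. The format, field names, and thresholds below are illustrative assumptions, not taken from the white paper or any specific agency system.

    # Illustrative only: a hypothetical rule bundle owned by the agency, living
    # outside the model and outside vendor alignment systems.
    import hashlib
    import json

    RULE_BUNDLE = {
        "bundle_id": "benefits-adjudication-assist",
        "version": "2025.11.3",
        "owner": "Agency Office of the Chief AI Officer",  # government, not vendor
        "system_instructions": "Summarize case files; never render a final eligibility decision.",
        "refusal_conditions": [
            "request to decide eligibility without human review",
            "request involving data outside the authorized case record",
        ],
        "escalation": {
            "confidence_floor": 0.85,  # below this, route to a human adjudicator
            "always_escalate_topics": ["appeals", "fraud indicators"],
        },
        "tool_permissions": {
            "case_record_search": True,
            "external_web_search": False,  # explicitly denied, not merely unconfigured
        },
        "logging": {"log_prompts": True, "log_tool_calls": True, "retention_days": 2555},
    }

    # A content hash turns the bundle into something that can be pinned at deployment,
    # diffed across versions, and produced for an auditor exactly as it ran.
    bundle_hash = hashlib.sha256(json.dumps(RULE_BUNDLE, sort_keys=True).encode()).hexdigest()
    print(f"{RULE_BUNDLE['bundle_id']} v{RULE_BUNDLE['version']}: sha256={bundle_hash}")

The specific fields matter less than the property they create: the rules exist as a discrete artifact the government owns, can version, can diff, and can hand to an oversight body.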
A Fragmented Oversight Reality
This problem is made harder by the policy environment agencies are operating in. The Government Accountability Office (GAO) has documented a federal AI landscape with dozens of AI-related requirements spread across statutes, executive actions, OMB guidance, and oversight bodies. The requirements exist, but they are distributed, overlapping, and difficult to operationalize consistently.
In practice, agencies are left to interpret how these obligations apply to real systems, define what “compliant” behavior looks like, and prove that systems behave consistently over time. That burden increases as AI systems become more complex and more operationally embedded.
Why Self-Governance Is Not Enough
Agencies are under real pressure to reduce backlogs, move faster, and still show progress. These pressures don’t require bad intent to shape outcomes. Over time, rule logic drifts. Escalation thresholds get relaxed. Logging becomes selective. Vendor updates change behavior without mission-specific review.
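If rule bundles exist as versioned artifacts like the sketch above, this kind of drift stops being invisible; it becomes something a reviewer can detect mechanically. The check below is a hedged sketch against that same hypothetical bundle format, not a prescribed audit tool.

    # Illustrative only: flag quiet governance drift between two versions of the
    # hypothetical rule bundle sketched earlier.
    def detect_drift(old: dict, new: dict) -> list[str]:
        findings = []
        if new["escalation"]["confidence_floor"] < old["escalation"]["confidence_floor"]:
            findings.append("escalation threshold relaxed")
        if old["logging"]["log_tool_calls"] and not new["logging"]["log_tool_calls"]:
            findings.append("tool-call logging disabled")
        for tool, allowed in new["tool_permissions"].items():
            if allowed and not old["tool_permissions"].get(tool, False):
                findings.append(f"new tool permission granted: {tool}")
        return findings  # empty list means no governance-relevant changes detected

The harder question is who runs checks like this, and whether they answer to the same teams under pressure to ship.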
This is not unique to AI. Financial regulators have spent years developing model risk management frameworks because automated decisions affect capital, stability, and trust.
They know model risk cannot be managed by the same teams that benefit from the models’ outputs. Independent oversight is what keeps governance from eroding under operational pressure.
AI is now reaching that same point inside government.
Why GAO Is the Logical Anchor
If AI rule governance is going to work at the federal scale, it needs an institution that is structurally independent, technically capable, and trusted across administrations.
In the white paper, I outline a proposal for a GAO-centered model of AI rule oversight that focuses on what actually governs system behavior: the rules themselves. That includes access to rule bundles, version history, compiled instruction sets, logs, and replay environments that allow auditors to reproduce system behavior exactly as it occurred.
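As a hedged illustration of why replay matters, the sketch below shows the shape such a check could take when the system logs a pinned rule-bundle hash, the inputs, and deterministic decoding settings. The call_model interface is a placeholder assumption, not a real audit API.

    # Illustrative only: the shape of a deterministic replay check an auditor might run.
    # `call_model` stands in for whatever inference interface the agency exposes; the
    # essential ingredients are the pinned rule bundle and deterministic settings.
    import hashlib

    def replay_matches(logged_event: dict, call_model) -> bool:
        """Re-run a logged interaction under the rule bundle and settings recorded
        at the time, then compare output hashes."""
        output = call_model(
            rule_bundle_sha256=logged_event["rule_bundle_sha256"],  # pinned rule version
            prompt=logged_event["prompt"],
            temperature=0.0,  # deterministic decoding
            seed=logged_event["seed"],
        )
        return hashlib.sha256(output.encode()).hexdigest() == logged_event["output_sha256"]

If the hashes match, the auditor has reproduced the system’s behavior exactly as it occurred; if they don’t, something in the rule layer or the model changed between deployment and review, and that discrepancy is itself the finding.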
This is not about slowing AI adoption. It’s about making sure AI systems remain lawful, predictable, and defensible as they scale.
The Missing Layer
Once the federal government asserts authority over AI policy, it also assumes responsibility for AI sovereignty inside its own missions. That responsibility can’t be satisfied through procurement language or vendor assurances alone. It requires government-owned rules, enforced through technical controls and protected by independent oversight.
That’s the missing layer.
If you want the technical detail, the legal grounding, and a concrete roadmap for how AI sovereignty can be implemented in practice, I encourage you to read the full white paper.