The AI Action Plan: Securing America’s Future in the Age of Intelligence

Posted in AEI Ideas as Part I and Part II.

The White House released its 2025 AI Action Plan—a 28-page blueprint focused on securing US leadership in artificial intelligence. It’s an executive-led strategy with minimal reliance on Congress, emphasizing rapid deployment and national competitiveness.

The plan is organized around three pillars: innovation, infrastructure, and international engagement. Each pillar underscores how AI is not just a technology, but a strategic asset in a new era of global competition.

Accelerate AI Innovation: The plan calls for investment in federal R&D on foundational AI models and for expanded access to secure testbeds for real-world deployment. It supports open-source and open-weight model development. It also expands AI-related workforce initiatives, including K-12 education, reskilling, and apprenticeships.

Build AI Infrastructure: The plan aims to remove chokepoints in physical infrastructure by fast-tracking permitting for data centers, semiconductor facilities, and energy projects, primarily through expanded exclusions under the National Environmental Policy Act and streamlined environmental review. It also calls for a national strategy to modernize the power grid and embraces next-generation energy sources like nuclear fission, fusion, and enhanced geothermal to meet the soaring energy demand of AI.

Lead in International AI Engagement: The plan promotes US-developed AI models, hardware, and standards abroad and tasks federal agencies with engaging in diplomatic and standard-setting bodies to “vigorously advocate for international AI governance approaches that promote innovation, reflect American values, and counter authoritarian influence.” It also calls for national security evaluations of emerging AI capabilities, led by the Center for AI Standards and Innovation.

Several key policy actions stand out and merit particular attention:

Open as a National Priority: One of the most striking departures from the Biden administration is the embrace of open-source and open-weight models as a strategic imperative, an issue I wrote about last year. The plan argues that models “founded on American values” should lead the way. This is especially important in the Global South, where open systems will likely be the primary means of AI access. Unnecessarily restricting these models risks ceding influence to Chinese alternatives that may carry embedded authoritarian values and security risks. Meta’s Mark Zuckerberg has argued this point, and OpenAI’s Sam Altman wrote, “The challenge of who will lead on AI is not just about exporting technology, it’s about exporting the values that the technology upholds.”

Regulatory Sandboxes: This is one of the plan’s most underappreciated strengths. Regulatory sandboxes give researchers, startups, and companies a practical way to rapidly deploy and test AI tools in controlled environments while committing to share data and results transparently. These environments not only help regulators build their technical understanding of new technologies but also surface risks before they scale.

One early example is the Coalition for Health AI, which brings together a diverse group of stakeholders to develop best practices and frameworks for the responsible use and implementation of AI in healthcare. Blending government and industry expertise helps the coalition explore key questions about the quality of AI tools and build a shared understanding of these technologies and their uses across different areas of healthcare.

But more experimentation is needed—both in coalition design and in sector focus. Establishing sandboxes in areas like education and child development could offer vital insights into how AI supports tutoring and workforce readiness, while also assessing potential negative effects (particularly from AI companions) on social connection, child well-being, and developmental outcomes.

Human Capital: AI runs on people just as it runs on energy and compute; human capital is the engine of AI competitiveness. The plan encourages federal agencies to prioritize AI skill development across education and workforce funding streams, with a focus on apprenticeships and industry-recognized credentialing programs.

It also highlights the Department of Labor’s AI Workforce Research Hub as a key initiative to evaluate AI’s evolving impact on the labor market. Debate continues over whether AI will supercharge productivity or lead to widespread job displacement. Because the employment effects of emerging technologies are difficult to forecast, the plan emphasizes the need for improved, near-real-time labor market data to track adoption trends and inform timely policy responses.

The bottom line: This is not a risk-first governance plan. As James Pethokoukis reflected, it is “proactionary rather than precautionary.” It’s a pro-growth, national security-driven roadmap that seeks to accelerate US deployment and strengthen domestic capacity. The challenge ahead will be translating executive ambition into institutional action and doing so in ways that sustain public trust while keeping pace with rapidly evolving AI capabilities.

America’s AI Action Plan: What to Watch

The Trump administration’s AI Action Plan outlines bold steps to accelerate innovation and boost US leadership in AI. My recent post highlights several much-needed proposals to cut red tape, streamline permitting, and spur private-sector growth.

Yet for all its ambition, the plan leaves several high-stakes gaps—areas where evolving risks may require more proactive federal involvement.

Interpretability and Model Behavior: One of the most unsettling realities about today’s advanced AI systems is that we don’t fully understand how they work—not even those who design them. Take this opening paragraph from an Anthropic blog post:

We mostly treat AI models as a black box: something goes in and a response comes out, and it’s not clear why the model gave that particular response instead of another. This makes it hard to trust that these models are safe: if we don’t know how they work, how do we know they won’t give harmful, biased, untruthful, or otherwise dangerous responses? How can we trust that they’ll be safe and reliable?

Unlike traditional software, which is explicitly programmed by humans, large language models (LLMs) learn by identifying patterns in massive datasets. This creates systems that are powerful but opaque, with internal reasoning that’s difficult to interpret or predict. That opacity isn’t just a technical quirk—it’s a real risk. If we don’t understand how these models generate their outputs, we can’t anticipate when they might act unpredictably, deceive users, or cause harm in high-stakes environments. Some of this is theoretical, but a growing body of research—not from AI doomers, but from researchers supporting responsible progress—has documented troubling behaviors in LLMs, including deception and scheming.

Researchers from top AI labs including Google, OpenAI, and Anthropic released a paper warning that we may be losing the ability to understand advanced AI models. Models trained for results, rather than transparent reasoning, are slipping into dense, unintelligible shortcuts that humans can’t easily interpret. As our visibility into model internals fades, so does our ability to evaluate them or intervene effectively. 

Though the plan acknowledges this, its recommendations are modest relative to the potential risks. If we’re going to accelerate deployment, we must also accelerate our understanding. The plan’s nod to a partnership between the Defense Advanced Research Projects Agency, the Center for AI Standards and Innovation, and the National Science Foundation is a start, but it lacks the urgency and scale this issue demands. Governing opaque, increasingly autonomous systems requires a dedicated national effort that prioritizes interpretability as a core pillar of AI safety and security.

State-Federal Tensions: One provision suggests that federal agencies may restrict AI funding to states with “burdensome” regulations. While framed as anti-red-tape, it echoes Race to the Top–style conditional funding and could become a flashpoint depending on how “burdensome” is defined. Conservatives have long pushed back against federal efforts that tie funding in a coercive way to additional policy compliance, from Common Core education standards to Obamacare’s Medicaid expansion. In each case, the concern wasn’t simply about the policies, but about Washington using financial leverage to override state autonomy. This AI provision, if not carefully scoped, could trigger a similar backlash.

Copyright: Despite its growing legal and commercial implications, the plan doesn’t mention copyright or content provenance—an odd omission given the litigation reshaping the AI landscape and the training of next-generation frontier models. Some of the most pressing issues include whether the use of copyrighted materials to train AI models qualifies as fair use, whether AI-generated content can infringe on existing works, and who owns the output of generative AI systems. Courts are wrestling with questions that may determine the future of AI development, such as whether scraped content from artists, authors, and news organizations can be used without permission. The executive branch could take proactive steps, such as issuing guidance on fair use boundaries for training data, setting standards for content provenance and attribution, or launching a public comment process to develop a balanced licensing framework.

Implementation Challenges: While the plan’s ambition is evident, its real-world impact hinges on execution, and here, the details are thin. Roughly one-third of the recommended actions lack a designated lead agency. No implementation timelines are offered, nor is it clear whether any new funding or resources will be made available. A pressing concern is whether agencies have enough technically skilled staff to execute the plan’s complex and consequential provisions.

The plan is a meaningful step toward accelerating American leadership in AI. But leadership won’t be measured just by how quickly we move; it will be measured by whether we move wisely, building systems the public can trust and institutions prepared to govern them. With sustained focus and deliberation, we have an opportunity not just to shape AI’s trajectory, but to do so in a way that earns public confidence and reflects our highest values.