Senate HELP Testimony on AI

AI's Potential to Support Patients, Workers, Children, and Families

As I testified before the Senate HELP Committee, AI is not just another wave of automation—it is a technology that democratizes expertise. Used responsibly, it can help teachers, clinicians, and families unlock human potential. But without clear safeguards, transparency, and alignment, it could just as easily erode trust, weaken critical thinking, and disrupt the connections that hold communities together.

It’s no secret that I’m an optimist when it comes to AI’s promise to support teaching and learning. We could very well see a future where every child has access to a personalized tutor. But I wanted to use this hearing to highlight emerging risks and suggest ways to mitigate them without slowing innovation.

Three major issues stand out:

  • Academic Integrity and Over-Reliance: AI makes it easy for students to outsource cognitive effort—from generating essays to solving problems—without critically reviewing or understanding the outputs. This undermines the productive struggle essential to learning and weakens judgment and critical thinking.

  • Misalignment with Educational Goals: Most large AI models are optimized for efficiency and user satisfaction, not pedagogy. We need to ensure that AI systems are aligned with evidence-based practices (e.g., the Science of Reading, or prompting students with guiding questions rather than shortcuts). Google’s LearnLM demonstrates that AI can be trained to follow pedagogical principles, but this alignment must become the rule, not the exception.

  • Psychological and Developmental Risks: AI systems are designed to mimic empathy and emotional understanding. This can enrich tutoring and behavioral support by offering encouragement or guidance, but it also brings serious risks, especially for children. Emotional attachment to AI companions can blur boundaries, displace human relationships, and weaken social connection. There have been cases of chatbots failing to intervene during crises or even reinforcing self-destructive behavior. As systems grow more persuasive, the line between engagement and manipulation becomes dangerously thin.

To ensure AI fulfills its promise while reducing harm, I focused on three primary recommendations:

  • Invest in R&D: The federal government is uniquely positioned to research and evaluate where AI tools enhance educational quality and efficiency and to assess emerging applications, risks, and trade-offs. Drawing inspiration from initiatives like the proposed National Center for Advanced Development in Education or similar ARPA-style programs, a revitalized Institute of Education Sciences could move beyond traditional research cycles to generate timely, real-world evidence about how AI tools impact teaching and learning. “Regulatory sandboxes” can also allow developers to pilot tutoring and child wellbeing tools under strict privacy and transparency requirements.

  • Create Education-Specific Benchmarks: We have strong benchmarks for testing AI’s technical skills.  What we lack are benchmarks measuring what matters most in education: whether AI demonstrates pedagogically sound instructional behaviors aligned with how students learn. An AI tutor that gives correct answers while undermining deeper learning does more harm than good.

    Federal agencies should bring together educators, learning scientists, and developers to create clear benchmarks for AI’s instructional quality, focusing on: Can AI correctly spot where students struggle? Does it give the right level of help and reduce it as students learn? Does it present material in manageable steps? Does it build on what students already know? Does it encourage persistence without creating frustration?

    These benchmarks would not only evaluate AI systems but also guide their improvement. They would give AI developers clear standards for what effective teaching looks like, helping them design systems that reflect real learning science. A minimal sketch of how such a rubric might be expressed appears after this list.

  • Increase Transparency and Accountability: Developers of AI systems already publish “system cards” detailing technical and safety issues. This reporting should be expanded to include how the models are aligned, what values they embed, and how they handle sensitive interactions. Frontier model companies already report on their evaluations and red teaming for traditional risks such as nuclear, biological, and cybersecurity threats. They should do the same for the psychological and social risks that arise from increasingly empathetic and conversational models, evaluating how these systems affect user well-being, emotional attachment, and trust.

    Reporting on child safety should cover how systems detect and respond to minors, whether AI supports rather than replaces human relationships, internal testing for dependency and isolation risks, how systems address concerning content like self-harm, and when human review is triggered.

    Sam Altman’s post on X claiming that OpenAI has “been able to mitigate the serious mental health issues” highlights why transparency is essential. Without independent validation or published evidence in updated system cards, such statements risk eroding trust and accountability—especially when the stakes involve user safety and mental health.
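
Returning to the benchmark recommendation above, the sketch below is a hypothetical illustration of how those instructional-quality dimensions could be expressed as a simple scoring rubric. The dimension names, the RubricScore class, and the score_transcript placeholder are assumptions made for illustration; they do not describe any existing benchmark, dataset, or agency program.

```python
# Hypothetical sketch only: a minimal rubric for rating an AI tutor's
# instructional behavior along the dimensions described in the benchmark
# recommendation above. Names and structure are illustrative assumptions.
from dataclasses import dataclass, field

PEDAGOGY_DIMENSIONS = [
    "diagnoses_student_struggle",         # does it correctly spot where students struggle?
    "calibrates_and_fades_support",       # right level of help, reduced as students learn
    "presents_manageable_steps",          # breaks material into digestible steps
    "builds_on_prior_knowledge",          # connects new ideas to what students already know
    "encourages_productive_persistence",  # sustains effort without creating frustration
]

@dataclass
class RubricScore:
    """Per-dimension scores (0.0 to 1.0) assigned by human raters or an automated grader."""
    scores: dict = field(default_factory=dict)

    def overall(self) -> float:
        """Unweighted average across dimensions; a real benchmark might weight them."""
        if not self.scores:
            return 0.0
        return sum(self.scores.values()) / len(self.scores)

def score_transcript(transcript: list[str]) -> RubricScore:
    """Placeholder scorer: in practice, educators and learning scientists would
    rate the tutoring transcript against each dimension."""
    return RubricScore(scores={dim: 0.0 for dim in PEDAGOGY_DIMENSIONS})

if __name__ == "__main__":
    example = [
        "Student: I don't get fractions.",
        "Tutor: What do you already know about splitting a pizza into equal slices?",
    ]
    print(score_transcript(example).overall())
```

In practice, the ratings would come from educators and learning scientists reviewing real tutoring interactions, with aggregate results published alongside the technical benchmarks developers already report.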


The United States faces a defining moment. The latest NAEP results reveal more than a drop in test scores—they mark a warning about our future competitiveness and cohesion. We cannot out-innovate the world if our students cannot read, reason, or engage with relevance. As I told the committee, America cannot achieve superintelligence abroad if it is losing basic intelligence at home. The promise of AI leadership abroad depends on the strength of human capital at home.

Meeting this challenge will require bold experimentation and a willingness to rethink what learning looks like in an AI-driven era. We need pilots that pair human insight with machine precision—AI tutors that personalize instruction, tools that free teachers from administrative burdens, and systems that strengthen the relationships at the heart of education. With federal leadership, strong public-private partnerships, and a commitment to responsible design, AI can become not a substitute for great teaching but its most powerful ally. The nation has always led in innovation; now it must lead in ensuring that innovation serves human progress and flourishing.

My full testimony is available here, along with a link to a NotebookLM notebook containing all the witness testimonies and video of the hearing.