March 16, 2026
A new TechCrunch piece highlights the release of the Pro-Human AI Declaration, a document signed by a broad coalition of academics, former officials, and public figures calling for a far more restrictive approach to advanced AI development. Among other things, the declaration calls for a prohibition on superintelligence development until there is scientific consensus it can be done safely, mandatory off-switches on powerful systems, and a ban on architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.
The goals behind that effort are understandable: people want AI to be safe, and they want clear rules. Those are fair concerns. But this declaration points in the wrong direction.
Pausing frontier AI development will not solve the problems its supporters claim it will. If anything, it risks making several of them worse. It would slow the research that helps us understand how these systems behave in practice, and it would weaken America’s position at the exact moment our adversaries are investing heavily in advanced technology, computing infrastructure, and industrial deployment. A pause would hand hostile actors on the world stage a strategic edge we cannot afford to give them.
Safety and innovation are not opposites
One of the biggest problems with the declaration is that it treats progress and safety as if they are in conflict. They are not. In reality, much of the work to improve AI safety happens alongside development itself.
The National Institute of Standards and Technology has taken this approach. Its AI Risk Management Framework and Generative AI Profile have helped organizations identify, test, and manage risks as systems are built and deployed. These standards allow developers to evaluate models in real-world environments while continuing to improve them.
NIST has also expanded its broader testing and evaluation efforts to strengthen how AI systems are measured and assessed over time. That kind of standards-based approach recognizes an important reality: we learn more about how advanced systems behave by building, testing, and improving them than by freezing development altogether.
We already have more tools than pause advocates admit
The declaration’s supporters argue that Washington has failed to produce coherent AI rules. That is true in one sense. Congress has not yet passed a comprehensive federal AI framework. But that does not mean the United States has no guardrails.
Existing laws already apply to many of the most serious AI harms. In 2023, the FTC, DOJ Civil Rights Division, CFPB, and EEOC issued a joint statement making clear that existing legal authorities apply to automated systems just as they apply to other business practices. Those authorities already cover civil rights, consumer protection, fair competition, and equal opportunity. Separately, the FTC has used its existing unfair and deceptive practices authority to take action against deceptive AI claims and schemes through its Operation AI Comply enforcement sweep.
We are not starting from zero. The smarter approach is to enforce and refine those frameworks where necessary rather than layering entirely new regulatory regimes on top of them.
History offers a useful lesson. As Logan Kolas and Adam Thierer highlight in their latest report, “The AI Terrible Ten: The Worst State AI Policies and Four Better Models to Balance Safety and Innovation”, when the internet first emerged, the United States largely rejected precautionary restrictions and instead embraced a freedom-to-innovate model. That approach helped drive the explosion of digital innovation in the 1990s and created the modern technology economy we rely on today.
With AI, targeting harmful uses while allowing responsible development to continue is far more likely to produce both innovation and safety. That is the route we must take.
The better path is targeted, not frozen
If the goal is to reduce risk without losing America’s edge, the answer is not a blanket stop sign for frontier development. The answer is a smarter framework built around four principles.
1. Allow frontier development to continue
The United States should continue developing frontier models. Our adversaries are not pausing, so neither can we. If we voluntarily slow our AI labs while others push forward, we are not ensuring safety; we are putting ourselves at risk.
That is especially important because AI leadership is about far more than the models themselves. It is about the surrounding ecosystem of compute, talent, scientific research, infrastructure, and deployment capacity.
2. Restrict dangerous uses, not research itself
The better place to draw hard lines is around genuinely dangerous or unlawful uses. That includes fraud, cybercrime, illegal discrimination, and tools designed to facilitate serious harm. Existing federal agencies already have authority in several of these areas, and those authorities should be enforced aggressively. The point is to target bad conduct, not freeze broad categories of research that also power breakthroughs in science, medicine, infrastructure, and public safety.
That distinction matters. A policy regime aimed at bad actors and dangerous uses is very different from one that assumes the core problem is the existence of powerful models themselves.
3. Maintain federal visibility into the largest compute clusters
If policymakers are worried about the most advanced systems, then oversight should focus on the highest-capability end of the stack. Commerce already moved in this direction when its Bureau of Industry and Security (BIS) proposed reporting requirements for advanced AI models and large computing clusters. The proposal was framed as a national security and industrial base tool, not a blanket licensing regime. That is the right instinct. Federal visibility into the largest compute clusters is a far more sensible way to track frontier activity than trying to halt development outright.
That kind of oversight is targeted. It is rooted in actual capability thresholds. And it gives policymakers a way to monitor what matters most without smothering the rest of the ecosystem.
4. Use fast thresholds and clear reporting, not pre-approval bottlenecks
If additional safeguards are needed for the most advanced systems, they should be automatic, narrow, and workable. The best approach would be threshold-based reporting, transparency expectations, and evaluation triggers tied to capability or deployment context. We cannot build a slow permission structure that forces developers to wait for approvals before they can move forward.
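To make the contrast concrete, here is a minimal sketch, in Python, of what an automatic threshold trigger could look like. Everything in it is illustrative: the threshold value, the deployment-context categories, and the function names are placeholders rather than any agency’s actual rule (the BIS proposal keyed reporting to training-compute scale, but the specific numbers below are assumptions for the sketch).

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; real values would be
# set by regulation, not chosen by the developer.
TRAINING_COMPUTE_THRESHOLD_FLOP = 1e26  # placeholder training-compute trigger
HIGH_IMPACT_DEPLOYMENT_CONTEXTS = {"critical_infrastructure", "biosecurity"}

@dataclass
class ModelRun:
    name: str
    training_compute_flop: float
    deployment_context: str

def reporting_required(run: ModelRun) -> bool:
    """Automatic check: an obligation attaches when a capability or
    deployment-context threshold is crossed, with no regulator
    sign-off needed before development proceeds."""
    over_compute = run.training_compute_flop >= TRAINING_COMPUTE_THRESHOLD_FLOP
    high_impact = run.deployment_context in HIGH_IMPACT_DEPLOYMENT_CONTEXTS
    return over_compute or high_impact

run = ModelRun("frontier-run-01", 3e26, "general_assistant")
if reporting_required(run):
    print(f"{run.name}: file the required report and run evaluations")
else:
    print(f"{run.name}: no trigger crossed; proceed")
```

The design point is that the check is self-executing: a developer can read the published thresholds and know immediately whether an obligation attaches, with no regulator sitting in the critical path.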
The current administration’s own federal AI guidance emphasizes a balance of innovation, governance, and public trust, rather than a full stop approach. OMB’s M-25-21 directs agencies to accelerate federal AI use while applying minimum risk management practices for high-impact systems. That is a more practical model than trying to pause the frontier itself.
Speed and responsibility can coexist.
Off-switch mandates sound simpler than they are
Mandatory off-switches may sound appealing in theory, but the issue is more complicated in practice, especially for systems deployed in critical workflows or infrastructure environments. Sudden shutdown requirements can create technical fragility if they are imposed clumsily or treated as a substitute for better system design, monitoring, and containment. The stronger approach is to invest in robust evaluation, fallback planning, and operational controls that match the context in which a model is used. NIST’s test, evaluation, validation, and verification (TEVV) work and its ARIA (Assessing Risks and Impacts of AI) pilot are both aimed at improving exactly this kind of measurement and evaluation capacity.
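To illustrate what fallback planning means in practice, here is a minimal sketch assuming a hypothetical service wrapper; none of these function names come from a real system. Rather than an abrupt external kill, a tripped safety check routes traffic to a vetted degraded path, so the critical workflow keeps running while the model itself is contained.

```python
import logging

# Illustrative sketch of fallback planning as an alternative to an
# abrupt kill switch. All names here are hypothetical placeholders,
# not any vendor's real API.

def model_answer(query: str) -> str:
    # Simulate a safety monitor tripping on the model path.
    raise RuntimeError("model flagged by safety monitor")

def fallback_answer(query: str) -> str:
    # Vetted degraded path: rule-based logic or a human operator queue.
    return "Routed to rule-based fallback / human operator queue."

def serve(query: str) -> str:
    try:
        return model_answer(query)
    except RuntimeError as err:
        # Contain the failure: log it and keep the workflow running
        # instead of taking the whole service dark.
        logging.warning("model path disabled, using fallback: %s", err)
        return fallback_answer(query)

print(serve("reroute power around the damaged substation"))
```

The choice being sketched is graceful degradation over a hard stop: the unsafe component is isolated, while the service it supports stays up.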
That is the broader point. Good safety policy should make systems more reliable and more understandable. It should not discourage development of useful systems simply because they are advanced.
America needs a framework that matches the moment
The frustration behind the Pro-Human AI Declaration is real. Washington still lacks a durable federal framework for AI. But a pause-first agenda is not the answer. It risks slowing the companies, researchers, and developers who are building the future here while giving our adversaries more room to move.
We have a better path.
Let frontier development continue.
Go hard after dangerous uses.
Maintain federal visibility into the largest compute clusters.
Use fast thresholds, reporting, and targeted safeguards instead of slow approvals and blanket prohibitions.
Enforce the laws already on the books and update them where needed.
That is the kind of framework that protects the public without undermining the innovation that keeps America strong.
The race for AI is not just a race for better tools. It is a race to shape the future of economic power, national security, and human progress. America should meet that moment with confidence, not retreat.
Jay Burstein is a fellow with Build American AI.