The Pro-Human AI Declaration

March 2026

Artwork: “The Red Vineyard”, Vincent van Gogh


New Poll: Americans Overwhelmingly Support Pro-Human Principles on AI

More than eight in ten U.S. voters back human oversight of AI systems, reject fast-and-loose development, and want AI companies held accountable for harms.
Published: March 2026 | Future of Life Institute

A new national survey of 1,004 likely U.S. voters (February 19–20, 2026) reveals where Americans stand on AI development and the principles that should guide it.

The findings show that voters overwhelmingly prefer a pro-human future and reject an AI race focused on replacing humans.

Respondents were presented with pairs of opposing statements about the trajectory of AI and asked which they preferred. The results were remarkably clear: across the political spectrum, voters reject the current paradigm of rapid, lightly regulated AI development. Instead, they want humans to remain firmly in control, children protected from manipulative AI systems, and companies held legally accountable when AI causes harm.

These results come as policymakers grapple with how to govern increasingly powerful AI systems, and as leading AI companies push toward what they call "artificial general intelligence."

Below we summarize the key findings of the report; the full survey toplines are also available.

Highlights

  • 80% of voters support keeping humans in charge of AI, with strong oversight, clear limits, and corporate accountability—versus just 10% who favor fast, lightly regulated development.
  • 77% believe AI must stay under human control, with people deciding what to delegate and retaining the ability to stop systems when needed—versus just 11% who prefer giving AI more independence for speed and scale.
  • 69% want to prevent AI monopolies and ensure benefits are shared broadly, not captured by a small group—versus 16% who believe concentration is natural and policy shouldn't punish size with antitrust or forced sharing.
  • 73% support protecting children and families from AI systems designed to create emotional attachment or dependency—versus 15% who feel AI should be allowed to serve as tutors, coaches, or companions without strict limits.
  • 72% believe AI companies should bear legal responsibility for harms, with clear safety standards and real oversight—versus 15% who want accountability focused narrowly on negligence and fraud, with light oversight and safe harbors.
  • 69% agree that superintelligence should be prohibited until there is broad scientific consensus it can be developed safely and controllably—versus just 9% who disagree.

Respondents were presented with competing framings of AI governance—one emphasizing human control and accountability, the other emphasizing speed and minimal regulation—and asked to choose.

They were also asked to rate their agreement with specific principles across five domains: human control, concentration of power, protecting human experience, human agency and liberty, and corporate accountability.


Bipartisan consensus on AI principles

Notably, these findings cut across partisan lines. The survey sample was weighted to reflect the 2024 electorate, with roughly equal numbers of Trump and Harris voters represented.

The pattern is consistent: while Harris voters show slightly stronger support for oversight and accountability, Trump voters still favor these positions by large margins.

On no issue tested did a majority of either partisan group choose the "fast development, minimal regulation" position.

This bipartisan consensus suggests that AI governance need not be a partisan issue. Voters of all political stripes want humans in charge, children protected, and companies held accountable.


Overall: Voters reject the "race to replace" approach to AI

When presented with two overarching visions for AI development, voters chose decisively.

  • Statement A (80% support) described an approach where AI serves people and does not replace them; where humans stay in charge, society prevents concentration of power, children and relationships are protected, privacy rights are strong, and companies are held accountable for harms.
  • Statement B (10% support) described an approach where AI progress moves fast, heavy rules are avoided, safety comes mainly from engineering and real-world testing rather than regulation, and markets decide what works.

This 8-to-1 margin represents a clear public mandate: Americans want guardrails on AI development, not a race to the bottom.

And what’s more:

  • 89% of Harris voters chose the pro-human-control vision (Statement A);
  • 73% of Trump voters chose the pro-human-control vision (Statement A), while only 14% preferred the fast, lightly regulated approach (Statement B).

On the core question of human control versus fast development, both Democrats and Republicans overwhelmingly chose human control. The same pattern holds across all the specific principles tested.

The finding sits in stark contrast to the operating paradigm of leading AI companies, which openly declare they are racing toward increasingly powerful systems while lobbying against meaningful oversight.


Human control: Non-negotiable for most Americans

The survey tested voter attitudes on several dimensions of human control over AI systems. The results show overwhelming consensus.

On the core question of human control:

  • 77% agree that AI must stay under human control, with people deciding what to delegate and retaining the ability to understand what systems are doing and stop them when needed. Just 11% preferred giving AI more independence to enable speed and scale.

On specific control mechanisms:

  • 83% agree that powerful AI systems must have mechanisms allowing human operators to promptly shut them down.
  • 85% agree that humans should have the authority and capacity to understand, guide, limit, and override AI systems.
  • 76% agree that AI systems must not be designed to self-replicate, autonomously self-improve, resist shutdown, or control weapons of mass destruction.

On superintelligence:

  • 69% agree that development of superintelligence should be prohibited until there is broad scientific consensus that it can be done safely and controllably, with strong public support. Just 9% disagree.

This finding echoes earlier polling showing Americans want regulation or prohibition of superhuman AI systems. The public is not persuaded by industry claims that such systems are inevitable or that racing to build them is wise.


Protecting children and relationships from AI replacement and manipulation

The survey revealed strong support for protecting what might be called the "human experience" from AI encroachment—particularly when it comes to children.

On AI and children:

  • 77% agree that companies must not be allowed to exploit children or undermine their wellbeing through AI interactions that create emotional attachment or leverage.
  • 76% agree that AI companies should not be allowed to stunt children's physical, mental, or social growth or deprive them of essential developmental experiences.

On AI and relationships:

  • 73% chose the view that AI should not replace meaningful relationships (family, friends, faith, community) and that children deserve extra protection—versus 15% who felt AI should be allowed to serve as tutors, coaches, or companions if users want.
  • 72% agree that AI should not supplant the foundational relationships that give life meaning.

On safety testing and transparency:

  • 74% agree that AI chatbots should undergo pre-deployment safety testing for risks such as increased suicidal ideation, worsening mental health, and other known harms—similar to how drugs are tested.
  • 77% agree that AI-generated content that could be mistaken for human content must be clearly labeled.
  • 78% agree that AI should clearly identify itself as artificial and not claim experiences it does not have.
  • 74% agree that AI systems should not cause addiction or compulsive use through manipulation, sycophantic validation, or attachment formation.

These findings suggest the public sees AI's potential to manipulate vulnerable users—especially children—as a serious concern requiring regulatory action, and that a large majority do not want authentic human experiences replaced by AI.


Accountability: The public wants consequences

Voters strongly favor holding AI companies legally accountable for harms caused by their systems. They reject the idea that AI can serve as a shield against liability.

On corporate accountability:

  • 72% chose the view that AI builders must be responsible for harms, with clear safety standards and honest reporting—versus 15% who preferred focusing accountability only on "real negligence and fraud" with light oversight.
  • 71% agree that AI must not act as a liability shield that prevents those deploying it from being legally responsible.
  • 73% agree that developers and deployers should bear legal liability for defects, misrepresentation of capabilities, and inadequate safety controls.
  • 77% agree that if an AI system causes harm, it should be possible to determine why and who is responsible.

On oversight and standards:

  • 72% agree that AI development should be governed by independent safety standards and rigorous oversight.
  • 77% agree that highly autonomous AI systems should require pre-development review and independent oversight with genuine authority—not just industry self-regulation.
  • 72% agree that AI companies must not be allowed undue influence over the rules that govern them.

On criminal penalties:

  • 77% agree there should be criminal penalties for executives responsible for prohibited child-targeted AI systems or for systems that cause catastrophic harm.

On honest representations:

  • 83% agree that AI companies must provide clear, accurate, and honest representations of their systems' capabilities and limitations.

The message is clear: the public expects AI companies to be held to high standards, with real consequences for failures.


Conclusion

This survey reveals a significant gap between public preferences for a pro-human future and the current trajectory of AI development.

Leading AI companies have declared they are racing toward artificial general intelligence, while many actively work to prevent meaningful oversight. The public, by contrast, wants humans firmly in control, with strong guardrails, independent oversight, and real accountability.

The findings represent a clear mandate for policymakers: Americans want AI that serves people, not the other way around. They want development that is careful and accountable, not fast and reckless. And they want rules with teeth, not industry self-regulation.

The Pro-Human Declaration principles tested in this survey reflect these values. As AI systems grow more powerful, the question is whether governance will catch up to public expectations—or whether the gap between what Americans want and what they are being given will continue to widen.

Methodology: This survey was conducted online among 1,004 likely U.S. voters from February 19–20, 2026. The sample was weighted by gender, race, education, 2024 presidential vote, and age. Respondents were also weighted by performance on attention checks, which included questions about fictional politicians and policies. The margin of error is ±4.7 percentage points.
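
For context on the reported margin of error: with a sample of 1,004, the textbook 95% margin of error for an unweighted simple random sample works out to roughly ±3.1 points; the larger ±4.7 points reported here is consistent with an adjustment for the weighting (a design effect), though the pollster's exact calculation is not stated. A minimal sketch under that assumption:

\[
\mathrm{MOE} = z_{0.975}\sqrt{\tfrac{p(1-p)}{n}} = 1.96\sqrt{\tfrac{0.5 \times 0.5}{1004}} \approx 3.1\ \text{points},
\qquad
\mathrm{MOE}_{\text{weighted}} = \mathrm{MOE}\cdot\sqrt{\mathrm{deff}} \approx 4.7\ \text{points for } \mathrm{deff}\approx 2.3 .
\]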

Endorse the Declaration

Organizations

Institute for Family Studies
AFL-CIO Tech Institute
The Congress of Christian Leaders
American Federation of Teachers
G20 Interfaith Forum Association
SAG-AFTRA
Project Liberty Institute
Progressive Democrats of America (PDA)
Blessed Mother Family Foundation
David’s Legacy Foundation
Parents RISE!
Center for Study of Responsive Law
Center for AI and Digital Policy
Future of Life Institute
Humans First
Center for Humane Technology
ControlAI
CivAI
Pause AI
Saving Ourselves Foundation Inc.
Center for Responsible Technology
The B Team
Fathom
Faith Matters
Human Change Foundation
Seismic Foundation
Servitium AI and Serviti Corp
Legal Advocates for Safe Science and Technology
Economic Security Project
Tech Oversight Project
Design It For Us
AI Ethics and Governance Institute
Organized Intelligence
Evitable
Encode
Tech Equity
The Alliance for Secure AI
Ethical Tech Project
Essential Information
Common Cause California
Public Citizen
Heat Initiative
Civilization Research Institute
Demand Progress Education Fund
OpenMined

Individual endorsers

Yoshua Bengio Professor, Université de Montréal, Turing Award Laureate

Steve Bannon Fmr Executive Chairman of Breitbart News; fmr chief strategist to President Donald Trump; Host of War Room podcast

Susan Rice Fmr U.S. National Security Advisor & Policy Advisor for President Obama; U.S. Ambassador to the United Nations; Rhodes Scholar

Glenn Beck Founder of Blaze Media, radio host, TV personality, political commentator

Alan Minsky Progressive Democrats of America (PDA)

Walter Kim President, National Association of Evangelicals, board member, Christianity Today

Ralph Nader Consumer Advocate, Center for Study of Responsive Law, Presidential candidate

Daron Acemoğlu Nobel Laureate in Economics, MIT Institute Professor

Beatrice Fihn Nobel Peace Laureate, Founder of Lex International

Rev. Johnnie Moore, PhD President, The Congress of Christian Leaders

Margarita Louis-Dreyfus Owner and chair of the Louis Dreyfus Company group; founder of Human Change Foundation

Sir Richard Branson Founder, Virgin Group

Randi Weingarten President, American Federation of Teachers

Julianna Arnold Founding Member and Executive Director, Parents RISE!

Megan Garcia Blessed Mother Family Foundation

Joann Bogard Parents SOS

Michael Toscano Director, Family First Technology Initiative, Senior Fellow, Institute for Family Studies

Mike Kubzansky CEO, Omidyar Network

Tomicah Tillemann President, Project Liberty Institute

Stuart Russell Professor of Computer Science, Berkeley, Director of the Center for Human-Compatible Artificial Intelligence (CHAI); Co-author of the standard textbook 'Artificial Intelligence: A Modern Approach'

Tristan Harris Co-Founder, Center for Humane Technology

Brendan Steinhauser CEO, The Alliance for Secure AI

Dawn Nakagawa President, Berggruen Institute

Mikhail Samin Executive director, AI Governance and Safety Institute

Jeffrey Bennett General Counsel, SAG-AFTRA

Joseph Gordon-Levitt Actor, Filmmaker, Founder, HITRECORD

Alyson Stoner Actress, dancer, and singer, SAG-AFTRA, known for Step Up, Camp Rock, and voicing Isabella in Phineas and Ferb.

Frances Fisher Actress, SAG-AFTRA, known for Titanic, Unforgiven, and Watchmen.

Anthony Aguirre Future of Life Institute

Max Tegmark Future of Life Institute

Clark Barrett Professor of Computer Science, Stanford

Moshe Vardi Professor of Computational Engineering, Rice University, Member: US National Academy of Engineering and National Academy of Sciences

David Autor Professor, MIT Department of Economics; Co-director, Stone Center on Inequality and Shaping the Future of Work

Meredith Whittaker President, Signal Foundation

Emilia Javorsky Future of Life Institute

Jean Oelwang Founding CEO, Virgin Unite and Planetary Guardians

Andrea Miotti Founder and CEO, ControlAI

Marc Rotenberg Founder, Center for AI and Digital Policy

Malo Bourgon Machine Intelligence Research Institute

Michael Marinaccio Executive Director, Center for Responsible Technology

Kelly Monroe Kullberg General Secretary, American Association of Evangelicals (AAE)

Dylan Hadfield-Menell Associate Professor of Computer Science, MIT

Sharon Li Associate Professor of Computer Science, University of Wisconsin Madison

Vael Gates Humans in Control

Deger Turan Metaculus

Ed Newton-Rex CEO, Fairly Trained

Alison Rice Managing Director, Design It For Us

Brooke Istook Chief Impact Officer, Heat Initiative

Medlir Mema Founder and Director, AI Ethics and Governance Institute; Senior Fellow, Organized Intelligence

Vivian Dong Programs Director, Legal Advocates for Safe Science and Technology

Tegan Maharaj Assistant Professor in Machine Learning, Mila

David Krueger CEO, Evitable; Assistant Professor, University of Montreal, Mila

Roman Yampolskiy Professor of Computer Science and Engineering, University of Louisville; Author, AI: Unexplainable, Unpredictable, Uncontrollable

Jillian Clare LA Board Member, Chair National Young Performer’s Committee, SAG-AFTRA

Nick Smoke Actor, SAG-AFTRA, known for "The Social Network"

Karen A. Brown Filmmaker/Actor, SAG-AFTRA/StardustBlue Media

Jesse Martinez Carlos Board Member, Los Angeles Local, SAG-AFTRA

Erik Passoja Co-Chair, LA New Technology Committee (2024-2025), SAG-AFTRA

Peggy Lane O'Rourke Actress, known from Seinfeld; SAG-AFTRA Los Angeles Local Board Member, National Board Alternate

Rob Drake Out There Pictures

Joshua Hughes Greater Grace Christian Center

Cristine Legare UT Austin

Mark Brakel Future of Life Institute

Joe Allen Humans First

Alexandra Tsalidis Future of Life Institute

DZ Kalman Shalom Hartman Institute

David Hsu Senior Director of Programs and Policy, Omidyar Network

Bobby Halick Hit Record

Justin Bullock Americans for Responsible Innovation

Oliver Stephenson Federation of American Scientists

Daniel Bring American Affairs

Michael Kleinman Future of Life Institute

Sandra M. Faber Professor Emerita, University of California, Santa Cruz

Kate McCarthy Women's Media Center

Brett Puterbaugh The Church of Jesus Christ of Latter-day Saints

William Jones Future of Life Institute

Rabbi Geoff Mitelman Sinai and Synapses

John Unger FAITH Alliance—Fellowship Advancing Integrity in Technology & Humanity

David Haussler Professor, UC Santa Cruz

Beatrice Ekers Foresight Institute

Evan Davison Kotler Helena

Ari Rosenthal Torchbearer Community

Connor Leahy ControlAI US

Fr. Michael Baggot Associate Professor of Bioethics, Pontifical Athenaeum Regina Apostolorum

Sugheanmungol Sarin AI Safety Asia

Isabella Hampton Future of Life Institute

Joshua Tan Public AI

Jeremy Ornstein Center for AI Safety

Emma Ruby-Sachs Eko

Philip Reiner Institute for Security and Technology

Sam Hiner Young People's Alliance

Lachlan Carroll Center for AI Safety

Riki Parikh Alliance for Secure AI

Christian F. Nunes President, Saving Ourselves Foundation Inc.

Brian Boyd Future of Life Institute

Chase Hardin Future of Life Institute

Dalia Hashad Future of Life Institute

Saheb Gulati Center for AI Safety

Lucas Hansen CivAI

Marianna Richardson G20 Interfaith Forum Association

Sacha Hayworth Tech Oversight Project

Shana Mansbach Fathom

John McElliot Servitium AI and Serviti Corp

Holly Elmore Pause AI

Sander Volten Seismic Foundation

Jaron Lanier Computer Scientist, Author

Lizzie Irwin Policy Communications Specialist, Center for Humane Technology

Valerie M. Hudson University Distinguished Professor, Texas A&M University (and the Aegix Institute)

Maurine Molak Founder, David’s Legacy Foundation

Ron Ivey Founder and CEO, Noēsis Collaborative

Kirk Doran Associate Professor of Economics, University of Notre Dame

Maria S. Eitel Founder, Plan A

Geoffrey Miller Associate Professor of Psychology, University of New Mexico

Camille Crittenden Executive Director, CITRIS and the Banatao Institute, University of California

Joseph Vukov Associate Professor of Philosophy, Loyola University Chicago

David Evan Harris Chancellor's Public Scholar, University of California, Berkeley

Zachary Davis Co-Founder, Faith Matters

Seán Coughlan Director, To Zero

Andrew Broz AI Research & Strategy, Civilization Research Institute

David Brenner Co-Founder and Board Chair Emeritus, AI and Faith

Emilia Ismael Head of Communications & Operations, To Zero

Miki Yamashita Actor, SAG-AFTRA

Heather-Ashley Boyer Actor, Los Angeles Local Board Member, SAG-AFTRA

Anamitra Deb SVP, Programs and Policy, Omidyar Network

Catherine Bracy CEO & Founder, Tech Equity

Marie Fink Stunt Coordinator, SAG-AFTRA, Los Angeles National Board Member

Wes McEnany Future of Life Institute

Konstantine Anthony Councilmember, City of Burbank

Taylor Jones Design & Web Manager, Future of Life Institute

Stephen Casper AI researcher, Massachusetts Institute of Technology

Ben Cumming Director of Communications, Future of Life Institute

Tristan Zucker Head of Operations, Humans First

Nancy Green Saraisky Executive Director, Ethical Tech Project

Anna Yelizarova Special Projects Lead, Future of Life Institute

John Richard President, Essential Information

Ryan T. Anderson President, The Ethics and Public Policy Center

Clare Morell Fellow, The Ethics and Public Policy Center

Lisa Gilbert Co-President, Public Citizen

Robert Weissman Co-President, Public Citizen

Teri Olle Vice President, Economic Security Project

Brendan Bradley

Michelle Margolis Librarian, Columbia University

Colin McGlynn AI Policy Advisor, Demand Progress Education Fund

Sneha Revanur Founder & President, Encode

Heather Booth Organizer

Robert Creamer Partner, Democracy Partners

Brittney Gallagher Co-Founder, AI Objectives Institute

Rania Batrice Strategist, Founder and President, Batrice and Associates

Leah Seligmann CEO, The B Team

Hon. Jeff Denham U.S. Representative, CA-10 (2011–2019)

Andrew Trask Executive Director, OpenMined

Lawrence Lessig Roy L. Furman Professor of Law and Leadership, Harvard University
