Secret Meeting Sparks AI Political Resistance with "Pro-Human AI Declaration"

In a clandestine gathering in early January, a diverse assembly of 90 political, community, and thought leaders convened at a New Orleans Marriott for a secret conference on artificial intelligence. Organized by the Future of Life Institute (FLI), the meeting brought together an unlikely coalition of church leaders, labor representatives, conservative academics, progressive power brokers, and MAGA commentators, all united by a shared concern for AI's unchecked development.
This week, FLI, a leading voice in AI safety, unveiled the outcome of that unprecedented summit: the "Pro-Human AI Declaration." The concise document sets out five core guidelines demanding that AI development prioritize humanity, focusing on preventing the concentration of power, safeguarding children, families, and communities, and preserving human agency and liberty. The Declaration carries an exceptionally broad spectrum of signatories, signaling a nascent cross-ideological resistance to AI's current trajectory.
Unprecedented Coalition Against Big Tech
Signatories to the Declaration include formidable civic organizations traditionally outside the tech sphere, such as major unions like the AFL-CIO, the American Federation of Teachers (AFT), and the Screen Writers Guild. Religious bodies like the G20 Interfaith Forum Association and the Congress of Christian Leaders have also endorsed it, alongside political groups like the Progressive Democrats of America and conservative think tanks like the Institute for Family Studies. Individual endorsers span the political spectrum, from Ralph Nader and Susan Rice to Glenn Beck and Steve Bannon, highlighting the Declaration's unique bipartisan appeal.
The meeting was held under the Chatham House Rule, with attendees invited by Max Tegmark, FLI co-founder and MIT professor. Participants expressed surprise at how quickly consensus formed despite their varied backgrounds. Joe Allen, co-founder of Humans First, noted that agreement emerged rapidly around critical issues: prohibiting lethal weapons controlled solely by AI, preventing AI companies from exploiting children's emotional attachments, and denying AI legal personhood. Even the least popular position garnered 94% approval, underscoring a deep, non-partisan concern about AI's potential harms.
A Deliberate Shift from Industry-Led Efforts
Notably, the New Orleans meeting deliberately excluded representatives from the tech industry, a stark contrast to FLI's earlier 2017 Asilomar Conference for Beneficial AI, which included luminaries like Sam Altman and Elon Musk. Emilia Javorsky, FLI's Director of the Futures Program, stated this was a conscious decision, citing how corporate interests often dominate discussions and overshadow broader societal concerns due to their immense funding and influence. Instead, the focus was on civil society organizations directly experiencing AI's disruptive impact.
Anthony Aguirre, another FLI co-founder, described the Declaration as a somber acknowledgment of a new reality. He emphasized that the power to guide AI's evolution is increasingly concentrated, with major corporations racing for artificial general intelligence and prioritizing shareholder demands over safety. With government deregulation seen as further empowering these companies, a unified public front is viewed as the only remaining force capable of influencing AI's direction. "If the government won’t do it, then the people have to force the government to do it," Aguirre asserted.
Forging a "Pro-Human Movement"
The participants coalesced around a powerful shared sentiment: "We will not have the luxury of debating all of those other issues if we don’t get this thing right. So let’s get this thing right." Randi Weingarten of AFT views the Declaration as a foundational mission statement for a "key demanding coalition" aimed at coordinating efforts against a system perceived to elevate corporate enterprise over societal well-being. This broad alliance, she believes, can exert significant pressure on lawmakers that individual groups cannot.
While the precise path for this "pro-human movement" remains undefined, early indicators suggest strong public resonance. A February poll commissioned by FLI found overwhelming bipartisan support for the Declaration's principles: statements on preventing AI monopolies received 69% approval, and those emphasizing human control and protection for children, families, and communities garnered 80% support. These results reinforce the conference's core message, indicating public readiness for a more regulated, human-centric approach to AI.
Amid recent controversies—such as Anthropic's debate with the Pentagon over autonomous lethal weapons and OpenAI's maneuvers for defense contracts—the urgent need for oversight is increasingly apparent. Alan Minsky of Progressive Democrats of America anticipates broad political support for the Declaration, seeing it as a necessary counter to what he describes as tech leaders' "flippant manner towards serious threats to communities" and their "utter contempt for the average person’s welfare."
FAQ
Q: What is the Pro-Human AI Declaration? A: It is a concise document with five guidelines for AI development, centered on prioritizing humanity. Its core tenets include avoiding power concentration, preserving the well-being of children, families, and communities, and protecting human agency and liberty.
Q: Who organized the meeting that led to this Declaration? A: The Future of Life Institute (FLI), a prominent organization in AI safety, organized the secret meeting in early January in New Orleans. The meeting was spearheaded by FLI co-founder and MIT professor Max Tegmark.
Q: How does this initiative differ from past AI safety efforts? A: Unlike previous AI safety conferences, which often included tech industry leaders, this initiative deliberately excluded corporate representatives. It focused instead on bringing together a broad and politically diverse group of civil society leaders, unions, and community advocates to counter what they perceive as tech's unchecked power and government's inaction.