1. For individual donors: focus on filling institutional gaps

Note: I recommend reading Section 1 as it was published standalone on Substack, due to formatting problems I had when pasting it below. Better-formatted versions of Sections 2 and 3 are also available here.

In one sense, AI safety is not funding-constrained. There are hundreds of millions of dollars flowing into research orgs, more coming with the Anthropic IPO, and grantmakers are already struggling to deploy what they have.

And yet: the AI industry spent over $100 million lobbying the federal government in 2025, while the biggest AI safety c4 spent $310,000.

The highest-leverage path for donors looking to give to AI safety right now is to give less money, but to more neglected destinations.

Specifically, you should strongly consider routing what would have been a c3 gift to a c4 instead, calibrated to equivalent personal cost.


c3 vs c4: What's the difference?1

501(c)(3) organizations are tightly limited in how much money and time they can spend on lobbying and advocacy-shaped political work. They're also strictly prohibited from participating in political campaigns or supporting candidates:

Under the Internal Revenue Code, all section 501(c)(3) organizations are absolutely prohibited from directly or indirectly participating in, or intervening in, any political campaign on behalf of (or in opposition to) any candidate for elective public office… Violating this prohibition may result in denial or revocation of tax-exempt status and the imposition of certain excise taxes.

—IRS.gov

Donations to 501(c)(3)s are tax-deductible, but the trade-off is that these organizations can't do much by way of direct political advocacy— unless they're willing to put their c3 status at risk of being revoked entirely.

In contrast, 501(c)(4)s can spend unlimited amounts on lobbying and engage in direct political advocacy, as long as their primary purpose remains social welfare. Donations to c4s aren't tax-deductible, but they have far more freedom to engage in the political process.


Why does it matter?

AI safety 501(c)(3) work is substantially less funding-constrained than 501(c)(4) work. As a result, the marginal dollar spent on lobbying and political advocacy is more important than the marginal dollar spent on research.

Here are some examples of grants given to organizations in the AI safety space last year:

  • FAR.AI, a research nonprofit, secured north of $30m from several different funders to scale their technical safety research.
  • Redwood Research received a $36,566,000 grant from Coefficient Giving to advance their work on AI control and alignment faking.
  • Bluedot Impact, a talent accelerator program, received $25,649,888 in general support from CG.
  • MATS, an AI safety research fellowship, received nearly $40m from CG.

All of the above are 501(c)(3) organizations.

Let's zoom in on CG for a moment, since it's the biggest funder in the field. Their TAI fund appears to have made 158 grants in 2025.2 Of the 158 listed, a grand total of three appear to have been for policy work: RAND Corporation ($2,000,000), Institute for AI Policy and Strategy ($11,510,081), and Training for Good ($461,069).

Notably, none of the above organizations can engage in substantial lobbying or advocacy work. RAND is a c3, IAPS is fiscally sponsored by a c3, and Training for Good received funding earmarked for a policy fellowship.

In researching CG's grant database, I did find one notable c4 bet: $3M in general support to Americans for Responsible Innovation in August 2024. ARI is bipartisan, which may explain why it cleared CG's reputational bar. It's a good start, but it appears to be an exception rather than a pattern. Benjamin Todd's January 2025 post on 80,000 Hours noted that CG had recently stopped funding "many Republican-leaning think tanks, such as the Foundation for American Innovation," and I haven't been able to determine whether their support for ARI has continued.

(I encourage you to spend some time on Coefficient Giving's Navigating Transformative AI Fund grant database and examine the grants yourself to verify the c3/c4 asymmetry.)


So a bunch of c3 organizations in the AI safety space received tens of millions of dollars each last year. Meanwhile, the AI industry was deploying its own dollars in a very different way:

  • Registered lobbying firms earned "almost $92 million in the first three quarters of 2025" from AI-related issues alone.
    • "More than one in four federal lobbyists are now pushing AI-related agendas, according to a new report from Public Citizen—and they are overwhelmingly working for corporate interests seeking to influence federal AI policy, or block state rules over the industry."
    • "Over 500 organizations have lobbied the White House and Congress on artificial intelligence policies in the first half of 2025"
  • The asymmetry is staggering:
    • On the industry side: OpenAI spent $2.99 million on lobbying in 2025— up from $260,000 in 2023. 11 Big Tech companies spent over $105 million on federal lobbying in 2025. Beyond lobbying, AI companies donate hundreds of millions to super PACs. Leading the Future alone has a $125 million war chest, funded heavily by OpenAI's president Greg Brockman.
Worth a callout:

"Eight of the largest tech, AI, and social media companies spent a combined $36 million on federal lobbying during the first half of 2025 — an average of roughly $320,000 per day that Congress has been in session."

—Issue One
  • On the safety side: The CAIS Action Fund spent $310,000 on lobbying in all of 2025. As of Q1 2024, CAIS Action Fund and Center for AI Policy had a combined 10 registered lobbyists between them (that was, of course, before CAIP shut down). Public First Action, a c4 focused on lobbying for safety legislation, received a $20m contribution from Anthropic (less than a sixth of Leading the Future, and the money "isn't allowed to be used in the midterm battles"). AnthroPAC, Anthropic's new safety-aligned PAC, runs entirely on voluntary donations from employees, capped at $5k per person, per year.

Lobbying works, which is why the AI industry spends so much on it:

  • In 2024, SB-1047 was vetoed in California after intense industry lobbying.
  • In December 2025, Trump signed an executive order to thwart state-level AI regulation.
  • After Nvidia spent $4.95M lobbying the federal government in 2025, seven times what it spent in 2024:
    • The Trump administration substantially weakened export controls, allowing both H20 and H200 chip exports to China.
    • The bipartisan GAIN AI Act, which would have required chipmakers to fulfill U.S. orders before selling abroad to "countries of concern," passed the Senate in October as part of the NDAA— but was killed in conference after Nvidia lobbied against it.

Right now, the industry is winning the legislative war because the safety side is not putting up a fight.


Research vs. Lobbying

Technical safety research only matters if frontier labs actually implement it. Even if Anthropic adopts the latest alignment techniques voluntarily, other companies probably won't if it slows them down or cuts into margins.

Without legislation, every safety advance is effectively optional— a recommendation companies can take or leave depending on competitive pressure.

Legislation is what makes technical safety advances mandatory for frontier labs. It's also what buys time. Every regulation that imposes a meaningful evaluation requirement, every disclosure mandate, every liability framework, every whistleblower protection policy slows the race and gives safety researchers more runway to solve the problems we don't yet know how to solve.


The Pitch: Same Money, Different Structure, More Impact

While we have tens of millions of dollars pouring into major organizations in the c3 space, we don't have nearly enough funding going towards political AI safety work. The gap between funding for research and funding for advocacy is enormous.

Part of this is due to the concentration problem: currently, the vast majority of AI safety funding flows through a handful of institutional funders. Some of those funders (including the vast majority of private foundations,3 most DAFs, and corporate matching programs) are structurally barred from significant c4 giving.

Others, including CG, can technically fund c4s but face reputational constraints that make them cautious about politically charged advocacy, especially work that opposes specific industry players. As a result, the entire field has inherited a bias toward c3 research over c4 advocacy.

Individual donors face neither constraint. That makes them the only actors in the ecosystem without structural or reputational barriers to c4 giving, and the people best positioned to close this gap.


Some math

A top-earning donor giving $1M to a c3 actually spends ~$650K after the federal deduction (capped at 35% under the One Big Beautiful Bill Act, effective 2026).

This means that $650K, given directly to a c4, represents an equivalent personal cost with arguably greater counterfactual impact.

The Formula

C4-equivalent = X − [(X − floor) × effective deduction rate]

Where:

X = intended c3 gift

floor = 0.5% of Adjusted Gross Income (a new rule under the One Big Beautiful Bill Act says you can't deduct the first 0.5% of AGI donated to c3 organizations)

effective deduction rate = at a federal level, capped at 35% for top-bracket donors

Important: State taxes may decrease the c4 equivalent, especially in higher-tax states.

So, for a donor with $5M AGI giving $1M to a c3, these are roughly the numbers:

Floor = $25,000 (non-deductible portion of c3 gift)

Deductible portion = $975,000

Tax savings = $975,000 × 35% = $341,250

Net personal cost of c3 donation = $658,750

This is the equivalent that should be donated to a c4.


The floor: important to know for mid-tier donors

Starting in 2026, the first 0.5% of a donor's AGI in charitable giving confers zero federal tax benefit under the One Big Beautiful Bill Act's new deduction floor.

For a donor whose total c3 giving falls at or below that floor, c3 and c4 donations will cost exactly the same, because the amount you're actually donating is equal to your personal cost (there is no deduction). In this instance, the tax case for c3 donations disappears entirely,4 and c4 wins on any positive impact multiplier.

As an example, let's say we have an Anthropic engineer with $3M AGI who gives $15K to AI safety. The floor is $15K, so zero of that engineer's $15K gift is deductible, meaning c3 and c4 are tax-identical.
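To make the arithmetic concrete, here is a minimal Python sketch of the formula, assuming the OBBBA rules exactly as stated above (a 0.5%-of-AGI deduction floor and a 35% effective federal rate for top-bracket donors) and ignoring state taxes; this is an illustration, not tax advice:

```python
def c4_equivalent(c3_gift: float, agi: float, rate: float = 0.35) -> float:
    """Net personal cost of a c3 gift, i.e. the equivalent c4 donation."""
    floor = 0.005 * agi                     # first 0.5% of AGI is non-deductible
    deductible = max(0.0, c3_gift - floor)  # only the portion above the floor counts
    tax_savings = deductible * rate         # federal deduction, capped at 35%
    return round(c3_gift - tax_savings, 2)

# Worked example from above: $5M AGI donor giving $1M to a c3
print(c4_equivalent(1_000_000, 5_000_000))  # 658750.0

# Mid-tier case: $3M AGI, $15K gift sits entirely under the floor,
# so the c3 gift confers no deduction and costs its full face value
print(c4_equivalent(15_000, 3_000_000))     # 15000.0
```

The function name and rounding are my own; the outputs match the two worked examples in this section.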


Caveats

I am not your financial advisor and would recommend discussing your specific case with a qualified CPA. For example, the above math doesn't apply if you're planning on donating stocks directly, because of capital gains tax (it's complicated).


Specific Funding Recommendations

The orgs below take different approaches. Some focus on catastrophic risk directly, others on adjacent concerns like child safety, worker protection, or chip security. Read the descriptions and pick what fits your priorities.

Humans First Action

Here's the reality…
Congress is barely in session.
Big AI is moving at full speed.
And the window to put real safeguards in place is closing fast.

—Humans First, D.C. Town Hall Tour
  • What: Humans First Action is a conservative, populist advocacy organization focused on "an America First approach to the future of AI." They recently finished a Town Hall Tour spanning 10 cities in 6 states.
    • "We are not going to sit back while a handful of billionaire tech executives reshape our economy, our families, and our future."
    • They also have a c3, but I would specifically recommend donating to their c4 if you can.
  • Link to donate: humansfirst.com/donate

ControlAI

  • What: ControlAI is an organization that "campaign[s] on preventing extinction risk." They have briefed 250+ lawmakers across the US, UK, Canada, and Germany.
    • They've also collaborated with TIME, The Guardian, Sci Show, and other content creators to produce educational content regarding the risks of superintelligence.
  • Contact to donate: hello@controlai.com

Alliance for a Better Future

  • What: Alliance for a Better Future is a conservative organization focused on child safety, job loss from automation, and ensuring that America's democratic principles guide how AI is built and who gets to make decisions about its development.
    • Sen. Marsha Blackburn (who proposed the Trump America AI Act) has said: "The Alliance for a Better Future represents the overwhelming majority of Americans who want to see Congress establish safeguards for AI. We're grateful to have their support as we work to codify President Trump's AI agenda to protect Americans and empower innovators."
  • Contact to donate: contact@betterfutureai.org

Center for AI Safety Action Fund

  • What: CAIS's Action Fund has four primary policy priorities: (1) ensuring the U.S. retains its competitive advantage over China as it relates to AI chip manufacturing, (2) implementing and enforcing export controls, (3) preventing bad actors from using AI to cause harm, and (4) advocating for international cooperation to advance safe AI.
    • They spent $310,000 on lobbying in 2025 and work closely with CAIS's D.C. branch.
  • Link to donate: safe.ai/donate

AI Policy Network

  • What: A bipartisan organization that lobbies Congress on "establishing effective guardrails, and ensuring the United States remains both dominant and safe." Some of their work focuses on loss of control risk, and they are one of the only registered lobbying entities that speaks about superintelligence directly to the national security establishment.
  • Link to donate: theaipn.org/support-us/

Public First Action

  • What: A bipartisan advocacy organization focused on child safety, supporting workers, and safeguards to protect the general public. Also does education/awareness-type work on AI risk. Seeded with a $20m donation from Anthropic.5
  • Link to donate: publicfirstaction.com

A note about PACs and political campaigns

Some practitioners in the AI safety policy space believe donations to politicians and PACs are meaningfully more impactful than the best c4s. Zach Stein-Perlman, who advises donors in this area, has estimated them to be roughly "5x as impactful."

I haven't independently verified this, but I think it could be true, depending on assumptions surrounding timing, electoral leverage, and the specific donation opportunities Zach might point you towards. (Also worth flagging: Meta is spending $65M this year on super PACs designed to elect AI-friendly state officials— the biggest election investment in the company's history. That's a pretty good sign this kind of giving works.)

However, there are trade-offs: PAC contributions are publicly disclosed, legally more complicated, subject to contribution limits, and restricted to US permanent residents; there may also be reputational risk for recipient political candidates depending on who you are (e.g. see the Carrick Flynn case study; I think being backed by SBF almost certainly hurt his odds).

That said, if you're considering donating $20K+ and willing to handle the complexity and public disclosure, PACs may be worth exploring. Zach has offered to advise on this (more on how to reach him below).

For most readers, the c4 case I've made above is probably still the right starting point.


Next Steps

The AI safety field is flush with money, yet starving for advocacy. The next billion dollars to CG or a DAF probably won't solve this problem.

If you care about AI safety and you were planning to donate this year: before defaulting to a c3, run the math above and seriously consider whether a c4 gift at equivalent personal cost would do more.

If you want to talk this through before giving, here are some options:

  • Reach out to me directly at kairos@stanford.edu. I'm happy to think through your situation and point you toward people in the ecosystem whose judgment I trust for your specific case.
  • Contact Aidan O'Gara at Longview (aidan@longview.org), who has comprehensively reviewed the c4 policy advocacy ecosystem and welcomes outreach from donors considering gifts of $100K+.
  • Contact Zach Stein-Perlman on the EA Forum, who has offered to advise US permanent residents considering $20K+ donations to politicians and PACs, which he estimates could be roughly 5x more impactful than the best c4s.6

Pick whoever fits. Different people are better positioned for different donor profiles.


Conclusion

This is an opportunity to maximize your counterfactual impact in a way that almost no other giving can. Giving well is hard. It takes research, judgment, and time. It would be easier to park money in a DAF and forget about it, or forward donations to whoever's most legible, or give to people you like, or just not give at all.

But the political work that needs funding right now is work that most institutional funders can't or won't touch. The dollars sitting in your account are structurally different from every other dollar in the ecosystem, and deploying them well might matter more than any other decision you make this year.

Thank you to Jack Douglass for helpful discussion & feedback.


2. Lower the bar for individual donors

A new platform focused on maximizing donors' counterfactual impact

Right now, if you're an individual donor who wants to fund AI safety advocacy work, your options are limited. You can spend weeks doing your own research, or you can hand your money to CG or Longview and accept their priorities as your own.

Manifund has partially solved this problem for small grants to individuals and early-stage projects. But a meaningful gap remains for donations to established organizations doing political advocacy, lobbying, and movement-building: organizations that already have a track record and credibility, but that institutional funders won't or can't support.

Individual donors should have a way to discover and evaluate these organizations without having to become full-time grantmakers themselves. What we need is a curated pipeline of credible, funding-constrained organizations— vetted for basic operational competence, with transparent financials and clear theories of impact— that a donor can review, compare, and fund with confidence.

Rather than "Manifund for bigger grants," this proposal is focused on creating an aggregated information database to connect independent donors with established organizations that fall outside the institutional funding ecosystem.

By limiting it to organizations rather than individuals, and by requiring a baseline of credibility, the platform would dramatically reduce the evaluation burden on donors while still preserving their ability to make independent, decorrelated bets.

Better credibility signals on Manifund

Manifund has built valuable infrastructure for small-scale regranting, but if you land on the platform today, you'll see dozens of projects with no easy way to tell which ones have been vetted or which are endorsed by regrantors. This matters because many individual donors (especially, say, a busy Anthropic employee) don't have the time to read through 20+ project descriptions to see who to give to.

The platform could become dramatically more useful for incoming donors if it had clearer credibility signals. Some preliminary ideas from me:

  • Regrantor-created tier lists ranking their highest-conviction opportunities
  • Visible endorsement flairs on projects that regrantors have reviewed and recommend
  • A "Manifund staff picks" badge for opportunities the team has specifically vetted

3. Seed new organizations

Specific suggestions regarding what should exist, but doesn't:

  • Talent scouting & career transition. The AI safety field has roughly 1,100 to a few thousand people working on existential risks from AI. For comparison, the Nature Conservancy alone has 4,000–10,000 employees. The entire existing talent pipeline (MATS, BlueDot, 80,000 Hours, ARENA) relies on self-selection: people find AI safety on their own and apply to hyper-competitive programs. We should consider seeding an organization focused on proactively recruiting top talent from fields like neuroscience, intelligence analysis, physics, math, and engineering.
  • Professional lobbying & political advocacy. I've already gone into detail about the asymmetry in what safetyists and accelerationists spend in Washington; retaining a K Street firm or building a dedicated 501(c)(4) would make AI safety one of the most active advocacy voices on the Hill.
  • Public engagement & media. 73% of Americans support mandatory safety measures for AI, but there is no dedicated media operation translating that concern into political pressure.
    • The Social Dilemma, produced by the Center for Humane Technology, became one of Netflix's most-watched documentaries of 2020 and shifted the entire discourse around social media harms.
    • Currently, AI safety's closest equivalent is The AI Doc, which features interviews with Sam Altman, Dario and Daniela Amodei, and Demis Hassabis and received a substantial amount of media attention. But there is no dedicated, ongoing AI safety media operation focused on educating the general public and encouraging people to get involved.
  • Polling & public opinion research. AIPI has published dozens of polls showing overwhelming bipartisan support for AI safety measures and has been cited by Public Citizen in live policy fights. However, the field lacks ongoing, granular research into what sort of messaging resonates most with voters, especially in swing states. A/B testing of specific campaign frames tells advocacy organizations how to actually move people to act. This research would be a force multiplier for every advocacy dollar spent.

We should also consider establishing the AI safety equivalent of an XPRIZE or a Millennium Problem— a public, high-prestige challenge that signals to the world what the field considers its most important unsolved problems, and offers substantial monetary rewards for people who solve them.

1 Note: I'm mostly using c3 vs c4 as a shorthand for research vs. advocacy; it's important to note that some c3s do engage in lobbying (though, again, they are legally limited in how much they can do). Also, there are other ways to donate to advocacy-shaped things, such as donating directly to PACs and political campaigns; I talk a bit more about direct donations of these kinds later on.

2 Caveat: their page says "featured grants," so it's possible that there are more grants not listed on the page, but I don't know how to verify this. The asymmetry with respect to lobbying and advocacy still pretty clearly exists, though (keep reading for more stats!)

3 Foundations can technically donate to c4s, but only for non-partisan activities. Even then, "private foundations must follow a specific grant-making process called expenditure responsibility when providing grant funds to 501(c)(4)s because they are not public charities. For this reason, private foundations have historically preferred to limit their grantmaking to 501(c)(3) public charities to avoid navigating these more onerous rules" (Alliance For Justice).

4 State-level deductions may still apply depending on where you live— another reason to talk to a CPA about your specific situation.

5 Flagging again: this donation is less than a sixth of Leading the Future, and the money "isn't allowed to be used in the midterm battles."

6 I haven't personally chatted with Aidan at Longview or Zach Stein-Perlman, nor have I reviewed Aidan's specific c4 recommendations. I plan to do both as I do deeper research into PACs and other c4 orgs not on this list, for a separate project. I'm flagging their public offers to advise donors, not endorsing the specific advice they'll give— if you reach out to either, you may get framing that diverges from this post, and either could be right for your situation.