NOTE: I'll be doing specific funding recs + conclusion + rewriting intro. For now, would appreciate feedback re: the body and the argument in general!

The highest-leverage path for donors looking to give to AI safety right now is to give less money, but to more neglected destinations.

Specifically, you should strongly consider routing what would have been a c3 gift to a c4 instead, calibrated to equivalent personal cost, even if the net dollar amount of your donation is significantly lower.


c3 vs c4: what's the difference?

501(c)(3) organizations are tightly limited in how much money and time they can spend on lobbying and policy advocacy. They're also strictly prohibited from participating in political campaigns or supporting candidates:

Under the Internal Revenue Code, all section 501(c)(3) organizations are absolutely prohibited from directly or indirectly participating in, or intervening in, any political campaign on behalf of (or in opposition to) any candidate for elective public office… Violating this prohibition may result in denial or revocation of tax-exempt status and the imposition of certain excise taxes.

—IRS.gov

Donations to 501(c)(3)s are tax-deductible, but the trade-off is that these organizations can't do much direct political advocacy unless they're willing to put their c3 status at risk of being revoked entirely.

In contrast, 501(c)(4)s can spend unlimited amounts on lobbying and engage in direct political advocacy, as long as their primary purpose remains social welfare. Donations to c4s aren't tax-deductible, but they have far more freedom to engage in the political process.


why does it matter?

Two points I'll be making in this section:

  1. AI safety 501(c)(3) work is substantially less funding-constrained than 501(c)(4) work, and as a result,
  2. The marginal dollar spent on lobbying and political advocacy is more important than the marginal dollar spent on research.

AI safety 501(c)(3) work is substantially less funding-constrained than 501(c)(4) work

Here are some examples of grants given to organizations in the AI safety space last year:

  • FAR.AI, a research nonprofit, secured north of $30m from several different funders to scale their technical safety research.
  • Redwood Research received a $36,566,000 grant from Coefficient Giving to advance their work on AI control and alignment faking.
  • Bluedot Impact, a talent accelerator program, received $25,649,888 in general support from CG.
  • MATS, an AI safety research fellowship, received nearly $40m from CG.

All of the above are 501(c)(3) organizations.

Let's zoom in on CG for a moment, since it's the biggest funder in the field. Their TAI fund appears to have made 158 grants in 2025.1

Of the 158 listed, a grand total of three appear to have been for policy work: RAND Corporation ($2,000,000), Institute for AI Policy and Strategy ($11,510,081), and Training for Good ($461,069).

Notably, none of the above organizations can engage in substantial lobbying or advocacy work. RAND is a c3, IAPS is fiscally sponsored by a c3, and Training for Good received funding earmarked for a policy fellowship.2

(I encourage you to spend some time on Coefficient Giving's Navigating Transformative AI Fund grant database and examine the grants yourself to verify the c3/c4 asymmetry.)


So a bunch of organizations in the AI safety space received tens of millions of dollars each last year. That's great! But do you know what else happened last year?

The Center for AI Policy (CAIP), one of just a handful of organizations engaged in direct lobbying for safety legislation, shut down due to lack of funding.

CAIP shut down in September 2025 because it couldn't raise $150K/month ($1.8m/year); meanwhile, c3 research orgs were receiving tens of millions.

The problems CAIP faced are indicative of a broader pattern:

  • We don't have enough funding going towards political AI safety work.
    • Part of this is due to the concentration problem: when the vast majority of funding flows through a handful of institutional funders like CG, and those funders face structural and reputational constraints that bias them toward 501(c)(3) research over 501(c)(4) advocacy, the entire field inherits that bias.
    • Beyond my brief empirical exploration of CG's TAI Fund, more evidence to support this can be found in a post by CAIP's director (he cites someone familiar with the space, who estimates that ~1-10% of AI safety funding goes towards advocacy, with a best estimate of 2.5% per year).
  • As a result, we don't have enough talent engaged in AI safety advocacy.
Specifically, CAIP's executive director estimated that there are roughly 3-4 AI safety governance researchers for every AI safety advocate: approximately 200 researchers to fewer than 60 advocates in the US.3

lobbying and political advocacy are far more neglected than research, and marginal dollars here go further as a result

To put it bluntly, the lack of talent and funding engaged in AI safety lobbying is why the accelerationists are winning in Washington.

Some notes:

  • Lobbying works, which is why the AI industry spends so much on it. Registered lobbying firms earned "almost $92 million in the first three quarters of 2025" from AI-related issues alone.
    • "More than one in four federal lobbyists are now pushing AI-related agendas, according to a new report from Public Citizen—and they are overwhelmingly working for corporate interests seeking to influence federal AI policy, or block state rules over the industry."
    • "Over 500 organizations have lobbied the White House and Congress on artificial intelligence policies in the first half of 2025"
  • The asymmetry is staggering:
    • On the industry side:
OpenAI spent $2.99 million on lobbying in 2025, up from $260,000 in 2023. Eleven Big Tech companies spent over $105 million on federal lobbying in 2025. Beyond lobbying, AI companies donate hundreds of millions to super PACs; Leading the Future alone has a $125 million war chest, funded heavily by OpenAI's president Greg Brockman.
    • On the safety side:
      • The CAIS Action Fund spent $310,000 on lobbying in all of 2025. As of Q1 2024, CAIS Action Fund and Center for AI Policy had a combined 10 registered lobbyists between them (that was, of course, before CAIP shut down). Public First Action, a c4 focused on lobbying for safety legislation, received a $20m contribution from Anthropic (less than a sixth of Leading the Future). AnthroPAC, Anthropic's new safety-aligned PAC, runs entirely on voluntary donations from employees, capped at $5k per person, per year.

In December 2025, Trump signed an executive order to thwart state-level AI regulation. In 2024, SB-1047 was vetoed in California after intense industry lobbying.

Right now, the industry is winning the legislative war because AI safetyists are not putting up a fight.


the pitch: donate less, donate c4

Whether or not CAIP specifically was the right bet, the gap between funding for research and funding for advocacy is enormous, and someone needs to fill it.

If you're an individual donor with no institutional constraints, you are uniquely positioned to do so.


some math because I aspire to be a Real Asian™

A top-earning donor giving $1M to a c3 actually spends ~$650K after the federal deduction (capped at 35% under the One Big Beautiful Bill Act, effective 2026).

This means that $650K, given directly to a c4, represents an equivalent personal cost with arguably greater counterfactual impact.

The Formula

C4-equivalent = X − [(X − floor) × effective deduction rate*]

Where:

X = intended c3 gift

floor = 0.5% of Adjusted Gross Income (under the One Big Beautiful Bill Act, the first 0.5% of AGI donated to c3 organizations is not deductible)

effective deduction rate = at a federal level, capped at 35% for top-bracket donors

*State tax deductions may further decrease the c4 equivalent, especially in higher-tax states

So, for a donor with $5M AGI giving $1M to a c3, these are roughly the numbers:

Floor = $25,000 (non-deductible portion of c3 gift)

Deductible portion = $975,000

Tax savings = $975,000 × 35% = $341,250

Net personal cost of c3 donation = $658,750

This is the equivalent that should be donated to a c4.
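The arithmetic above can be sketched as a small function (a rough federal-only sketch under the assumptions stated in the formula: the 0.5%-of-AGI floor and a 35% effective deduction cap; it ignores state taxes and non-cash gifts):

```python
def c4_equivalent(c3_gift: float, agi: float, deduction_rate: float = 0.35) -> float:
    """Personal-cost-equivalent c4 donation for an intended c3 gift.

    Federal-only sketch: applies the OBBBA's 0.5%-of-AGI deduction floor
    and a 35% effective deduction rate (top-bracket cap, effective 2026).
    """
    floor = 0.005 * agi                       # first 0.5% of AGI confers no deduction
    deductible = max(0.0, c3_gift - floor)    # portion that actually reduces taxes
    tax_savings = deductible * deduction_rate
    return c3_gift - tax_savings              # net out-of-pocket cost of the c3 gift

# Worked example from the text: $5M AGI, intended $1M c3 gift
print(round(c4_equivalent(1_000_000, 5_000_000), 2))  # 658750.0
```

This reproduces the $658,750 figure: $25,000 floor, $975,000 deductible, $341,250 in tax savings.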


the floor: a thing that is important to know for mid-tier donors

Starting in 2026, the first 0.5% of a donor's AGI in charitable giving confers zero federal tax benefit under the One Big Beautiful Bill Act's new deduction floor.

For a donor whose total c3 giving falls at or below that floor, c3 and c4 donations cost exactly the same: with no deduction, the gift amount equals your out-of-pocket cost. In this case, the tax argument for c3 donations disappears entirely, and c4 wins on any positive impact multiplier.

As an example, let's say we have an Anthropic engineer with $3M AGI who gives $15K to AI safety. The floor is $15K, so zero of that engineer's $15K gift is deductible, meaning c3 and c4 are tax-identical.
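Running the numbers for that engineer (same federal-only assumptions as the formula above):

```python
agi = 3_000_000
gift = 15_000

floor = 0.005 * agi                  # $15,000: first 0.5% of AGI is non-deductible
deductible = max(0.0, gift - floor)  # $0: the entire gift sits at/under the floor
tax_savings = deductible * 0.35      # $0: no federal benefit whatsoever
net_cost = gift - tax_savings        # $15,000 out of pocket, c3 or c4 alike

print(net_cost)  # 15000.0
```

With zero deductible dollars, the donor's out-of-pocket cost is identical either way, so the choice comes down entirely to impact.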


caveats so I don't get sued

I am not your financial advisor and would recommend discussing your specific case with a qualified CPA. For example, the above math doesn't apply if you're planning on donating stocks directly, because of capital gains tax (it's complicated).

1 Caveat: their page says "featured grants," so it's possible that there are more grants not listed on the page, but I don't know how to verify this. The asymmetry still pretty clearly exists, though.

2 Training for Good is/was also probably a c3, though their website is inactive so I wasn't able to verify this.

3 Reminds me of the 30-60 AI safety grantmakers figure.