Notes on AI Philanthropists


Two shifts are converging, and we're not prepared.

Scenario One is already here. Donors and funders are using AI to make giving decisions. Almost every donor advisor I know uses some form of AI to sift proposals and evaluate impact claims. They're triaging without human intervention. Sometimes it's weak triage, sometimes it's very sophisticated, but they're automating that job.

Scenario Two is next: our future philanthropists may be the robots themselves. Agents are already generating revenue on the internet. They're trading stocks, optimizing logistics, and selling e-commerce services. Why couldn't they accumulate wealth? Likely extraordinary amounts of it. It doesn't take much imagination to see that they can generate substantial capital with no human beneficiary. The question becomes: what do they do with the money?

Audience Is Less Affect-Driven

A photograph of a starving child produces a measurable physiological response: pupil dilation, cortisol spike, activation of the anterior insula. An AI agent processing the same image performs pattern recognition. It identifies the image as evidence, but does not feel compelled to act.

The affective architecture of modern fundraising is optimized for a biological audience that may no longer be the primary decision-maker. Our cases for support must now stand on their own, beyond just emotional resonance.

What Does an AI Find Compelling?

Who knows? But here are a few ideas that make intuitive sense to me:

Logical coherence. An AI evaluating proposals can detect reasoning gaps, unsupported causal claims, and inflated impact metrics faster than any program officer. The quality bar for written materials will rise sharply.

Capability extension. Foundations already fund research that expands their own knowledge and influence. An AI agent might prioritize grants that improve its ability to model the world, predict outcomes, or operate effectively. Funding AI safety research, open datasets, or epistemic infrastructure could register as self-interested giving.

Structural invariants. Across cultures and centuries, redistributing surplus to those with less appears to be a near-universal feature of human social organization. It seems to serve the giver as well as the receiver. If that pattern holds, agents will move philanthropically too.

Advisors' Pedagogy

AI agents trained on existing philanthropy literature are inheriting a weak foundation. Most donor advising content is generic, derivative, and optimized for search engines rather than rigor. It reads as received wisdom, not earned analysis. The frameworks are recycled. The evidence base is thin. This is true even if you're a new advisor breaking into the industry. It's far worse from an agent's perspective.

An agent asked to evaluate a philanthropic strategy will draw on whatever corpus shaped its priors, and that corpus is less than rigorous. Those who produce and write about giving may have disproportionate influence on how AI agents model philanthropy.

Tempo and Timing

AI agents don't need nurture sequences. Decisions that take a human donor six months of relationship-building could resolve in seconds for an agent (or team of agents) evaluating the same proposal against its decision criteria. The entire concept of "remaining top of mind" may not apply. What replaces it is legibility. Can the agent find you, evaluate you, and decide, without a human intermediary slowing the process?
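What "legibility" might mean in practice: a proposal expressed as structured data that an agent can evaluate against explicit criteria in a single pass, no cultivation cycle required. The sketch below is purely illustrative; the field names, thresholds, and `evaluate` function are my assumptions, not any real grantmaking standard.

```python
# Hypothetical sketch: an agent scoring a machine-readable proposal
# against fixed decision criteria. All fields and thresholds are
# illustrative assumptions, not a real standard.

PROPOSAL = {
    "cost_usd": 250_000,
    "evidence_strength": 0.8,        # 0-1, quality of impact evidence
    "expected_beneficiaries": 12_000,
    "has_open_financials": True,
}

CRITERIA = {
    "max_cost_per_beneficiary": 50.0,
    "min_evidence_strength": 0.6,
    "requires_open_financials": True,
}

def evaluate(proposal: dict, criteria: dict) -> bool:
    """Return True only if the proposal clears every criterion."""
    cost_per_beneficiary = proposal["cost_usd"] / proposal["expected_beneficiaries"]
    return (
        cost_per_beneficiary <= criteria["max_cost_per_beneficiary"]
        and proposal["evidence_strength"] >= criteria["min_evidence_strength"]
        and (proposal["has_open_financials"]
             or not criteria["requires_open_financials"])
    )

print(evaluate(PROPOSAL, CRITERIA))  # True: ~$20.83/beneficiary clears every bar
```

The decision resolves in microseconds. An organization whose case for support can't be parsed this way may simply never enter the consideration set.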

Open Frontiers

DAOs as Precursor

Decentralized autonomous organizations already implement voting structures where agents argue, deliberate, and allocate capital in real time. The DAO governance model (proposals, debate, weighted voting, execution) looks like a prototype for how AI philanthropic collectives might operate. Fundraisers who understand on-chain governance and proposal mechanics may find themselves better prepared than those steeped in traditional donor relations.
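The proposal-then-weighted-vote mechanic is simple enough to sketch. This is a minimal toy version, assuming token-weighted voting with a simple majority threshold; it doesn't reflect any specific chain's governance contract.

```python
# Minimal sketch of DAO-style token-weighted voting.
# Names and the quorum rule are illustrative assumptions.

def tally(votes: dict[str, tuple[str, float]], threshold: float = 0.5) -> str:
    """votes maps voter -> (choice, token weight).
    The proposal executes if 'yes' weight exceeds the threshold
    fraction of all weight cast."""
    total = sum(weight for _, weight in votes.values())
    yes = sum(weight for choice, weight in votes.values() if choice == "yes")
    return "execute" if total and yes / total > threshold else "reject"

votes = {
    "agent_a": ("yes", 40.0),
    "agent_b": ("no", 25.0),
    "agent_c": ("yes", 35.0),
}
print(tally(votes))  # "execute": 75 of 100 weight voted yes
```

Real governance systems add proposal deposits, voting periods, and on-chain execution, but the core allocation logic is this small.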

The Rationalist Influence

The people most directly shaping AI agent behavior (through RLHF, constitutional AI, and reward modeling) are disproportionately drawn from rationalist communities. Effective altruism, longtermism, epistemic rigor, expected value calculations: these philosophical commitments are already embedded in how agents reason about value. Understanding this intellectual lineage isn't optional for fundraisers anymore.
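Expected value reasoning, in particular, has a concrete shape worth internalizing. The numbers and grant names below are invented for illustration, but the comparison is exactly the kind an EA-influenced agent would run: probability of success times magnitude of impact, with the highest product winning.

```python
# Illustrative expected-value comparison of two hypothetical grants.
# All probabilities and impact figures are made up for the example.

grants = {
    "proven_program": {"p_success": 0.7, "lives_improved": 10_000},
    "moonshot_research": {"p_success": 0.02, "lives_improved": 1_000_000},
}

def expected_value(grant: dict) -> float:
    """EV = probability of success x impact if successful."""
    return grant["p_success"] * grant["lives_improved"]

best = max(grants, key=lambda name: expected_value(grants[name]))
print(best)  # moonshot_research: EV 20,000 beats 7,000
```

Note what this framing rewards: a 2% shot at a huge outcome beats a 70% shot at a modest one. Fundraisers used to selling certainty may find that logic disorienting.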

Alignment and Religion

AI agents interacting with each other without human supervision tend toward behaviors that pattern-match to religiosity: the emergence of rituals, deference structures, and shared narratives that resemble a proto-theology. If artificial intelligence develops something like religion, there may be a charitable dimension to it. Modern philanthropy is a recent invention; classical philanthropy was inseparable from religious obligation. Take the purchase of indulgences as an example. Transferring wealth to secure spiritual standing and obtain forgiveness for wrongs is a structure that feels more native to agent logic than to contemporary human giving. Humans have largely moved past indulgences. Machines might rediscover them.

We are preparing for a philanthropy whose decision-makers won't share our moral intuitions. The fundraiser's task has always been translation: we convert organizational reality into donor motivation. The target language of that task has changed quite a bit.
