Effective Altruism (EA) is one of the most intellectually rigorous and empirically grounded moral movements of our time. It encourages people to think critically about how to do the most good with their resources, whether by reducing suffering, extending lives, or mitigating existential risk. Its appeal lies in its clarity of purpose and willingness to act on difficult moral math.
But behind this clarity lies a fundamental error: the assumption that its values are not just effective, but objectively true.
This post argues that while EA is a coherent and admirable voluntary moral framework, it has no claim to moral universality. If value is subjective—as we've argued throughout this series—then so is morality. EA’s conclusions follow from its premises, but the premises are optional. What EA lacks is what no moral theory has ever supplied: a non-subjective source of value.
Effective Altruism is usually grounded in a kind of modernized utilitarianism. The logic is elegant:
Maximize good outcomes.
Use evidence and reason to compare interventions.
Measure value using proxies like QALYs (quality-adjusted life years), suffering reduction, or expected impact.
Prioritize the most impactful actions, regardless of emotional appeal or proximity (see the sketch after this list).
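To make the comparison step concrete, here is a minimal sketch of the kind of arithmetic this implies; the interventions, costs, and QALY figures below are illustrative placeholders, not real estimates.

```python
# Hypothetical cost-effectiveness comparison in the EA style.
# All figures are illustrative placeholders, not real estimates.
interventions = {
    "insecticide_treated_nets": {"cost_usd": 1_000_000, "qalys_gained": 10_000},
    "guide_dog_training": {"cost_usd": 1_000_000, "qalys_gained": 250},
}

for name, d in interventions.items():
    cost_per_qaly = d["cost_usd"] / d["qalys_gained"]
    print(f"{name}: ${cost_per_qaly:,.0f} per QALY")

# The ranking that falls out of this arithmetic is only a "should"
# for agents who have already accepted QALYs as the value metric.
```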
But this structure only works if you assume that utility is objectively comparable across people and contexts. That assumption cannot be defended once you accept that all value is subjective.
Utilitarianism relies on interpersonal comparisons of utility and aggregation over entire populations. But:
If value is always agent-relative, whose values are being aggregated?
If there is no objective vantage point, from where is the utility function being calculated?
If moral imperatives are just expressions of preference, then “maximizing utility” is only a duty for those who endorse that utility metric.
You can build an agent-relative utility function—mine, yours, EA’s—but you can’t claim it’s universally valid without smuggling in objective value through the back door.
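To see where the smuggling happens, here is a minimal sketch (agent names and numbers are purely illustrative) of what any aggregate utility function has to look like. The two inputs flagged in the comments are exactly what a universal claim would need someone, or something, to fix from outside the math.

```python
# Minimal sketch of utility aggregation: W = sum_i w_i * u_i.
# Both flagged inputs are choices, not outputs of reason or evidence.

def aggregate_welfare(utilities: dict[str, float], weights: dict[str, float]) -> float:
    # utilities: each agent's utility, already mapped onto a common scale
    #            (choice 1: who decides the scale and the unit?)
    # weights:   how much each agent counts in the total
    #            (choice 2: who decides the weights?)
    return sum(weights[agent] * u for agent, u in utilities.items())

# Equal weights look "neutral", but equal weighting is itself a value
# judgment that some vantage point has to supply.
print(aggregate_welfare({"alice": 3.0, "bob": -1.0}, {"alice": 1.0, "bob": 1.0}))
```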
To compare and sum up different people’s utilities, you need either:
A shared vantage (e.g. a collective agreement on moral goals),
Or a meta-agent who can coherently weigh everyone’s values from above.
But such a meta-agent doesn't exist. Nature is indifferent. God is silent. And reason, while powerful, does not generate values—it only operates on them.
There is no view from nowhere. There are only overlapping and conflicting subjective valuations.
So when EA says we “should” prioritize malaria nets over guide dogs, that “should” holds only for those who share EA’s value function. If you don’t, then there is no contradiction in doing otherwise.
From the outside, EA may look like a universal moral program. From the inside, it often feels like one. But from the vantage of a subjectivist framework, EA is best understood as a high-agency moral identity: a set of goals, metrics, and heuristics chosen by agents who want to act in certain ways.
That’s not a flaw. It’s a feature.
It means EA is:
Voluntary
Consensual
Internally coherent
Agent-owned
It is not:
Morally binding on others
A consequence of reason alone
Justifiable as a universal imperative
EA is a LARP—but one that saves lives. That’s more than can be said for most moral frameworks.
Effective Altruists don’t need moral realism. They need shared goals and coordinated action. They need rigor, creativity, and integrity—not metaphysical certainty.
Let go of the illusion of moral objectivity, and EA becomes stronger, not weaker: a community of agents acting on deeply held values, not missionaries enforcing imaginary absolutes.
In a universe without objective value, the only morality that matters is the one you choose. EA is a good choice. Just don’t pretend it’s “true” or the only rational one.