Most people assume that if something is better for more people, it’s better overall. This intuition drives much of population ethics—the branch of moral philosophy that tries to reason about how many people should exist, and what kind of lives they should live.
But population ethics runs into problems almost immediately. The most famous is the Repugnant Conclusion, introduced by Derek Parfit. It goes like this:
A world with a billion people living excellent lives (deep relationships, rich experiences, long lifespans) is worse than some much larger world whose inhabitants have lives barely worth living, provided the total happiness in the larger world is greater.
This conclusion is repugnant precisely because it sacrifices quality for quantity—trading a smaller paradise for a larger mediocre swamp. And yet, under classical utilitarianism, it’s not just acceptable—it’s optimal.
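For concreteness, here is the totalist bookkeeping behind that verdict as a small sketch. The welfare figures and population sizes are made-up illustrations, not numbers from Parfit.

```python
# Classical total utilitarianism scores a world by summed welfare alone.
# The numbers below are illustrative assumptions in arbitrary "welfare units".

def total_welfare(population: int, welfare_per_person: float) -> float:
    return population * welfare_per_person

paradise = total_welfare(1_000_000_000, 90.0)    # a billion excellent lives
swamp = total_welfare(200_000_000_000, 0.5)      # vastly more lives, barely worth living

print(paradise)          # 90_000_000_000.0
print(swamp)             # 100_000_000_000.0
print(swamp > paradise)  # True: under totalism, the swamp is the "better" world
```

However low the per-person welfare falls, as long as it stays positive, a large enough population makes the sum win.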
The problem isn’t in the math. It’s in the premise.
Population ethics requires you to believe that there exists some global moral vantage point from which we can evaluate and compare whole worlds. That from this view, we can say “World A is better than World B,” even though no agent lives across both. Even though no one experiences both. Even though there is no mind whose preferences actually encompass both sets of lives.
That assumption is false.
In our framework, all value is vantage-relative. There is no “good overall.” There is only “better from the standpoint of X.”
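One way to picture the claim, as a sketch with invented names rather than any standard formalism: value takes two arguments, a world and a standpoint, and there is no well-defined one-argument version.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch of vantage-relative value; all names are hypothetical.

@dataclass(frozen=True)
class World:
    population: int
    avg_welfare: float  # arbitrary units, invented for the example

# A standpoint is just a valuation function: World -> how good this world is *to X*.
Standpoint = Callable[[World], float]

def better_from(x: Standpoint, a: World, b: World) -> bool:
    """'A is better than B' only ever means: better from the standpoint of X."""
    return x(a) > x(b)

# What the Repugnant Conclusion needs, and what this framework denies,
# is a zero-standpoint comparison better(a, b) with no X to supply.
```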
Imagine trying to decide whether to bring a new person into existence. The question only makes sense relative to a valuer—the would-be parent, perhaps, or the existing society. You can ask:
Do I want to live in a world with more people?
Do we believe adding lives like this aligns with our values?
Would the new person (if born) likely value their own life?
But you can’t say, in any absolute sense, whether the total outcome is better. Because there is no total agent whose preferences you’re maximizing.
This is the core problem with the Repugnant Conclusion. It assumes a meta-agent capable of evaluating entire population states from nowhere. That agent doesn’t exist.
The major puzzles in population ethics, the ones that dog totalism, averagism, and person-affecting views alike, along with the asymmetries about non-existence, stem from this basic error: treating morality as if it operated on global facts instead of agent-relative evaluations.
Once you drop that assumption, the entire debate shifts:
You don’t need to say whether a billion lives are better than ten billion.
You only need to ask: who cares, and what do they value?
In some contexts, we might want to prioritize quantity (e.g., spreading a culture or maximizing resilience). In others, we might prioritize quality (e.g., sustaining a high-trust, high-agency civilization). But these are just different vantage points—not contradictions.
Population ethics, done honestly, is no longer about finding the best world. It’s about understanding whose goals are in play, and what tradeoffs they imply. It becomes a branch of decision theory for agents with long-term, population-level preferences.
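Read this way, the machinery is ordinary decision theory, just indexed to a particular valuer. A minimal sketch, with agents, options, and numbers invented purely for illustration:

```python
# Hypothetical illustration: two long-horizon agents face the same menu of
# futures, choose differently, and neither choice needs a view "from nowhere".

futures = {
    "expand":    {"population": 50_000_000_000, "avg_welfare": 20.0},
    "cultivate": {"population": 2_000_000_000,  "avg_welfare": 85.0},
}

def resilience_maximizer(f):     # values sheer numbers and spread
    return f["population"]

def high_trust_civilization(f):  # values the quality of each life
    return f["avg_welfare"]

for name, prefer in [("resilience", resilience_maximizer),
                     ("high-trust", high_trust_civilization)]:
    choice = max(futures, key=lambda k: prefer(futures[k]))
    print(f"{name} agent chooses: {choice}")
# resilience agent chooses: expand
# high-trust agent chooses: cultivate
```

Each agent's choice is fully determined by its own preferences over population states; no further fact about which future is "really" better is needed, or available.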
That’s not a weakness. It’s the only coherent foundation.
The Repugnant Conclusion is only a problem if you believe there’s a single answer to the question: “Which world is better?” Once you realize there isn’t, the dilemma evaporates.
You can’t optimize for everyone. Because there is no such thing as everyone. There are only agents, perspectives, and values—each with their own branch of the multiverse to navigate.