Here's a bad question:
How many people should there be?
Uh oh. That's renowned Oxford moral philosopher Derek Parfit, by the way.
I shall later ask how many people there should ever be. In a complete moral theory, we cannot avoid this awesome question. And our answer may have practical implications. It may, for example, affect our view about nuclear weapons.
Um, okay.
(I hate nukes, in part, because human extinction would be really bad, but I don't need to know how many people there should ever be to get there.)
Anyway, Derek Parfit is notorious in philosophical circles as the definition of a moral puzzler.
Parfit uses elaborate thought experiments to argue for striking conclusions, and he's a true believer in his own arguments. But I think his puzzles dare us to show where he's gone wrong, to display a deeper grasp of value than Parfit does himself.
This puzzle might be his most famous:
Derek Parfit asks us to imagine utopia—or at least, a world that's long and thin like a rectangle.
He presents us with two visions: Population A and Population B.
Then he asks us to imagine that everyone has the same quality of life in each population, ostensibly to simplify things.
(I guess that's one vision of utopia.)
So these two rectangles are actually graphs:
The width of each block shows the number of people living; the height shows their quality of life.
In A, each person has a higher quality of life, but there are twice as many people in B.
"Except for the absence of inequality," Parfit notes, these might be two possible futures for humanity based on our rate of population growth.
So which is better? (Are your spidey senses tingling?)
Here's the thing: Viewed as a graph, B has more area than A. The quality of life in B is much more than half the quality of life in A, so that rectangle is much bigger.
So which utopia is best? A or B?
Or C?
Well, Rectangle C has even more area, and if we imagine continuing to double the population while ensuring everyone a quality of life more than half of what was on offer at the previous letter, we end up with a really wide, squat Z as our utopia.
(Or at least, Z's the best of these 26, and much better than A. After all, Z contains much more good than A, and more good is better than less good.)
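If you want to see the arithmetic behind that squat Z, here's a tiny sketch. The starting numbers and the 60%-per-step drop are my own illustrative assumptions, not Parfit's; all he stipulates is doubling the population while keeping quality of life above half the previous level.

```python
# A rough sketch of Parfit's "area" argument with made-up numbers.
# Assumptions (mine, not Parfit's): population A has 10 billion people
# at quality of life 100, and each step to the next letter doubles the
# population while cutting quality to 60% of the previous level
# (i.e. still more than half).
population = 10_000_000_000
quality = 100.0

for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
    total = population * quality  # the "area" of that letter's rectangle
    print(f"{letter}: people={population:.2e}, quality={quality:.4f}, total good={total:.2e}")
    population *= 2   # twice as many people...
    quality *= 0.6    # ...each more than half as well off as before

# Every step multiplies the total by 2 * 0.6 = 1.2, so Z's rectangle has
# roughly 95x the area of A's, even though the quality of life in Z is
# a tiny fraction of what it was in A.
```

Each doubling more than offsets the drop in quality, so the total keeps climbing even as each individual life gets worse. That's the whole trick.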
But even Parfit goes, that's repugnant! We shouldn't believe that!
So he calls it The Repugnant Conclusion:
For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better, even though its members have lives that are barely worth living.
But how did we get here? What went wrong?
Recently, noted effective altruists including Toby Ord and William MacAskill signed onto a paper called "What Should We Agree on about the Repugnant Conclusion?"
Finally, 37 years later, we should agree:
Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature.
Wait, that's all we've been doing?
Oh my God, we need to move on from this puzzle, everyone!
There's just one problem:
I'm not sure these philosophers have learned the lessons of Parfit's puzzle.
I think many of them are overlooking a much deeper problem within their underlying moral framework:
the idea that more good is better than less good
In the not-so-distant past, this idea led effective altruists to argue that you should donate money to highly effective charities focused on those suffering today. It only takes $3,500 to save a child's life by donating to Helen Keller International. (Vitamin A is cheap.) If you spent that buying new washers and dryers for poor folks in your community instead, you couldn't even afford 3 sets from Home Depot.
And surely saving one Kenyan child's life does more good than giving two-and-a-half American families washers and dryers!
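For the record, here's the back-of-the-envelope version of that comparison. The $3,500 figure is from the charity estimate above; the washer-dryer price is my own rough assumption.

```python
# Back-of-the-envelope version of the comparison above.
# The $3,500 figure is the life-saving estimate cited in the post;
# the washer-dryer price is my assumption for a basic new set.
cost_to_save_a_life = 3_500     # donation to Helen Keller International
cost_per_laundry_set = 1_400    # assumed price of one washer + dryer

sets_affordable = cost_to_save_a_life / cost_per_laundry_set
print(f"${cost_to_save_a_life:,} buys about {sets_affordable:.1f} washer-dryer sets")
# -> about 2.5 sets, i.e. not even three families outfitted
```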
So should I not fix my washing machine when it breaks, and start going to the laundromat instead?
Here's how an effective altruist might reason: What if running to the laundromat takes so much time and energy away from your career that you can no longer earn as much money to give to highly effective charities?
In that case, you should fix the washing machine.
Remember, because more good is better than less good, it's better to do good more effectively.
These days, longtermist effective altruists like Ord and MacAskill spend more time talking about AI alignment research, which aims to keep AI and human values in sync, than they spend talking about the global poor.
I do understand their alarm:
AI development undeniably presents us with potentially catastrophic risks.
But I don't think we should funnel all our money towards Silicon Valley! Nor do I think we should neglect those suffering now, either globally or locally, even if that does turn out to do 'less good.'
Good isn't the whole story when we're thinking about value.
If you tried to rank every song, every vacation, every kiss, in terms of how good they were, wouldn't that truncate your thinking about value?
If an abstract ranking seems too hard, here's a number line. Go ahead:
How could this possibly be a full or fair or even useful evaluation?
Lesson 1: There's more to value than just goodness.
Within moral reasoning, there are more concepts than just Good and Bad. (Here's another set: Right and Wrong, which I'll write about soon.)
Plus, within value reasoning, there are more concepts than just the Moral. There's the Aesthetic, the Epistemic, and so on.
Plus, goodness doesn't fit on a number line anyhow!
Look how weird this can get:
In this diagram, a life (I) starts out very good, (II) turns pretty bad from A to C, and then (III) turns a little good again right before the end. So, should you kill yourself at A if you could see the shape of your future life?
That's another bad question, setting aside concerns about how anyone could know the shape of their future life. We need to know much more about this person and their life than what this simple graph shows us.
We shouldn't understand 'the good' as a single, all-encompassing quantity and then try to graph it out. Thinking of goodness like that will mess up your practical reasoning about what to do.
Lesson 2: The good isn't graphable.
So comparing A and Z on a single overarching dimension of goodness doesn't work. Maybe we should alter the question:
Is the existence of Z more valuable than the existence of A?
First off, this makes it clear how morally suspect these puzzly questions are. (Remember, A and Z are populations.) Second, it becomes obvious just how much we aren't told, including:
how we reached A versus Z
the politics of A versus Z
what it's like to live as a member of A versus Z
what A and Z's worlds look like
And that's just a start. But Parfit gives us no detail beyond stipulating that everyone's quality of life is the same within each population.
Does Z really contain more good than A just because it has more area? I don't know what to think. 'More good is better than less good' feels like an obscene riddle masquerading as a grammatical truth.
But without knowing about our other values, we definitely don't know how to compare A and Z, so I am much more confident about the following:
Lesson 3: More good isn't always more valuable than less good.
So I don't think we should try to do more good rather than less good at all costs, because there are other values to weigh the good against (the right, the just, the beautiful, the true, and, at the very least, the cool, so no one accuses me of being a retrograde Platonist...).
So producing the 'most' good isn't the only thing that matters.
Let's review:
Lesson 1: There's more to value than just goodness.
Lesson 2: The good isn't graphable.
Lesson 3: More good isn't always more valuable than less good.
What do you think? I'm still figuring out where I stand on this, so drop me a comment.