
Justice for Assholes (and why we need Utopias)

I’ve been putting together a syllabus called Utopias that draws from a bunch of fields (philosophy, literature, anthropology…) to try to get a handle on our thinking about flourishing. And no, I don’t think that’s a waste of time.


But to explain why, I gotta talk about total assholes, as defined in my dissertation.


I don’t think anyone is really a total asshole, but some folks come pretty close.


If you know, you know, and if not, this guy spent months in a Romanian jail on charges of human trafficking, organized crime, and rape.

So let me distinguish between your self-interest and your well-being.


They’re both about what’s ultimately good for you. That means money doesn’t count, because the value of money is exhausted by talking about what it can get you. Money is just a tool, a useful instrument for getting other things: food, shelter, goods, services, superyachts, peace of mind…maybe you could even light it on fire to stay warm in a pinch.


But beyond its usefulness, money isn’t actually good for you. Philosophers summarize this by saying that money is only instrumentally good for you.


But what can money get you? Maybe a bunch of other useful things (A new suit or car? A better interest rate?). But what are those things useful for? At some point, it’s fair for us to go, Useful for what? What's the point? And that’s where ultimate good comes in.


The whole conception of usefulness depends on the idea that all these chains of usefulness have to ground out in something that matters, dammit, something ultimately good for you that isn’t just useful for getting something else. Maybe being happy is useful for you if it helps you work harder and be more charismatic and earn a promotion at work or whatever. But additionally, happiness is (at least part of) the ultimate point of all this runaround.


Of course, philosophers disagree about what’s ultimately good for you. Is it just happiness that matters in the end? What about getting what you want, or maybe what you should want? Or is there some objective list of things that would be good or bad for you? You must have some ultimate good; we’re just arguing about what it involves.


Make sense?


So here’s the difference between self-interest and well-being:


Your well-being describes what’s ultimately good or bad for you, full stop. Personally, I think love and happiness are at the top of the list, but probably a lot of different things are ultimately good or bad for you.


But your self-interest describes what would be ultimately good or bad for you, considering everything and everyone else around you as just useful for you. So imagine treating others as more or less useful objects and going, okay, what’s ultimately good for me, considered in isolation? That’s self-interest.


Where your well-being describes what’s ultimately good and bad for you, your self-interest distorts it in crucial ways.


Self-interest views value in your life transactionally: What’s in it for me? But notably, the thing that makes crucial goods like love so good for you (the whole part where you care about your beloved for their sake) can’t really be captured within this narrow perspective of self-interest.


So if you were purely self-interested, you’d live cosmically alone, unwilling to recognize the ultimate good of anything or anyone else besides yourself. You’d miss out on most of the value and meaning in your life. That’s the thought behind my deprivation account of why being an asshole is so bad for you.


But how would anyone end up purely self-interested? Here are three ways you might get there:


First, you might be a solipsist who doubts that others exist at all. If only you exist, and everyone else is a figment of your imagination, then genuinely interpersonal or cooperative relations simply do not arise. So why not maximize how your life goes for you at all costs? You are morally special—indeed, you’re morally singular.


I won’t argue against the solipsist, but you probably aren’t one anyway. So let’s assume that you’re willing to grant that others exist. You might still be what Aaron James calls a psychopath, someone totally unmotivated by morality. You might be morally incompetent, failing to recognize that your interests and the interests of others can come into genuine conflict. Or you might understand moral reasoning but just not care. Either way, if morality doesn’t motivate how you live, we can kinda see how you could come to value nothing but your own self-interest.


Fortunately, you’re probably not a psychopath either.


But if you don’t doubt that others exist, you’re morally motivated, and you still only care about your own self-interest, the final possibility is that you’re a total asshole. James’s definition of assholes is such a handy field guide for dealing with them that I quote it whenever I can. He says that in interpersonal or cooperative situations, the asshole:

1) allows himself to enjoy special advantages and does so systematically; 2) does this out of an entrenched sense of entitlement; and 3) is immunized by his sense of entitlement against the complaints of other people.

By radically prioritizing his own self-interest, the asshole ends up treating himself as morally special, and us as merely useful for him.


That’s why assholes piss us off so much. It’s not just that they take more for themselves. It’s that they feel entitled to do so because they think they’re better than us, and then they won’t even really hear our complaints about it!


James contrasts the asshole and the psychopath to draw out the asshole’s deeply moral motivations:

However misguided, the asshole is morally motivated. He is fundamentally different from the psychopath, who either lacks or fails to engage moral concepts, and who sees people as so many objects in the world to be manipulated at will. The asshole takes himself to be justified in enjoying special advantages from cooperative relations.

James’s point is that the psychopath operates beyond moral justification, whereas the asshole operates from a distorted sense of self-entitlement.


But not just any asshole would value nothing besides his own self-interest. Let’s stipulate that only a total asshole would feel entitled enough to be purely self-interested. It takes a lot of entitlement to give supreme weight to your own self-interest every time you’re deciding what to do.


Again, call me naïve, but I don’t think anyone is quite this far gone, even if some folks come pretty close. (I’m not sure there are genuine psychopaths or solipsists either, which is why I doubt anyone is purely self-interested.)


Okay, so here’s why this matters for Utopias.


The greatest political philosopher of the 20th century defends a super influential theory of justice as fairness. But really, it’s justice for assholes.


And it’s messed up political philosophy for the past half century.

Way to go, nerd.

In A Theory of Justice, John Rawls introduces the original position as a tool to help us figure out what justice looks like. Imagine if a bunch of hypothetical rational people came together to deliberate about what justice requires. Whatever they would unanimously agree to is what Rawls thinks we should accept. But getting everyone to agree is tricky, so Rawls does a weird thing:

if a man knew that he was wealthy, he might find it rational to advance the principle that various taxes for welfare measures be counted unjust; if he knew that he was poor, he would most likely propose the contrary principle. To represent the desired restrictions one imagines a situation in which everyone is deprived of this sort of information. One excludes the knowledge of those contingencies which sets men at odds and allows them to be guided by their prejudices. In this manner the veil of ignorance is arrived at in a natural way.

In the original position, these hypothetical rational people (Rawls calls them parties) are placed behind a veil of ignorance to deliberate fairly until they all agree. Behind this veil, Rawls makes them all-knowing about whatever general facts would be relevant to their deliberations—say, facts about economics or human psychology—but totally ignorant of any particular facts about themselves—say, facts about their individual talents or social class or even their own values. So now the parties know all the generals and none of the particulars of what they’re talking about (how to structure their society). The veil of ignorance is designed to force them to bargain as equals: “Now consider the point of view of anyone in the original position,” Rawls says. “There is no way for him to win special advantages for himself.”

But this language of special advantages should remind us of the asshole. Isn’t it suspicious that the parties feel entitled to special advantages but simply don’t know how to get them? The whole point of the veil of ignorance is to prevent them from doing what they all desperately want: to act unjustly. For example, the parties are not allowed to consider conceptions of ‘justice’ under which everyone else would serve their interests as dictator, or under which everyone else would be bound to act justly while they played free rider. And they’re blocked only by a formal technicality: these conceptions of justice would require them to know who they are so they could pick themselves out for special treatment, and behind the veil, they don’t. In this way, gross injustices are avoided. No one can insist on being held to different rules if he can’t identify himself from behind the veil of ignorance.

But there’s something fishy going on. While Rawls stresses that the content of the parties’ underlying values (their ‘conceptions of the good’) need not be egoistic, the form of the original position still isolates their interests in a very particular way. Rawls stipulates that

the parties in the original position are mutually disinterested: they are not willing to have their interests sacrificed to the others. The intention is to model men’s conduct and motives in cases where questions of justice arise. The spiritual ideals of saints and heroes can be as irreconcilably opposed as any other interests.

Talk about polarization! In the original position, everyone is in it exclusively for himself. But because no one knows his particular circumstances—including his particular conception of the good—each makes a grab for as many primary social goods as he can reasonably claim: rights, liberties, opportunities, income, and wealth, which “normally have a use whatever a person’s rational plan of life.” As a result, the parties are purely self-interested in acquiring these general-purpose goods:

the persons in the original position try to acknowledge principles which advance their system of ends as far as possible. They do this by attempting to win for themselves the highest index of primary social goods, since this enables them to promote their conception of the good most effectively whatever it turns out to be. The parties do not seek to confer benefits or to impose injuries on one another; they are not moved by affection or rancor. Nor do they try to gain relative to each other; they are not envious or vain. Put in terms of a game, we might say: they strive for as high an absolute score as possible.
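
To make this motivational profile concrete, here’s a toy sketch in Python (my own illustration, nothing from Rawls, and every name and number in it is invented). ‘Mutually disinterested’ parties score outcomes by their own bundle of primary goods alone; what they are stipulated not to do is score outcomes by relative standing:

```python
# Toy model of Rawls's "mutually disinterested" parties.
# My illustration only: the function names and numbers are made up.

def disinterested_score(bundles, me):
    # "They strive for as high an absolute score as possible":
    # only my own index of primary goods counts.
    return bundles[me]

def envious_score(bundles, me):
    # What the parties are stipulated NOT to be: agents who care
    # about relative standing, gaining whenever rivals lose.
    best_rival = max(share for i, share in enumerate(bundles) if i != me)
    return bundles[me] - best_rival

bundles = [10, 7, 5]  # primary goods assigned to three parties
print(disinterested_score(bundles, 0))  # 10: absolute score
print(envious_score(bundles, 0))        # 3: relative score, ruled out
```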

Rawls quickly recants the image of a game, but only because there is no conception of winning beyond getting as many points for oneself as possible. But that’s already pretty damning. Because the parties are mutually disinterested and lack access to any finer-grained conception of their own good, their patterns of concern extend no further than their own individual interests, considered in isolation. They are purely self-interested, willing to sacrifice the interests of others without limit if only they could.

But let’s take stock. The parties can’t be solipsists—they acknowledge the existence of one another in the course of trying to reach a fair agreement. And they can’t be psychopaths either—Rawls repeatedly stresses that the original position is designed to give the parties equal representation as moral persons, which involves granting that they have a capacity for both a sense of justice and a conception of the good.

So essentially, Rawls creates a hypothetical congress of total assholes, all-knowing about the relevant generals and totally ignorant of the relevant particulars, and asks what they’d agree upon. But that’s not justice as fairness. That’s justice for assholes. Because the parties bargain as purely self-interested rivals, they lack what Paul Ricoeur calls “a true feeling of cooperation,” of standing with one another. Justice, which is grounded in mutually recognizing each other’s dignity and standing, cannot yet be at issue.

By not taking a stand on what’s ultimately good, Rawls ends up asking about an idealized bargain between total assholes, which seems puzzlingly useless at best and harmful at worst. I don’t care what they’d agree to.

In response to Rawls’s theorizing, a whole bunch of philosophers have become very suspicious of ideal theory. It’s super contentious how to define ideal theory (of course it is), but it’s basically when you start doing very abstract hypothetical reasoning like this about what perfect justice would look like, instead of beginning in the world we’re in and trying to diagnose and fight the real injustices all around us.

And look, these philosophers have a great point. Gross injustice is way easier to recognize than perfect justice. And as Marx said, the point of theorizing about politics is presumably to, like, do something about it at some point. We need non-ideal theory and we need political action.

But I think we need a little bit of ideal theory, too. It just shouldn’t be so narrowly focused on hypothetical self-interest. And it needs to take a real stand on what is ultimately good and bad for us. That means we need ideal theory about well-being.

We need to think about Utopias and start comparing different conceptions of flourishing. How high can we raise the ceiling? What might flourishing look like for beings recognizably like ourselves? And how might we change to flourish better?

Sure, a bit more therapy and personal growth would probably be good for all of us. But how much of the problem is what we’re like now? That’s a key question as soon as we start thinking about Utopias: How much would we have to change, and how much would society?

The weirdest part is that Rawls’s theory is pretty deeply conservative, in the sense that he thinks these rational, purely self-interested deliberators would agree on a conception of liberalism that looks a heck of a lot like a marginally idealized version of his own contemporary society in 1971. He doesn’t think anything has to change that much.

And past Utopias are like this, too. Plato’s Republic and St. Thomas More’s Utopia both feature slavery. And everyone dunks on Hegel because his conception of justice looks a little too much like 19th-century Prussia. It’s hard for us to stretch our social and political imaginations very far.

But if you don’t have some kind of positive vision to work towards, you just end up making marginal changes around the edges and leaving the center intact. You might even get so fixated on protecting what you already have that you spend all your time worrying about losing it. I’ll be the first to admit that x-risks (existential risks) and even s-risks (suffering risks) are real concerns, but they’re certainly not the only things we should care about.

What we desperately need right now is to engage with the ultimate goods that are supposed to be at the center of human life. What’s ultimately good or bad for us? Stop promising me general-purpose goods based on hypothetical deliberations and start imagining with me how our lives might be better for us.

And by the way, we’re making more and more effective AI all the time, AI that will at some point automate more and more labor away. But we’re worried about that rather than celebrating, because our current society ties our survival and even our identity to our labor. We gotta have a talk about what kind of society we want to live in when 10%, when 30%, when 90% of the human population becomes structurally unemployed. Would you want to live in post-scarcity or a cyberpunk dystopia? Great, can you explain the difference?

So we gotta talk about Utopias to get a handle on what’s good for us. Not what would be good for a bunch of blinkered ‘rational’ assholes.

(Hypothetical or otherwise.)

That's why AI ethics needs Utopias.
