(Stick around for a sneak peek at my latest manifesto!)
It’s been a busy week onboarding at Hopkins and continuing to get settled in Baltimore.
But now it’s time for me to start thinking about a new question:
How do I explain my work to non-philosophers?
Obviously, everyone is already a philosopher, etc. etc., but you know what I mean. Not everyone’s an academic philosopher.
But I am.
I have a PhD in this stuff now, and I’ve been working on certain themes for so long that I’m no longer sure what’s obvious and what needs explaining, what’s clear and what isn’t. I’m not even sure which of my intuitions are weird anymore! (Though teaching again definitely helps.)
It’s incredibly difficult to communicate expertise to non-specialists lacking your disciplinary assumptions and jargon.
But it’s so useful to try.
I didn’t really know how to frame my dissertation for academic philosophers until I turned it into a comedic story for a broader audience. And doing that even made my own work clearer to me! It made it really obvious what was essential, and what wasn’t.
So here’s how I describe my current work on my Research page, which is written for a broader but decidedly academic audience:
I am developing a sociohistorical account of how games and metrics have come to politically define rationality (via game theory and economics) in ways that make us unresponsive to the wisdom of the humanities.
And I am! I’m giving a talk later this month on exactly this topic.
But what the hell does any of that mean?
Well, before I give a social history of how we’ve come to think a certain way, we should start with another question:
What would it look like to live your life like you’re playing a board game?
Sneakily, that’s what my dissertation’s really about.
At the heart of my dissertation is a distinction between self-interest and well-being, two ways of describing how my life goes for me.
But they treat other people's role in my life very differently:
My self-interest considers others as just useful for me.
But my well-being also considers others as valuable for their own sake.
Most of my dissertation tries to argue that self-interest isn’t that great.
I might have self-interested reasons to “love” or “respect” or “care about” others, but only because doing so would be useful for me.
But those are some of the worst reasons to care about or treat others well!
The value of these goods, the reason a life without love or respect would be so devastatingly impoverished, isn’t just that I’m failing to capitalize on others’ usefulness for making my life go well! A fuller story would have to consider others as more than useful things. They have their own well-beings, which are deeply enmeshed and interconnected with mine.
Plus, when we try to imagine someone who’s more and more self-interested, they seem more and more like a solipsist or psychopath or total asshole, none of which seem like The Good Life…
Okay, so it looks like you should care about your well-being instead of your self-interest.
But your well-being involves many different kinds of value that are in tension with one another. And as a result, it doesn’t seem like your well-being is a single quantity you can maximize.
Like, what would it even mean to say I’m making my life go as well as possible for me? Is there a single way of living that would be Best for me?
Or even weirder, are multiple ways Perfectly Tied, and I might as well flip a coin to decide whether to become a doctor or a lawyer?
Let’s back up for a second. What are the stakes here?
Well, if we should care about well-being instead of self-interest, but well-being isn’t something we can maximize, economics is in trouble.
After all, the core idea of economics today is that being rational simply means maximizing my self-interest.
To be charitable, this primitive model of what’s valuable and how we should think is meant to be a useful tool for describing, predicting, and evaluating the choices real people actually make.
Like in any model, homo economicus’s bizarro versions of human “value” and “psychology” are simplified. That’s the whole point: We can’t grok the world itself, so we construct a simplified model, see how it works, and then try to use that model to grok the world better.
That’s the case for economics being anything like a science, by the way—whether its models help us grok the world better, or merely justify a proudly cynical ideology pretending to describe how people think and what ultimately matters.
Economics's rather tenuous claim to being scientific rests on how well its practitioners use their observations of the world to adjust their theories: an ongoing back-and-forth where the models' predictions are tested against real-world outcomes, and the results are then used to improve the models.
I’m not saying that never happens—it obviously has. But it might not be happening…radically enough.
Take a look at economics's understanding of rationality again. The reason you'd define rationality in the first place is so you could criticize irrational behavior as somehow faulty.
Okay, so according to economics, there’s only one reason you can ever criticize someone’s behavior: Failing to maximize their own self-interest.
Every moment I’m not maxing out my expected value—E(X)—I’m behaving suboptimally.
Admittedly, we could turn to what’s called a satisficing principle, and say that being rational just means doing well enough on this metric.
That’s nice. So I can’t criticize your behavior as long as you’re doing well enough on this metric of…how your life goes for you, treating everything and everyone else’s value as just useful for you?
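To make the contrast concrete, here's a minimal sketch in Python (with entirely made-up "utility" numbers and option names) of what maximizing versus satisficing looks like once every choice has been reduced to a single score:

```python
# Toy decision model: every option is scored on ONE metric, "utility".
# The options and their scores are invented purely for illustration.
options = {"doctor": 87, "lawyer": 87, "poet": 55, "grifter": 91}

def maximize(options):
    """The standard economic rule: pick the single highest-scoring option."""
    return max(options, key=options.get)

def satisfice(options, threshold=60):
    """A satisficing rule: any option at or above the threshold counts as rational."""
    return [name for name, utility in options.items() if utility >= threshold]

print(maximize(options))   # the one "optimal" choice
print(satisfice(options))  # every "good enough" choice
```

Notice that both rules, the maximizer and the satisficer, still presuppose a single number that captures how my life goes for me. That shared assumption is exactly what's being questioned here.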
In real life, there’s not just one value to maximize. Even my well-being involves many different kinds of value in conversation with each other. (And by the way, my well-being is not the only thing I should care about!)
So what do we do if we can’t calculate our way to find the right thing to do?
That’s what I’m working on right now. And the first part is making this problem clear: Ethics isn’t math.
So I've been writing a public-facing manifesto called Board Game Ethics (at least for now). It tries to articulate what's wrong with treating real-life decisions like the simplified, optimizable decisions of a board game.
It’s under 30 pages long (so non-academic philosophers might actually read it) and I’m trying to make it fun! But I need your help.
I don’t have a presentable draft just yet, but I will soon. And if you’re interested, sign up below! I’d love to send you a sneak peek and hear your early feedback as I continue to shape this project: