It starts, like all psychology papers, with a competitor for the most boringest sentence ever:
One of the essential insights from psychological research is that human information processing is often biased.
Glad we’re all on the same page.
Anyway, in psychology, if you can demonstrate any interesting sort of bias with a statistically significant result, that’s a paper, so you should probably get it published.
But then we should probably also publish papers that go back and look at all these individual studies and try to group them by type and go, well, what sorts of biases are there?
Maybe we can group them in terms of a few fundamental underlying beliefs?
This is one of those papers. It’s called
“Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases” by Aileen Oeberst and Roland Imhoff.
So you know this paper will be badly written because the title has a colon.
(I’m gonna get in trouble for that one.)
But it does show all the worst aspects of academic writing:
It’s 18 words long.
I can’t remember it.
The first 5 words aren’t even a title, just a prepositional phrase gesturing towards what the paper hopes to help us achieve…eventually?
Then the authors go, wait—put a colon: I gotta go off on this.
The next 13 words are a giant noun phrase that (no doubt) accurately describes the content of the paper. I can’t wait to find out what the hell it means.
Just title your paper already! I could barely find it again when I went to write this post.
Here’s what’s amazing about this paper: it organizes a whole pile of biases under just six fundamental beliefs, laid out in a single table.
Everyone who sees that table goes, yeah, that sounds like me.
1. I totally assume my experience is a reasonable experience.
even when a judgment or task is about another person, people start from their own lived experience and project it—at least partly—onto others as well
What else am I gonna do, assume I’m an unreasonable freak and start trying to imagine everything from some other perspective?
2. I totally assume that I make correct assessments of my world.
biases such as the bias blind spot and the hostile media bias are almost logical consequences of people’s simple assumption that their assessments are correct
Look, whatever I’m doing has gotten me this far, hasn’t it?
3. I totally assume that I am good.
(Truth be told, I’m actually pretty bad at this one.)
Obviously, there are several variations of the belief to be good—depending, for instance, on the domain or dimension that is evaluated (e.g., morality, competence) and the specific context (e.g., in a game, at work, on weekends).
If you’re having trouble like me, maybe look to the past: at least you’re not pro-slavery, right? Phew. We have really good reasons to think that at least our moral views have improved relative to folks’ back then.
4. I totally assume that my group is a reasonable reference.
What else am I supposed to do?
Stay in a community I think is weird?
Hang out with strangers all the time?
5. I totally assume my group (members) is (are) good.
(God why would you write it that way?)
It sure is nice being on the right side of history, where at least we know things like “slavery is wrong” now.
6. I totally assume people’s attributes (not context) shape their outcomes.
That’s just the kind of person I am.
Haha. Wait…
Why should I believe any of these things?
Why should I assume my experience or my group is reasonable?
This isn’t just like, maybe red for me is green for you or whatever.
Here in the U.S., there are lots of people who disagree with my fundamental world view. And with yours.
For all their weird sameness, the Democratic and Republican parties are separated by two warring cosmologies that disagree deeply on climate change, on science, on medicine, on standards of morality and common sense.
How to say this? At least one party is getting it seriously wrong.
Why should I assume I make correct assessments of the world?
We don’t even have to reach for Shutter Island-type thought experiments to get this going. Maybe the other party’s right.
(If this seems hard to imagine, remember they feel the same way about you.)
Why should I assume I or my group is good?
Didn’t the Nazis?
And do people’s attributes (not context) really shape their outcomes?
Maybe I’d be a very different sort of person, believing very different sorts of things, under other circumstances.
But the thing is, if the paper’s right, in five minutes you’ll probably go back to assuming all these things again.
You might not be able to change these aspects of yourself if they’re such deep, fundamental beliefs. What even counts as evidence that my experience is reasonable? Or that I am good?
Maybe I’m radically wrong about all of these.
Great. Now what?
One of my favorite philosophers, Charles Taylor, wrote this amazing paper called “Explanation and Practical Reason.” (Here’s a free link that actually works! But it’s ugly as hell.)
He goes, look, we just can’t totally eliminate the possibility we’re wrong. There’s no way to 100% prove we’re right from the ground up. But we can change our value perspectives in ways that are clear improvements.
Taylor starts by thinking about Nazis, against whom reason seems ineffective. Surely I can’t argue the Nazi out of it.
But, Taylor says, that doesn’t mean reason is powerless. After all, surely we have good reasons for thinking the Nazi’s getting it wrong.
So how would you show them?
Well, an argument, no matter how good, isn’t an irresistible weapon you can hit someone over the head with and knock them out:
If ‘showing them’ means presenting facts or principles which they cannot but accept, and which are sufficient to disprove their position, then we are indeed incapable of doing this.
Instead, we try to show that a transition from their view to ours counts as a clear improvement. If Taylor is right that “there are limits to what people can unconfusedly and undividedly espouse,” the task here is to show up all the special pleading. (Well we have to kill these people because...[bullshit].)
Psychologically, this probably still won’t work on the Nazi. But if it does, Taylor makes a really striking claim:
Changing someone’s moral view by reasoning is always at the same time increasing his self-clarity and his self-understanding.
I think we should read him as going: We recognize one value perspective as an improvement over another because it helps us understand our own commitments better.
So take the shift from Aristotelian to Galilean physics. Galilean physics was built to handle the problems Aristotelian physics had accounting for projectile motion. And it’s good at it.
I guess you could go, well Galilean physics is better at making mechanical predictions. And Aristotelian physics is (arguably) better at attuning us to our place in the cosmos. Each does better by its own standards—so it’s a tie!
But that’s not reasonable. Aristotelian physics is dead, and rightly so.
What can we say instead?
What makes Galilean science better?
1. Making sense of your previous difficulties
Galilean physics can help explain the problems Aristotelian physicists kept running into.
Once you take inertia as basic rather than stillness, projectile motion is way easier to explain. And looking back, you can see just how hard it would be to try to identify hidden forces pushing the soccer ball all the way along its arc, or to predict it would move in an arc at all. You can understand why astronomers might start trying weird things like adding more epicycles.
It may be that from the standpoint of Y, not just the phenomena in dispute, but also the history of X, and its particular pattern of anomalies, difficulties, makeshifts, breakdowns, can be greatly illuminated. In adopting Y, we make better sense not just of the world, but of our history of trying to explain the world, part of which has been played out in terms of X.
This paper’s so cool.
2. Making progress that the old way can’t account for
On the other hand, Aristotelian physics had no way to make sense of the tremendous successes of the new Galilean physics.
The ability to do cool stuff, to improve our mechanical prediction, manipulative capacity, and technological prowess, had always been part of what good science was supposed to be able to offer us. But why is Galileo’s system so much better at mirroring and predicting our real-life observations?
Don’t ask an Aristotelian—they don’t have any good explanations.
“Pre-Galilean science died of its inability to explain/assimilate the actual success of post-Galilean science, where there was no corresponding symmetrical problem.”
Better not to look through that telescope at all, Your Holiness.
3. Making intrinsically error-reducing moves
Sometimes a move, by its own nature, makes it less likely we’re screwing up. Here Taylor shifts gears to give a range of pretty fun examples:
He talks about being surprised when he walks into a room and rubbing the rheum from his eyes to get a better look at what he’s just seen.
He talks about the moment where we clear up a confusion, and immediately realize why we had felt so lost before. Imagine a confused lover, Joe, who comes to realize that he can love and resent Anne at the same time—wow, that sure makes better sense of his experiences!
While we’re doing pretend psychoanalysis, imagine Pete, an unruly child who eventually discovers and disavows an unconscious belief that he’s underappreciated by his family, and, having experienced moral growth, starts behaving better.
The transition from X to Y is not shown to be a gain because this is the only way to make sense of the key consideration [like how well Galilean physics works]; rather it is shown to be a gain directly, because it can plausibly be described as mediated by some error-reducing move.
Okay great, so what do we do with all these moves?
Well, if psychology has taught us anything, it’s that we’re cognitive misers.
We are super lazy problem-solvers (thinking takes energy), so our default in pretty much every situation is gonna fall right back into: I’m right and I’m good.
But the most careful and curious thinkers, the folks I really respect and admire, all work hard to fight this tendency constantly. They’re always looking for ways they might be wrong, for new perspectives or insights that might lend useful resources for increasing their own self-understanding. They want to feel confused—it’s the first step to making forward progress and understanding themselves better in the long run.
In other words, they keep doing philosophy forever.
No wonder I ended up in grad school again.
But what’s the alternative? If you ever stop doing this, you stagnate. And you’ll get left behind in our public conversations about our values. You had certain ideals when you were twenty, and now you’re seventy and the world looks super different. What happened to everyone! Why don’t they see things the way you did fifty years ago?
It’s not that you’re wrong and they’re right—that’s just being a cognitive miser again in reverse! You definitely shouldn’t agree with every new development that comes along and just accept it as an improvement.
But you do need to be aware of these challenges to old ways of thinking, and conversant enough with them to understand why they exist, to recognize their strengths and weaknesses, to tangle with them deeply enough to come out the other side with increased self-understanding.
Philosophy grad school really hammers home how powerless argument is against 1) committed opponents deploying increasingly sophisticated countermeasures, or 2) bored undergrads who just don’t care. I never know how to change people’s minds.
But maybe I can convince some folks to try a bit harder at changing their own.