top of page
Search
  • Writer's pictureRicky

What makes You think you’re so Special? (a reluctant defense of X-Risk)

I spent this past week at The Midwest Ethics Symposium on AI presenting my research and meeting lots of great folks from academia and industry.


SO let’s do a deep dive on the state of popular AI discourse.


There seem to be two broad camps:

  1. Tech Bros think AI will become all-powerful and either lead us to Utopia or Kill Everything within our lifetimes. That means we need to put enough money into AI research to make sure we create heaven on earth instead of hell.

  2. Social Critics think these concerns are a distraction from the real threats of AI: invading digital privacy, extending social injustices, concentrating capital, accelerating climate change, and so on.


We didn’t have many Tech Bros at the conference. And while I definitely tend to side with the Social Critics, I think both sides are getting something right.


So today I’ll 1) explore why existential risks are not 100% bullshit, and 2) speculate a bit on why sharp social critics seem so eager to discount them, over and above their justified skepticism of the ridiculous AI hype and doomerism.


Let’s jump in.


Part 1: A reluctant defense of X-Risk


Nick Bostrom, the controversial philosopher of AI whose Future of Life Institute was just closed and also Oxford took his profile page down...God I’ll try to spare you all the lore, let’s start over.


Nick Bostrom defined x-risks in an influential paper that you could read on his website, but I’ll summarize it now. An x-risk is an existential risk, a risk


where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

Beyond things like runaway global warming or extinction, he gives a few other examples including “dysgenic pressures.”


Wait, what?!


It is possible that advanced civilized society is dependent on there being a sufficiently large fraction of intellectually talented individuals. Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (“lover of many offspring”).

You might be saying, “What the fuck, dude!” Bostrom’s worried that homo philoprogenitus wouldn’t be smart enough to colonize space and maximize utility across the accessible universe. So that’s why he thinks human potential would be permanently and drastically curtailed.


And this kinda shit is why so many clever folks think the x-risk stuff is all fake. As the kids say, his arguments look pretty damn sus. So why worry about the more fanciful scenarios where we lose control of an AI and it destroys us (maybe on purpose, maybe not)?


I’ve mentioned before that our historical definitions of intelligence tend to focus on one really simple question:


Can you figure out how to do stuff?


So intelligence measures your effectiveness at achieving goals.


But you might be a very effective supervillain or paperclip maximizer. Less fancifully, sophisticated assholes might be very effective at pursuing their own self-interest by treating everything and everyone else around them as just useful for them.


But are your goals any good?


Well, how wise are you? As Bostrom puts it in Superintelligence, wisdom is “the ability to get the important things approximately correct.” (Presumably supervillains and paperclip maximizers and assholes aren’t really doing that.)


But here’s what’s tricky: When you make something smarter, that just means you’re making it more effective at doing stuff. You may not know how it does things, even after it's done them. And you may not know the limits of what it can do.


As you make something smarter, you lose the ability to predict or control it.


We hardly understand the AI we already have. Black box methods like deep learning break in weird ways, and we can’t really explain why they work as well as they do. We keep making the models bigger, and they keep doing better. How long will that trend continue? No one knows!


So the bigger models are more intelligent—they’re able to do more. They’re better at achieving the goals we’ve given them, like predicting the next word accurately or writing a lost Neil Breen screenplay. As they grow, these models become more complex in both their structure and behavior.


Here are two very cool terms in the AI literature:


  • Specification Gaming is a behavior that satisfies the literal specification of an objective without achieving the intended outcome. So you give an AI the goal to move fast in a physics simulator, hoping it will design itself a fast body. Instead, it creates an extremely tall body and decides to fall over. Timber!



Maybe you can start to see why it’s gonna be really hard to give an AI enough wisdom to get the important things even approximately right. Tiny, subtle misspecifications of its goals could be disastrous!


  • Instrumental Convergence is when intelligent beings tend to pursue the same useful subgoals, like acquiring resources, or learning to defend yourself, or being able to manipulate others. Achieving these goals will preserve and extend your ongoing success across domains, so intelligent beings tend to become good at them.


At one extreme, a very intelligent AI might be so good at self-defense it would be impossible for us to shut down without its cooperation.



Even if you think that’s fanciful, these models are being developed by companies like Facebook and Google (I refuse to say your new pretend names). Do you trust these institutions’ incentives? Remember, they’re in it to make money by learning everything they can about you and disrupting as much of the economy as possible. And they’re telling you this in the New York Times.


Do you trust the models Silicon Valley is rushing out not to do tremendous harm, not to break in a million bizarre ways that might permanently and drastically curtail our species? The carbon costs alone are pretty dire. These are “Move Fast and Break Things” companies, so even if the Facebook Supercomputer works right, it’s still basically a supervillain. No matter how many safety experts these companies hire, the incentives are all wrong under capitalism.


Under these racing conditions of technical development, why not expect AI’s harms to spiral out of control? How confident are you that we things won’t go terribly wrong as we make black boxes better and better at doing stuff in ways we hardly understand?


I’ll say it again:

As you make something smarter, you lose the ability to predict or control it.


Existential Risks are not 100% bullshit.




Part 2: What makes You think you’re so Special?


The Social Critics get a lot right:

  • They see the sloppy moral and philosophical reasoning from so-called ‘rationalists’ who promote reckless AI development and/or apocalyptic doomerism, sometimes at the same time.

  • They see the bad incentives under capitalism, and how the morally loaded language of progress is used to disguise and paper over the unscrupulous intentions of tech leaders.

  • They see the maximally exploitative profit-chasing practices that pollute the environment, use so much water, and traumatize underpaid African workers.

  • They see the disappointing results from effective altruist efforts (like bed nets) and heroes (like Bankman-Fried).

  • They see that it’s the AI companies whose goals are radically misspecified: They’re inflating projected valuations by accelerating disruption instead of producing genuine social benefits in responsible or sustainable ways.

  • And meanwhile, the effective altruists bought a $15m castle? These are your moral thought leaders preaching the wise use of resources?

But I think there’s something else going on, too, that helps explain why so many Social Critics blow off x-risks as silly or impossible.


There’s a certain kind of person I need to come up with a name for. I think a fair number of the Social Critics are Human Exceptionalists.


Human Exceptionalists think there’s something special about being human that AI will never be able to capture or match. (Does this sound like you?)


Economic human exceptionalists think there’s something economically special about human beings. Previous technological advances have always led to more and better jobs—why should things be any different this time? We will always be more intelligent in some respect. AI will never be as creative/perceptive/intuitive/etc. as a human. That’s why we’ll always connect more with humans. You’ve heard stuff like this.


I think those are implausibly optimistic takes about how special we really are.


Moral human exceptionalists think there’s something morally special about human beings. No matter how sophisticated it gets, AI will never be an agent/moral equal/human person/etc. There’s something morally lacking. (Maybe AI can only mimic intelligence. After all, the word ‘artificial’ is in the name, it’s not really intelligent!)


I think the moral exceptionalist line is a lot trickier to argue than it might seem—maybe I’ve just engaged with too much sci fi—but the arguments here are much more respectable. And remember, it’s often the Social Critics’ deep concern for the suffering and dignity of other human people that leads them to engage in social criticism in the first place!


But I think (some!) Social Critics seem to confuse these two. Maybe you’re a moral human exceptionalist—you just think there’s something morally special about humans, dammit. And then, very understandably, you hope there’s something economically special about humans too, such that we don’t have to worry about becoming vulnerable to or outcompeted by AI.


After all, so much media portrays us at the top of the food chain. The computers in The Matrix don’t kill us and then can’t stop us from escaping. The Terminator can’t beat John Connor. Turning to aliens, in War of the Worlds or X-Com the aliens fly all the way here and then lose to human armies. In District 9 things go so bad when the spaceship fails that alien apartheid is established in South Africa.


What makes You think you’re so Special?


If you believe in souls, I’m not here to argue you out of it! I just want to clarify what work you think the soul is doing.


When you can achieve (or realize) the same goal in multiple ways, we say your goal is multiply realizable. Human-level intelligence is probably multiply realizable—there are probably lots of ways of getting there, human brains being just one.


The most brute-force path to develop human-level AI is to simulate a human brain particle-for-particle. If the simulation is functionally equivalent to your brain, why would the simulated brain be any less creative or perceptive or intuitive? What if we ramped up the simulation’s processing speed or memory? Would your soul make you unbridgeably better in figuring out how to do stuff? Probably not.


But could an AI meet or beat you in terms of moral status? That’s trickier!


In fact, it’s controversial whether such a simulation would even have moral status, so moral human exceptionalism looks a little more defensible here.


Human beings are intelligent and morally valuable creatures. If you believe in souls, maybe that makes us morally special. But it probably doesn’t make us uniquely intelligent.


So yes, we’re vulnerable, and eventually we could lose control or be outcompeted by the things we’re irresponsibly building as fast as possible. Doing a lot of harm isn’t so hard in a world where we’ve already figured out nukes. It’s easier to shit on the floor than to clean it up, and we humans don’t get plot armor in real life.


So there’s my reluctant defense of why x-risks are not 100% bullshit. Happy to hear from you now that I’ve pissed everyone off.

Related Posts

See All

Sign up for more philosophy in your life!

Thanks for subscribing!

bottom of page