By Ricky

The books Elon pretends he’s read


Apparently one of them is Superintelligence, the highly influential book that AI moguls from Sam Altman to Bill Gates to Elon Musk all agree should shape how we think about the future of artificial intelligence.


In fact, here’s Elon’s testimonial:



Superintelligence (2014) is written by Nick Bostrom, philosopher-provocateur and founder of the Future of Humanity Institute, which recently closed under mysterious circumstances in the aftermath of Bostrom’s public cancelation.


Bostrom’s embroiled in several controversies, most infamously for saying shockingly racist things and then claiming to repudiate them, only to muse a few sentences later: but what is eugenics anyway? Are we sure it’s bad? And why couldn’t some races be stupider than others? After all, that’s not my area of expertise…


Folks, I’m here to let you know that eugenic fingerprints are all over Bostrom’s world-historically influential book on the future of AI, which has very much laid the blueprint for much of the AI hype and doomerism we face today.


And he just released another book. Hurray!



The weirdness goes beyond Bostrom’s explicitly ‘dysgenic’ concerns that we may grow too fertile and stupid to ever colonize space effectively. That would prevent us from maximizing happiness across the cosmos, which he considers a risk of existential proportions.


After all, Bostrom’s a longtermist: He thinks we should value the well-being of someone living 10 trillion years from now just as much as the well-being of anyone alive today.


But since there could be so many more future people than there are people now, especially if we manage to get the whole AI-colonizing-space thing going, we might be vastly outnumbered.


If so, tackling current injustices in our world can seem incredibly short-sighted and parochial. On a truly cosmic scale, these contemporary injustices will be over in the blink of an eye.


Far better to protect the far future...right?


Anyway, that’s why making sure we create Space Utopia fast (without fucking it up) is the most important thing we can do. Our existence is basically just useful for making sure that zillions of happy future people come into existence. Impartially speaking, their happiness vastly outweighs ours.


Tl;dr We gotta make AI so we can Terraform Mars ASAP.



Oh by the way, human beings might not fit into this optimized future.

(But fortunately, we’ll have the technology to change ourselves radically.)


PROBLEM 1:

Humans are inefficient citizens of Utopia.


Human psychology traffics in anxiety and depression and anger and tons of other ‘negative’ emotions. We’re all saddled with the evolutionary baggage of having to survive pre-civilizational scarcity. But those negative emotions are basically wasted energy if our goal is to maximize happiness. In other words, our human psychology makes us inefficient converters of resources into happiness.


Plus, we could probably make way more happy future people if we created them in virtual reality rather than birthing them into the real world. Organic beings cost so much more energy to create and maintain than their digital counterparts! So our human bodies make us inefficient converters of resources into happiness, too.


PROBLEM 2:

Humans are ineffective designers of Utopia.


I assume I don’t have to convince you of the failures of human planning and political bureaucracy to bring about Heaven on Earth.


Here’s Bostrom’s diagnosis:

Humans and human institutions just aren’t smart enough.


After all, intelligence on Bostrom’s conception is just being good at figuring out how to do stuff.


So by stipulation, a superintelligent AI would be way better at planning and running Utopia than we would. It could figure out how to build enormous server farms and stuff them with as many virtual hedonists as possible. More likely, it would come up with an even better economic solution to the problem of maximizing future happiness.


But since intelligence is just figuring out how to do stuff, a superintelligent AI might also turn the cosmos into a paperclip factory. Being way smarter than us doesn’t mean it’s wiser. Hopefully, the AI applies its superintelligence in wise directions. But we have no idea how to ensure that happens.


By the way, this incredibly thin notion of intelligence, which Bostrom employs more or less uncritically, can be traced directly back to American eugenicists in the early 20th century with truly horrifying social and political motivations. Kids: the concept of intelligence is sus.


So one way of understanding Bostrom’s project is to go, what if human beings as a species were no longer smart enough to compete? What if we became the ‘dysgenic disappointments’ who just couldn’t keep up?


It’s worth noting that Bostrom’s real concern about extinction isn’t that human beings might go extinct, but that posthuman intelligence might. Human beings (in any recognizable sense) are weirdly disposable in the story he’s telling.



So yeah, that’s Bostrom.


I wanna take a closer look at this Ur-weirdo whose thinking informs many of the most powerful weirdos (Altman, Gates, Musk…) in our world today.


I’m planning to clip a few relevant passages from Superintelligence and other works by Bostrom and try to get a handle on what he thinks and why Big Tech thinks he’s right.


You’re invited to grab a coffee and come chat over Zoom!

Maybe we can wrap our heads around Silicon Valley’s apocalyptic visions of heaven and hell together?



I’d love to see what comes of a few informal Coffee Chats. Surely we can at least be a bit more insightful than Elon’s real blurb for Bostrom’s latest book:



