
If you say “Black Box” again I’m escaping to the Cloud

Just a short one—I’m in Orono, Maine today at a conference on Agency and Reasoning in Games.


Tomorrow, I’ll be giving a talk on board games as a metaphor for metrics and bureaucracy. I still need to write up 30 minutes’ worth of slides. (Whoops.)


But first, let me try to articulate something that keeps pissing me off.


Time and time again, folks point to the Black Box Problem:

If we can’t explain how AI works, how can we trust it?


Schematically (humor me, I’m a philosopher):

  1. We can’t explain how AI works.

  2. Therefore, we can’t trust AI.


I accept (2): We can’t trust AI. This is all brute-force technology being rushed to market by profit-seeking entities with the worst of intentions and without due safeguards or caution, etc. etc.


But we can’t explain how our own brains work, either.

So…can we not trust humans?


Let’s back up for just a second.


We know how things can go wrong with a hammer: Maybe I hit my thumb, or maybe the handle breaks and the head goes flying. Hammers are simple tools we have lots of experience with.


And we know lots of ways things can go wrong with humans: As they age, they might lose mental sharpness and awareness; when they experience trauma, they might freeze, fight, or flee, and they’ll carry that trauma with them, maybe forever. Humans are complex agents, but we have lots of experience with them.


Over thousands of years, we’ve developed sense-making practices like storytelling to articulate and shape our experiences with other humans. (“I’ll be okay, it’s just the depression talking…”) Today, we know how to talk about intentions and assumptions and subconscious desires and concussions and mental illnesses. Sure, none of this talk is perfect, but my God, it’s much more sophisticated and grounded in real experiences than our tools for talking about AI!


(The fact we call ChatGPT making stuff up “hallucinations” says a lot…talk about projection!)

I’ll be the first to add that the DSM is basically a hodgepodge of symptom clusters that insurance companies will recognize. But we still have a pretty good sense of what threats humans pose. They don’t grow 1000 feet tall or shoot psionic death rays, and they rarely become serial killers. We have enough experience with other humans to enter a coffee shop with (relative) confidence.


So, the real problem isn’t that the internal mechanics of AI are opaque.

It’s that we’ve barely experienced their external behaviors!


These technologies are already powerful and are being developed as fast as possible. We won’t get to have millennia of experience with GPT-4 before the next version, with way more parameters and training compute, is upon us, and of course it will act way differently.


I almost wanna say, revolution is outpacing evolution, but that gives this AI boom too much credit. Honestly, we’re just throwing way more energy and water and compute and human misery into training relatively simple, brute-force mechanisms at unimaginable scale. And we’ll keep going for however long their performance seems to keep scaling, because investors like that.


Cut to us training immense pattern boxes and then acting surprised when they end up biased or overfitted. God only knows what other problems are possible.
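To make that concrete, here’s a minimal sketch (my toy example, not anyone’s real training pipeline, assuming only numpy): a model whose every parameter is in plain view, a “glass box” if you like, that behaves fine on familiar inputs and falls apart just outside them.

```python
# Toy illustration: transparency alone doesn't reveal failure modes.
# The data and scenario are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs drawn only from [0, 1] (a skewed sample of the world).
x_train = rng.uniform(0, 1, 200)
y_train = np.sin(2 * np.pi * x_train)  # the "true" pattern

# Fit a high-degree polynomial. Every coefficient is inspectable: a glass box.
coeffs = np.polyfit(x_train, y_train, deg=15)
print("All 16 coefficients, right there in the open:", np.round(coeffs, 2))

# On inputs like the ones it was trained on, it looks great...
x_in = rng.uniform(0, 1, 100)
err_in = np.mean((np.polyval(coeffs, x_in) - np.sin(2 * np.pi * x_in)) ** 2)
print("error inside the training range:", err_in)

# ...but just outside the training range, the same transparent model explodes.
x_out = rng.uniform(1.0, 1.2, 100)
err_out = np.mean((np.polyval(coeffs, x_out) - np.sin(2 * np.pi * x_out)) ** 2)
print("error just outside it:", err_out)
```

The coefficients were never hidden. What was missing was experience with the model’s behavior outside the inputs it was trained on.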


So sure, we don’t understand how AI works very well. But the solution isn’t “glass box” AI that “shows you how it thinks”—I heard a very respected physician say that this week and it pissed me off enough that you’re reading this now.


Personally, I’m less worried that we don’t understand how AI works.

(We don’t know how Tylenol works, either.)


I’m way more worried that we aren’t getting enough experience with how AI breaks before the next, more powerful model sweeps through.


2 Comments


Guest
Sep 27

But it doesn’t (or maybe shouldn’t) have to be an either-or.


A recent episode of Derek Thompson’s “Plain English” pod focused on how we know exercise is good for us, yet know surprisingly little about the specific mechanisms through which exercise changes our cells and organs and muscles. Your same framing applies. But it’s still worth knowing the mechanisms, because that can help us figure out better ways to exercise, or an “exercise pill,” which terrifies me a bit.


Put differently, I agree that we don’t need to know exactly how AI works, but knowing how it works can help us understand why it succeeds when it succeeds and why it fails when it fails.


Appropriate…

Ricky
Sep 28

I think we do agree, and this comment was great!


It would be amazing to know more about the mechanics of how all kinds of things work, from medicine to exercise to human brains to AI. No disagreement there.


Here's another place I bet we agree: Tech companies' incentives and actions are horrible; they're launching ill-understood tech into the world ASAP to make $$$, leaving all of us to suffer weird risks and harms, both known and unknown.


That's transparently a recipe for disaster.

