Ricky

Quick Hitter: How do you understand Confusion?

There are lots of buzzwords in the AI space—fairness, transparency, safety, bias, privacy, trustworthiness, I could go on and on.


And it’s not really clear what anyone means by any of them.


We disagree wildly across fields: fairness means incredibly different things to an economist, a computer scientist, and a philosopher.


And we disagree quite a bit within fields, too. It’s not as though every philosopher is using the same definition, and every computer scientist the same metric!
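
To make the computer-science side concrete, here’s a toy sketch in Python (the numbers are entirely invented for illustration, not drawn from any real system) of how two standard formalizations of fairness, demographic parity and the true-positive-rate half of equalized odds, can disagree about the very same predictions:

```python
# Toy illustration (all numbers invented): two standard formalizations of
# "fairness" applied to the same predictions can pull in opposite directions.

groups = ["A"] * 10 + ["B"] * 10
# Group A has a 40% base rate of true positives; group B has 60%.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0] + [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
# The classifier selects exactly 5 people from each group.
y_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0] + [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

def selection_rate(group):
    """Fraction of the group predicted positive (demographic parity compares these)."""
    preds = [p for g, p in zip(groups, y_pred) if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Fraction of truly-positive group members predicted positive
    (equalized odds compares these, along with false-positive rates)."""
    tp = [p for g, t, p in zip(groups, y_true, y_pred) if g == group and t == 1]
    return sum(tp) / len(tp)

for group in ["A", "B"]:
    print(f"group {group}: selection rate = {selection_rate(group):.2f}, "
          f"TPR = {true_positive_rate(group):.2f}")

# Output:
#   group A: selection rate = 0.50, TPR = 1.00
#   group B: selection rate = 0.50, TPR = 0.83
# Demographic parity is satisfied exactly; equalized odds is violated.
# When base rates differ across groups, you generally can't have both at once.
```

The point isn’t the toy numbers: when base rates differ across groups, nontrivial classifiers generally can’t satisfy both criteria at once, so even within computer science, picking a ‘fairness metric’ is already taking a side.
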



We have no idea how to talk about AI productively, which isn’t that surprising.


But we often don’t even realize when we’re talking past one another!


I don’t think we need some philosopher to descend from on high with stipulative definitions for how to use words Correctly. The place to begin is a lot earlier, with qualitative study of a descriptive question:


How do people actually use these words in face-to-face conversation?


Sure, I guess I’m interested in how these terms are used in legislation, or in highly-cited papers. But those are both very different contexts of use from everyday conversation.


  • Legislation is drafted by teams of folks with very different purposes and constraints than researchers. If you don’t think the text of a law is a site of political negotiation and compromise, my God, I don’t know what to tell you.

  • Papers can become highly cited for all kinds of reasons: for their technical innovation, for their groundbreaking status, because everyone thinks they’re wrong, or even as a more-or-less empty gesture of deference (“I’m working on that thing, you know, what they said!”).


But I want more. I wanna know how folks in this space actually use these terms in their everyday working lives, to figure out what work these words do for them.


Sometimes, the truth may be disappointing, or just a bit flat.


For example, I’ve come to think the word ‘trustworthy’ is often stripped of much of the deep ethical content you might expect and used to mean something more technically tractable, like ‘reliable’ or ‘predictable,’ though I still need to trace this word more fully.


  • Where did this word come from? (A contrast with ‘deceptive’? Or ‘dangerous’? Or…?)

  • What practices have grown up around this word?

  • How are these practices internally organized?

  • How related are these practices to one another?


I don’t know!


Reading some ethnomethodology has really inspired me to look at what different practices look like on the ground, and to study the functional deployment of these terms in particular contexts.


The problem is that there are so many buzzwords, and so many contexts of use, that it’s hard to know where to start.


But that’s what I’m thinking about. Actually, I’m thinking about the talk on AI and Democracy I have to give at 8:30 tomorrow morning. (By the time you read this, it’s already done and I’m sure it went great.)


But I think there’s no point trying to decide what conversation folks should be having without getting involved in the conversations they are having first.


Why not see where folks are, and try to suggest improvements from there? That seems way more likely to be helpful and lead to uptake than dreaming up a whole new way of speaking from scratch on my own.


So that’s why I’m reading Bruno Latour, who I’m sure I’ll write about soon. What better way to try to fast-track becoming a philosophical anthropologist of AI?


More on this soon. I still gotta cut more slides…
