Research
We’re living in Early Cyberpunk, but we’re reasoning with antiquated ethical tools.
Every day, new technologies grow more transformatively powerful for the corporations and states who wield them, and more destructively consumptive of our common resources. As a result, we face genuinely unprecedented ethical challenges in Bioethics, AI, and Global Health.
Before we can make ethical progress, we need to understand the limitations of our existing ethical values and practices. Most ethical frameworks assume we can cleanly identify our values and weigh tradeoffs between them to find a solution. But what if the real bottleneck is our capacity to articulate what’s ethically at stake in the first place? After all, our ethical toolboxes are full of concepts like agency or fairness that weren’t developed with AI or trillionaire megacorps in mind.
I study how social institutions can fragment, constrain, and impoverish our ability to express ethical concerns about the technologies reshaping our world. These aren’t abstract philosophical problems—they’re built into how our institutions work.
Peer-Reviewed Research
Institutional Collisions
Fragmented Articulations
Last year, my university lost track of me. A date-based system decided I had graduated, but a paperwork-based system lacked one administrator’s signature. With my institutional identity fragmented, I ended up without health insurance for almost a month.
I study what happens when rigid social practices fail to account for each other’s existence. This happens all the time. You can’t get a bank account without an address, but you can’t rent without a bank account. Worse, many folks profit off of these institutional mismatches; some duct-tapers’ careers even depend on their existence! So what to do? Can we reassess the fixity of our institutions before we hit the point of crisis? And how can we amend our complex institutions without introducing even worse institutional collisions down the line?
“Mutual Aid as Effective Altruism”
Kennedy Institute of Ethics Journal, 2024
On how effective altruists can become trapped on the firefighter’s treadmill.
On how political riots can ocularize deep grievances within the status quo.
AI Anthropology
Constrained Articulations
Most AI researchers don’t think of their work as highly social. But social forces decide who does and doesn’t get the compute and funding to keep working. As a result, social dynamics subtly constrain both the form and direction of AI research.
I study the social lives of AI technologies—how social forces shape them from development through deployment. Which papers become classics? Which metrics become standard? What counts as a “good” result, and why are most of these charts so lousy?! The answers depend on social micropractices that help police what counts as “good research practice.” But what researchers value does not always align with what practitioners actually need: once AI tools are deployed, they encounter completely different social micropractices and priorities.
On the value of creating assignments to teach dialogical writing with LLMs.
Proposal under Review
On overlooked epistemic limitations when evaluating LLM-as-a-Judge.
“Value-Neutral” Language
Impoverished Articulations
Our terms for physician-assisted suicide are constantly shifting. Is it physician-assisted death, medical aid in dying, or medical assistance in dying? And what if pursuing descriptive accuracy has ethically impoverished our language?
I study the tradeoffs of such terminological merry-go-rounds. For one thing, patients’ loved ones end up asking why grandma might need help dying. But at a deeper level, our words are shared tools that shape our ethical thinking. I argue that our pursuit of “value-neutral” description at all costs not only overlooks the fact that new terms quickly acquire connotations in usage; more strikingly, this normative hollowing of our language is actively undermining our ability to publicly engage with deep value questions inherent to matters of life and death.
On the hollowing of ethical well-being into narrow economic self-interest.
“What’s the Appropriate Target of Allocative Justification?”
AJOB Neuroscience, 2021
On the hollowing of caring for patients into economizing for QALYs.
Paper under Review
On how our historical opposition of work and play was inadequate long before AI.
In Progress
On how to describe and conceptualize the dynamics of institutional collisions.
In Progress
On how institutional momentum can keep widely-discredited metrics afloat.
In Progress
On how institutional gaps lead to a lack of accountability in clinical AI rollout.
In Progress
On how human democracy stacks up relative to AI epistocracy. A mixed bag!
Paper under Review
On how public AI discourse reflects denial of all-too-real political anxieties.
In Progress
On how pursuing “good enough” results predictably channels AI research.
In Progress
On what citation practices reveal about ethical reasoning in AI development.
In Progress
On “optimizer” versus “crafter” subcultures in machine ethics.
In Progress
On how qualitative researchers can begin leveraging LLMs responsibly.
Paper under Review
On the hollowing of broader social signaling practices into virtue signaling.
Proposal under Review
On the hollowing of biographical life stories into biological life indicators.
In Progress
On the hollowing of contested notions of fairness into competing AI metrics.
In Progress
On the hollowing of terms for assisted suicide into deeply abstract word soup.
In Progress
On the hollowing of AI safety into the prevention of harmful text outputs.
Public-Facing Work
“Why should your students do the work?”
American Philosophical Association Blog
American Philosophical Association Blog Bioethics Series
“How to Form a Lasting Undergraduate Philosophy Club”
American Philosophical Association Blog
“Coronavirus Is Everyone’s Problem, But Not Everyone’s Problem to Solve”
American Philosophical Association Blog
Check out my blog, Rapid Fire, and my previous work at philosophy for humans!