How can we build AI systems that optimize for human meaning?
Social media systems were designed to optimize for engagement. Content on these platforms gets evaluated according to likes, shares, comments, and views, so this is the signal the algorithms that govern them pick up on. This design decision has not, to put it mildly, turned out well. It has led to polarized discourse, widespread loneliness, and the destabilization of mental health, among much else.
With AI, we're on the precipice of a new era of powerful, societally impactful sociotechnical systems. We need a better theory of what we want from this technology—better, at least, than the one on which we built social media.
What we might want, instead, is technology that can optimize for meaningful human experience. The problem is that we don't yet know how to do this. Engagement is easy to measure; meaning isn't. We need new ways of measuring meaning at scale. Then—and only then—can we say whether an AI system is generating more or less of it.
You can view a full list of my academic publications on Google Scholar.

We can't yet build AI systems that optimize for human meaning because we don't yet have good ways of measuring what's meaningful. As it stands, human meaning isn't legible to sociotechnical systems, so we fall back on proxies like engagement, which are subject to proxy failure: optimize the proxy hard enough and it comes apart from the goal it was meant to track. In this paper, we describe a novel way of making human meaning legible at scale.
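To make the proxy-failure point concrete, here is a minimal toy sketch (the items, scores, and field names are hypothetical illustrations, not data or methods from the paper): a ranker that can only observe engagement will happily surface content that scores poorly on the quantity we actually care about.

```python
# Illustrative sketch of proxy failure: optimizing an easy-to-measure proxy
# (engagement) instead of the hard-to-measure target (meaning).
# All items and scores below are hypothetical toy data.

items = [
    {"id": "outrage-bait",  "engagement": 0.95, "meaning": 0.10},
    {"id": "long-read",     "engagement": 0.40, "meaning": 0.85},
    {"id": "friend-update", "engagement": 0.55, "meaning": 0.70},
]

# The platform can only observe engagement, so it ranks by the proxy...
ranked_by_proxy = sorted(items, key=lambda x: x["engagement"], reverse=True)

# ...which need not correlate with what users would find meaningful.
ranked_by_target = sorted(items, key=lambda x: x["meaning"], reverse=True)

print([i["id"] for i in ranked_by_proxy])   # ['outrage-bait', 'friend-update', 'long-read']
print([i["id"] for i in ranked_by_target])  # ['long-read', 'friend-update', 'outrage-bait']
```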
Our approach builds on an influential framework from the social sciences called "Thick Description," which holds that capturing human meaning requires capturing cultural context. Thin metrics fail to capture meaning precisely because they shave away that contextual information.
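As a rough illustration of the difference between a thin and a thick record of the same event (this is a hypothetical sketch, not the representation proposed in the paper, and the field names are invented for the example), compare a bare engagement count with an annotation that retains the surrounding context:

```python
from dataclasses import dataclass

# A "thin" record: the event stripped down to countable signals.
@dataclass
class ThinRecord:
    post_id: str
    likes: int
    shares: int

# A "thick" record: the same event, keeping the contextual information
# that gives it meaning. Field names are hypothetical, chosen only to
# illustrate the idea of preserving cultural and relational context.
@dataclass
class ThickRecord:
    post_id: str
    likes: int
    shares: int
    relationship_context: str
    cultural_context: str
    stated_significance: str

thin = ThinRecord("p1", likes=3, shares=0)
thick = ThickRecord(
    "p1", likes=3, shares=0,
    relationship_context="reply from an estranged sibling",
    cultural_context="posted during a family reunion",
    stated_significance="author describes it as their first contact in five years",
)
# Identical thin metrics; only the thick record makes the meaning legible.
```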
We argue that the first step towards building systems that can optimize for meaning is to make meaning legible. This paper gives us a starting point for how to do that.