I am a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. My research
focuses on the societal impact of AI. I have previously worked on AI in industry and academia at
Facebook, Columbia University, and EPFL in Switzerland. I am a recipient of a best paper award at ACM
FAccT, an impact recognition award at ACM CSCW, and was included in TIME's inaugural list of the 100 most
influential people in AI.
I am currently co-authoring a book, AI Snake Oil, with
Arvind Narayanan. The book looks
critically at what AI can and cannot do. We're sharing our ideas through Substack. Subscribe here.
I have investigated and offered mitigations for the reproducibility crisis in machine-learning-based science.
I have helped uncover several errors and conceptual shortcomings in evaluations of generative AI.
My work distinguishes applications of AI that work from those that don't, and shows how to tell the difference. In particular, I have closely investigated the harmful impacts of predictive optimization for decision-making.
My recent work examines contemporary questions in AI safety from an evidence-based perspective. I have studied the impact of open foundation models, the need for researcher access, and the futility of focusing on model-level interventions to meaningfully improve safety.
While the societal impact of foundation models is rising, transparency is declining. My work aims to shed light on the impacts of AI through concrete and rigorous transparency reporting.
I aim to bridge the gap between technologists and policymakers. To that end, an explicit goal of my research program is to influence and inform evidence-based AI policy. My work has been cited in numerous government reports and other policy documents.