Press

Media outlets have quoted me or covered my work on topics spanning AI, ethics, safety, and policy.


Topic | Title | Publication | Date
Book review | AI Snake Oil: What Artificial Intelligence Can and Cannot Do | Harvard Gazette | October 2024
Book review | Seeing the Forest Through the A.I. Trees | Air Mail | October 2024
Book review | Popping the AI Hyperbole Bubble | The Deal | October 2024
Book review | Why AI isn’t as clever – or as dangerous – as we think | The Telegraph | October 2024
Hype | Ray Kurzweil Still Says He Will Merge With A.I. | The New York Times | October 2024
AI | AI Snake Oil: Exposing The Truth Behind Overhyped Claims | NDTV | October 2024
Excerpt | AI Snake Oil, excerpt | Stanford Social Innovation Review | October 2024
Hype | Generative AI Hype Feels Inescapable. Tackle It Head On With Education. | WIRED | September 2024
Hype | Professor Arvind Narayanan and Sayash Kapoor Explain AI | Princeton Alumni Weekly | September 2024
Excerpt | Snake Oil: Don’t believe the artificial intelligence hype | Financial Review | September 2024
Book review | A new book tackles AI hype – and how to spot it | Science News | September 2024
Hype | Arvind Narayanan and Sayash Kapoor on AI Snake Oil | Princeton University Press | September 2024
Excerpt | Princeton SPIA AI Experts Separate Hype from Substance in New Book | Princeton School of Public and International Affairs | September 2024
AI | AI Snake Oil: Separating Hype from Reality | Tech Policy Press | September 2024
Book review | In the Age of A.I., What Makes People Unique? | The New Yorker | August 2024
AI | 'AI Snake Oil' Sorts Promise from Hype | Practical Ecommerce | August 2024
AI | Chatbots Are Primed to Warp Reality | The Atlantic | August 2024
Safety | Is AI too dangerous to release openly? | Princeton Engineering | May 2024
Reproducibility | Science has an AI problem: This group says they can fix it | ScienceDaily | May 2024
Safety | Experts call for legal 'safe harbor' so researchers, journalists and artists can evaluate AI tools | VentureBeat | March 2024
Safety | Top AI researchers say OpenAI, Meta and more hinder independent evaluations | Washington Post | March 2024
Safety | Researchers, legal experts want AI firms to open up for safety checks | Computer World | March 2024
Openness and Safety | Stanford study outlines risks and benefits of open AI models | Axios | March 2024
Openness and Safety | A Mistral chills European regulators | Politico | March 2024
AI | What are LLMs, and how are they used in generative AI? | Computer World | February 2024
Hype | Computer Science Researchers Call Out AI Hype as 'Snake Oil' | Princeton Alumni Weekly | December 2023
Reproducibility | Artificial intelligence is not a silver bullet | NPR | December 2023
Hype | Computer Science Researchers Call Out AI Hype as ‘Snake Oil’ | Princeton Alumni Weekly | November 2023
Ethics | OpenAI's ChatGPT turns one year old; what it did (and didn't do) | Computer World | November 2023
Transparency | AI's Spicy-Mayo Problem | The Atlantic | November 2023
Transparency | AI Is Becoming More Powerful—but Also More Secretive | WIRED | October 2023
Hype | How Does AI 'Think'? We Are Only Starting to Understand That | The Wall Street Journal | October 2023
Transparency | The world's biggest AI models aren't very transparent | The Verge | October 2023
Transparency | Maybe We Will Finally Learn More About How A.I. Works | The New York Times | October 2023
Transparency | We Don't Actually Know If AI Is Taking Over Everything | The Atlantic | October 2023
Transparency | Klobuchar Says AI Regulation Still Possible Before End of Year | Bloomberg | October 2023
AGI | Why everyone seems to disagree on how to define artificial general intelligence | Fast Company | October 2023
Transparency | OpenAI Is Human After All; Sharing Is Caring, Researchers Tell Model Developers | The Information | October 2023
Transparency | How transparent are AI models? Stanford researchers found out | VentureBeat | October 2023
AI | Newsletter helped us dissect fake claims about AI in real time: Indian duo on TIME magazine's list of most influential voices in AI | Indian Express | September 2023
AI | Prominent AI fairness advocates among Princeton AI luminaries | The Daily Princetonian | September 2023
Hype | Princeton University's 'AI Snake Oil' authors say generative AI hype has 'spiraled out of control' | VentureBeat | August 2023
Ethics | OpenAI Worries About What Its Chatbot Will Say About People's Faces | The New York Times | July 2023
Reproducibility | GPT-4: Is the AI behind ChatGPT getting worse? | New Scientist | July 2023
Safety | Tips for Investigating Algorithm Harm — and Avoiding AI Hype | Global Investigative Journalism Network | July 2023
Reproducibility | Six tips for better coding with ChatGPT | Nature News | June 2023
Regulation | The White House AI R&D Strategy Offers a Good Start – Here's How to Make It Better | Tech Policy Press | May 2023
Hype | The AI backlash is here. It's focused on the wrong things. | The Washington Post | April 2023
Safety | What is needed instead of an AI moratorium (translated) | Tagesspiegel Background | March 2023
Safety | Here are 5 reasons people are dunking on that call for a 6-month A.I. development pause | Fortune | March 2023
Hype | The AI factions of Silicon Valley | Semafor | March 2023
Evaluation | Why exams intended for humans might not be good benchmarks for LLMs like GPT-4 | VentureBeat | March 2023
Ethics | ChatGPT: AI bot 'saw test papers before acing exams' | The Times (UK) | March 2023
Hype | ChatGPT is used to automate bullshit (translated) | Süddeutsche Zeitung | March 2023
Hype | What if ChatGPT could make us less gullible? | World Economic Forum | February 2023
AI | 7 problems facing Bing, Bard, and the future of AI search | The Verge | February 2023
Reproducibility | The reproducibility issues that haunt health-care AI | Nature | January 2023
AI | Hello, ChatGPT—Please Explain Yourself! | IEEE Spectrum | December 2022
Hype | OpenAI's ChatGPT Is Seen as a Path-breaking Chatbot for AI–But Experts Are Not Impressed | Indian Express | December 2022
Hype | The Artificial Intelligence Field Is Infected With Hype | LA Times | October 2022
Labor impact | Will AI Make Artists Obsolete? | Prospect Magazine | October 2022
Reproducibility | Scientists are sloppy with machine learning (translated) | NRC | August 2022
Hype | Dangerous overoptimism (translated) | Frankfurter Allgemeine Zeitung | August 2022
Reproducibility | Sloppy Use of Machine Learning Is Causing a 'Reproducibility Crisis' in Science | WIRED | August 2022
Reproducibility | Could Machine Learning Fuel a Reproducibility Crisis in Science? | Nature | July 2022