Talks

Stanford NLP Seminar
“Controlling and Editing Knowledge in Large Language Models” [slides]

OpenAI
“The Unreasonable Effectiveness of Easy Training Data for Hard Tasks” [slides]

Center for Human-Compatible AI (CHAI), UC Berkeley
“The Unreasonable Effectiveness of Easy Training Data for Hard Tasks” [slides]

Brown University
“Interpretable and Controllable Language Models” [slides]

Princeton University
“Interpretable and Controllable Language Models” [slides]

New York University
“Interpretable and Controllable Language Models” [slides]

University of Pennsylvania
“Interpretable and Controllable Language Models” [slides]

University of Oxford
“Explainable Machine Learning in NLP: Methods and Evaluation” [slides]

NEC Laboratories Europe
“Explainable Machine Learning in NLP: Methods and Evaluation” [slides]

National Institute for Standards and Technology (NIST)
“Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?” [slides]

Allen Institute for AI
“Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs” [slides]

Uber AI
“The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations” [slides]

Center for Human-Compatible AI (CHAI), UC Berkeley
“Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?” [slides]