About Me

I am a third-year PhD student in the MURGe-Lab at the University of North Carolina at Chapel Hill, where I am advised by Mohit Bansal. My work at UNC is supported by a Google PhD Fellowship and was previously supported by a Royster Fellowship. Before this, I graduated with a bachelor’s degree from Duke University, where my thesis advisor was Cynthia Rudin. At Duke I was supported by a Trinity Scholarship.

My research interests center on interpretable machine learning and natural language processing. I am particularly interested in techniques for explaining model behavior and aligning ML systems with human values, problems in which I see natural language playing a prominent role. More broadly, I am interested in topics related to AI Safety. In all of these areas, I find work on clarifying concepts and developing strong evaluation procedures especially valuable.

Email: peter@cs.unc.edu

News

  • 2021 - Awarded a Google PhD Fellowship for Natural Language Processing!
  • 2021 - Invited talk at CHAI, UC Berkeley, on Evaluating Explainable AI
  • 2021 - Paper accepted to EMNLP 2021: “FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging” [pdf] [code]
  • 2021 - Named an Outstanding Reviewer for ACL-IJCNLP 2021
  • 2021 - New paper on arXiv! “Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals” [pdf] [code]
  • 2021 - Started summer internship at FAIR, supervised by Srini Iyer
  • 2021 - New blog post on the Alignment Forum: “Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers” [link]
  • 2021 - New preprint on arXiv: “When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data” [pdf] [code]
  • 2020 - New preprint on arXiv! “FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging” [pdf] [code]
  • 2020 - Recognized as an Outstanding Reviewer for EMNLP 2020
  • 2020 - Paper accepted to Findings of EMNLP 2020: “Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?” [pdf] [code]
  • 2020 - Paper accepted to ACL 2020: “Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?” [pdf] [code]
  • 2019 - Paper accepted to AAAI-HCOMP 2019: “Interpretable Image Recognition with Hierarchical Prototypes” [pdf] [code]
  • 2019 - Joined the UNC NLP lab
  • 2019 - Graduated with a B.S. from the Department of Statistical Science at Duke University
  • 2019 - Awarded the William R. Kenan Jr. (Royster) Fellowship from UNC Chapel Hill
  • 2018 - Received First Prize in Dartmouth’s PoetiX Literary Turing Test (see submission [pdf] and [code])
  • 2018 - Nominated for Statistical Science Undergraduate TA of the Year