I am currently a postdoctoral research fellow at New York University working with Yann LeCun and Andrew Gordon Wilson. My research focuses on both applied and fundamental problems in machine learning:
Safe and reasonable AI - Modern AI systems contain biases and security vulnerabilities, expose users to privacy breaches, and exhibit catastrophic failures of reasoning and generalization. My work aims to detect and close these gaps.
Mathematical and computational tools for understanding and improving neural networks - Despite rapid advances in capabilities, our understanding of why neural networks work remains limited. My research focuses on the structures in neural networks and their training procedures that enable them to generalize in practice.
My portfolio includes award-winning work in Bayesian inference, generalization theory, algorithmic reasoning, and AI security and privacy. Our recent paper on model comparison received the Outstanding Paper Award at ICML 2022. Before my current position, I received a Ph.D. in mathematics from the University of Maryland, where I worked with Tom Goldstein and Wojciech Czaja. Since 2020, 46 of my papers have been accepted at top CS venues (NeurIPS, ICML, ICLR, CVPR, AAAI, and TPAMI).
| Date | News |
| --- | --- |
| Sep 21, 2023 | 9 papers accepted to NeurIPS 2023 |
| Jan 20, 2023 | 8 papers accepted to ICLR 2023 |
| Sep 14, 2022 | 7 papers accepted to NeurIPS 2022 |
| Jul 17, 2022 | Outstanding Paper Award at ICML 2022 |
- Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks. Advances in Neural Information Processing Systems (NeurIPS), 2023.
- On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition. Advances in Neural Information Processing Systems (NeurIPS), 2023.
- Bayesian Model Selection, the Marginal Likelihood, and Generalization. International Conference on Machine Learning (ICML), 2022. Outstanding Paper Award.
- Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise. Advances in Neural Information Processing Systems (NeurIPS), 2023.
- Adversarially Robust Few-Shot Learning: A Meta-Learning Approach. Advances in Neural Information Processing Systems (NeurIPS), 2020.
- Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks. International Conference on Machine Learning (ICML), 2020.
- Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory. International Conference on Learning Representations (ICLR), 2020.
- Adversarially Robust Distillation. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020.
- Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks. Advances in Neural Information Processing Systems (NeurIPS), 2021.
- LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition. International Conference on Learning Representations (ICLR), 2021.
- The Intrinsic Dimension of Images and Its Impact on Learning. International Conference on Learning Representations (ICLR), 2021.