Micah Goldblum

Math and ML. Trying to understand neural networks better.

I am currently a postdoctoral research fellow at New York University, working with Andrew Gordon Wilson. My research spans both fundamental and applied problems in machine learning: I build practical systems that are performant and reliable on real-world problems, and I work to understand how and why such systems work. My portfolio includes award-winning work in Bayesian inference, generalization theory, algorithmic reasoning, transfer learning, and AI security; our recent paper on model comparison received the Outstanding Paper Award at ICML 2022. Before my current position, I received a Ph.D. in mathematics from the University of Maryland and worked as a postdoctoral research fellow with Tom Goldstein. Since 2020, 37 of my papers have been accepted at top CS venues (NeurIPS, ICML, ICLR, CVPR, AAAI, and TPAMI).

news

Jan 20, 2023 — 8 papers accepted to ICLR 2023
Sep 14, 2022 — 7 papers accepted to NeurIPS 2022
Jul 17, 2022 — Outstanding Paper Award at ICML 2022

selected publications

  1. Bayesian Model Selection, the Marginal Likelihood, and Generalization
    Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML) Outstanding Paper Award, 2022
  2. Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
    Micah Goldblum, Liam Fowl, and Tom Goldstein
    Advances in Neural Information Processing Systems (NeurIPS), 2020
  3. Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks
    Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, and Tom Goldstein
    International Conference on Machine Learning (ICML), 2020
  4. Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory
    Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, and Tom Goldstein
    International Conference on Learning Representations (ICLR), 2020
  5. Adversarially Robust Distillation
    Micah Goldblum, Liam Fowl, Soheil Feizi, and Tom Goldstein
    Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020
  6. Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks
    Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, and Tom Goldstein
    Advances in Neural Information Processing Systems (NeurIPS), 2021
  7. Adversarial Examples Make Strong Poisons
    Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, and Tom Goldstein
    Advances in Neural Information Processing Systems (NeurIPS), 2021
  8. Data Augmentation for Meta-Learning
    Renkun Ni, Micah Goldblum, Amr Sharaf, Kezhi Kong, and Tom Goldstein
    International Conference on Machine Learning (ICML), 2021
  9. Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
    Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P Dickerson, and Tom Goldstein
    International Conference on Machine Learning (ICML), 2021
  10. LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition
    Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John P Dickerson, Gavin Taylor, and Tom Goldstein
    International Conference on Learning Representations (ICLR), 2021
  11. The Intrinsic Dimension of Images and Its Impact on Learning
    Phillip Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, and Tom Goldstein
    International Conference on Learning Representations (ICLR), 2021