Micah Goldblum

Math and ML. Trying to understand neural networks better.


micah.g[at]columbia[dot]edu

Google Scholar

🚨 Note: I am recruiting PhD students, primarily for programs in electrical engineering and computer science. Reach out if you’re interested. 🚨

I am an assistant professor at Columbia University. My research focuses on both applied and fundamental problems in machine learning:

  1. AI safety - Modern AI systems contain biases and security vulnerabilities, expose users to privacy breaches, and exhibit catastrophic failures of reasoning and generalization. My work aims to detect and close these gaps.

  2. Mathematical and computational tools for understanding and improving neural networks - Despite rapid advances in capabilities, our understanding of why neural networks work remains highly limited. My research investigates the structures in neural networks and their training procedures that enable them to generalize in practice.

  3. Deep learning for data science and tabular data - Vast communities of AI researchers study language and vision applications, yet most industrial and scientific data is tabular, and relatively little work addresses deep learning for this setting. My research aims to build useful deep learning tools for data science.

My portfolio includes work in Bayesian inference, generalization theory, algorithmic reasoning, and AI security and privacy. Our paper on model comparison received the Outstanding Paper Award at ICML 2022. Before my current position, I was a postdoctoral research fellow at New York University, working with Yann LeCun and Andrew Gordon Wilson. I received a Ph.D. in mathematics from the University of Maryland, where I worked with Tom Goldstein and Wojciech Czaja.

news

Sep 26, 2024 4 papers accepted to NeurIPS 2024
May 01, 2024 4 papers accepted to ICML 2024
Jan 16, 2024 3 papers accepted to ICLR 2024
Sep 21, 2023 9 papers accepted to NeurIPS 2023
Jan 20, 2023 8 papers accepted to ICLR 2023

selected publications

  1. The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning
    Micah Goldblum, Marc Finzi, Keefer Rowan, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML), 2024
  2. Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text
    Abhimanyu Hans, Avi Schwarzschild, Valeriia Cherepanova, Hamid Kazemi, Aniruddha Saha, Micah Goldblum, Jonas Geiping, and Tom Goldstein
    International Conference on Machine Learning (ICML), 2024
  3. Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks
    Micah Goldblum, Hossein Souri, Renkun Ni, Manli Shu, Viraj Uday Prabhu, Gowthami Somepalli, Prithvijit Chattopadhyay, Adrien Bardes, Mark Ibrahim, Judy Hoffman, Rama Chellappa, Andrew Gordon Wilson, and Tom Goldstein
    Advances in Neural Information Processing Systems (NeurIPS), 2023
  4. Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition
    Samuel Dooley, Rhea Sukthanker, John P Dickerson, Colin White, Frank Hutter, and Micah Goldblum
    Advances in Neural Information Processing Systems (NeurIPS), 2023
  5. Bayesian Model Selection, the Marginal Likelihood, and Generalization
    Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML) Outstanding Paper Award, 2022
  6. Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise
    Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, and Tom Goldstein
    Advances in Neural Information Processing Systems (NeurIPS), 2023
  7. Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
    Micah Goldblum, Liam Fowl, and Tom Goldstein
    Advances in Neural Information Processing Systems (NeurIPS), 2020
  8. Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks
    Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, and Tom Goldstein
    International Conference on Machine Learning (ICML), 2020
  9. Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory
    Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, and Tom Goldstein
    International Conference on Learning Representations (ICLR), 2020
  10. Adversarially Robust Distillation
    Micah Goldblum, Liam Fowl, Soheil Feizi, and Tom Goldstein
    Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020
  11. Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks
    Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, and Tom Goldstein
    Advances in Neural Information Processing Systems (NeurIPS), 2021
  12. LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition
    Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John P Dickerson, Gavin Taylor, and Tom Goldstein
    International Conference on Learning Representations (ICLR), 2021
  13. The Intrinsic Dimension of Images and Its Impact on Learning
    Phillip Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, and Tom Goldstein
    International Conference on Learning Representations (ICLR), 2021