Micah Goldblum
Math and ML. Trying to understand neural networks better.
micah.g[at]columbia[dot]edu
Google Scholar

🚨 Note: I am recruiting PhD students, primarily for programs in electrical engineering and computer science. Reach out if you’re interested. 🚨
I am an assistant professor at Columbia University. My research focuses on both applied and fundamental problems in machine learning:
- AI safety - Modern AI systems contain biases and security vulnerabilities, expose users to privacy breaches, and exhibit catastrophic failures of reasoning and generalization. My work aims to detect and close these gaps.
- Mathematical and computational tools for understanding and improving neural networks - Despite rapid advances in capabilities, our understanding of why neural networks work is highly limited. My research focuses on the structures in neural networks and their training procedures that enable them to generalize in practice.
- Deep learning for data science and tabular data - Vast communities of AI researchers study language and vision applications, yet most industrial and scientific data is tabular, and relatively few researchers study deep learning for tabular data. My research aims to build useful deep learning tools for data science.
My portfolio includes work in Bayesian inference, generalization theory, algorithmic reasoning, and AI security and privacy. Our recent paper on model comparison received the Outstanding Paper Award at ICML 2022. Before my current position, I was a postdoctoral research fellow at New York University working with Yann LeCun and Andrew Gordon Wilson. I received a Ph.D. in mathematics from the University of Maryland, where I worked with Tom Goldstein and Wojciech Czaja.
news
| Date | News |
| --- | --- |
| Sep 26, 2024 | 4 papers accepted to NeurIPS 2024 |
| May 01, 2024 | 4 papers accepted to ICML 2024 |
| Jan 16, 2024 | 3 papers accepted to ICLR 2024 |
| Sep 21, 2023 | 9 papers accepted to NeurIPS 2023 |
| Jan 20, 2023 | 8 papers accepted to ICLR 2023 |