Large language models and their descendants are rapidly approaching human proficiency on a wide range of knowledge work tasks, and in many cases have already exceeded it. I will review some recent progress in the field and examine its implications for mathematical research. In particular, I will make the case for the following claims: (1) mathematical research will soon become predominantly AI-assisted, (2) frameworks for formalized mathematics like Lean will play a critical role in making mathematical research legible to both humans and AI reasoning systems at scale, and (3) the future of mathematics has already begun to arrive, but is not evenly distributed.

Jesse Michael Han is the CEO and founder of Morph Labs, which is developing infrastructure for robust and scalable AI code generation. He was previously a senior research scientist at OpenAI, where he contributed to GPT-4 and to automated theorem proving for mathematical Olympiad problems. He received his PhD in 2023 from the University of Pittsburgh under the supervision of Tom Hales, with a dissertation on applications of large language models to neural theorem proving.