Large language models (LLMs) have made steady progress in formal mathematics, achieving performance approaching the level of the International Mathematical Olympiad (IMO). This talk presents two complementary advances toward more capable and interpretable formal proving systems. First, we introduce LeanDojo, a foundational open-source toolkit that bridges machine learning and Lean, enabling large-scale data extraction, interactive training against the proof assistant, and the development of ReProver, a retrieval-augmented Lean prover. Next, we turn to a critical challenge: proofs produced by LLMs are often unnecessarily long, redundant, and opaque. To address this, we introduce ProofOptimizer, a system that automatically simplifies Lean proofs while preserving correctness. It combines symbolic linting, a fine-tuned 7B model, and iterative refinement, reducing proof length by up to 87% on MiniF2F and 57% on PutnamBench, and even halving some IMO-level proofs. Together, these systems demonstrate how AI can make automatically generated proofs not only possible, but also increasingly comprehensible.
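
To illustrate the kind of simplification at stake, here is a minimal Lean 4 sketch (the theorem names are invented for this example, and the rewrite is not output from ProofOptimizer): a redundant, machine-generated-style proof of a trivial fact next to the one-token proof a simplifier could produce. Both are accepted by Lean; only the second is easy to read.

```lean
-- Verbose, machine-generated style: an induction nobody needs.
theorem add_zero_verbose (n : Nat) : n + 0 = n := by
  induction n with
  | zero => rfl
  | succ k ih =>
    rw [Nat.succ_add]  -- rewrites the goal to `Nat.succ (k + 0) = Nat.succ k`
    rw [ih]            -- rewrites `k + 0` to `k`, closing the goal by reflexivity

-- Simplified: `n + 0` reduces to `n` definitionally, so `rfl` suffices.
theorem add_zero_simple (n : Nat) : n + 0 = n := rfl
```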