We Should Formalise Physics
Why the PhysLean project is important
Lean is a programming language. But its main appeal is that it helps you prove things in math.
In Lean, you write programs that consist of ‘theorems’: statements that you think are true, written down in code. To get Lean to compile, you need to prove each theorem with something called tactics, which make use of prior theorems that you have already proved.
Lean fails to compile on logically invalid lines of code. So, given some starting axioms, every theorem in a library that compiles must follow logically from those axioms.
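To make this concrete, here is a minimal sketch in Lean 4 (the theorem name is mine; Nat.add_comm is an existing lemma in Lean’s core library):

    -- A statement we believe is true, written as a `theorem`.
    -- Everything after `:= by` is a tactic proof.
    theorem my_add_comm (a b : Nat) : a + b = b + a := by
      exact Nat.add_comm a b  -- reuse a theorem already proved in the library

    -- A false statement has no proof: uncommenting the line below
    -- makes the whole file fail to compile.
    -- theorem wrong (a : Nat) : a + 1 = a := by rfl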
This is incredibly powerful. Lean has been used to formalise mathematics in the Mathlib project, which aims to verify and record some of humanity’s greatest theorems. Lean has also hit the news because Google DeepMind used it to train and verify models that solve problems from the International Mathematical Olympiad.
Should we do the same with physics? Should we seek to formalise all of our known physical laws in Lean to check correctness?
Formalising physics into a codebase means that physicists will only have to prove things once. People who want to test new physical laws can check their mathematical correctness instantly, without waiting for peer review.
It can also be used to help mark physics students’ work and teach people rigour, which indirectly accelerates the field.
If we reach a point where generative AI can pose new physical theories or experiments, it’s essential to have some way of making sure the math is right. Right now, we have to check the math manually. But if generative AI creates thousands of theories a second, there will have to be a way to check them automatically.
This is why the PhysLean project, founded by Joseph Tooby-Smith (who has been an extremely kind and patient collaborator), is important, and you should check it out.
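To give a flavour of what a formalised physical statement could look like, here is a toy sketch of my own (not actual PhysLean code; the Particle structure and its field names are illustrative, and it assumes the Mathlib library for the real numbers and the lemmas used):

    import Mathlib

    -- A toy model of a point particle with positive mass.
    structure Particle where
      mass : ℝ
      velocity : ℝ
      mass_pos : 0 < mass

    -- Kinetic energy, (1/2) m v^2. Marked noncomputable because
    -- real-number arithmetic in Mathlib is not executable.
    noncomputable def kineticEnergy (p : Particle) : ℝ :=
      (1 / 2) * p.mass * p.velocity ^ 2

    -- A physical fact stated and proved once and for all:
    -- kinetic energy is never negative.
    theorem kineticEnergy_nonneg (p : Particle) : 0 ≤ kineticEnergy p := by
      unfold kineticEnergy
      have hv : (0 : ℝ) ≤ p.velocity ^ 2 := sq_nonneg _
      have hm : (0 : ℝ) ≤ 1 / 2 * p.mass := by
        have := p.mass_pos
        linarith
      exact mul_nonneg hm hv

Once a statement like this compiles, nobody has to re-derive it: anything downstream that uses kineticEnergy_nonneg gets the guarantee for free.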
As a caveat though, I’m a little torn about the philosophy of working on formalisation in general.
Physics is about writing rules that match reality, even if the rules aren’t mathematically rigorous, as long as they work. The Feynman path integral is a perfect example of this: the method works, but it hasn’t yet been rigorously defined in the field of mathematical analysis.
What this philosophy says about the act of doing physics formalisation is unclear to me, and I hope to write more on this in the future.

There was an interesting book (Walter Reitman, Cognition and Thought: An Information Processing Approach, Wiley, 1965) that proposed formalizing models in psychology as programs, which must at the very least be precisely specified. This then became a popular topic of discussion at MIT. That "neat" approach has somewhat lost momentum to modern "scruffy" models at the moment (Marvin Minsky, Neat versus scruffy, Artificial Intelligence at MIT, MIT Press, 1990). As you may recall, Voevodsky also led an early formalization effort in mathematics, one of several.
"Physics is about writing rules that match reality, even if the rules aren’t mathematically rigorous. As long as it works. The Feynman path integral is a perfect example of this. What this philosophy says about the act of doing physics formalisation is unclear to me, and I hope to write more on this in the future."
There are definitely some challenges in this respect, and they are well worth thinking about. First of all, while interpretation doesn't change a formal model, it does change the open-ended conjectures that are tentatively "suggested" by a model. AI, being a "scruffy" system, might actually be quite good at exploring this.
Wheeler's quantum foam proposal is another example. It has many competing formalizations and still no "obviously correct" formal framework, and empirically it is not quite clear what this means. Penrose's paper on spin networks is another one: the original version only models some particles, and a broader formalization has several options. Wheeler's earlier idea, the scattering matrix (and then the bootstrap, etc.), was therefore more influential around the same time.
If one of many possible formalizations of a proposed rule is found invalid over a specific system of axioms, the question of what that means is currently somewhat undertheorized.
I think there is value in work on foundations of physics, including formalization. But it seems that progress in physics does not come from this but from intuitive, ad-hoc approaches that nevertheless manage to explain and generalize.