This time last year, I came up with an idea: I wanted to teach my computer how to do complex analysis. I wanted to be able to plug in an equation, something akin to $|z| = 3$ or slightly more complicated, and have my computer output something relating to that function. The first task was to get it to graph it. I spent many a day fiddling with graphing software to get it to plot this modulus graph, but it just did not work; it flat out refused. I knew a way to get it to graph, but that meant doing some algebra by hand, and I wanted the process to be fully automated. So I went to a Software Cornwall event and asked there. The answers I got were exciting but, for me, unimplementable: they would have meant spending many months learning how to build a genetic algorithm, and with the tech I had at the time, a Raspberry Pi, it was going to be an uphill struggle.
A genetic algorithm is a piece of code inspired by Darwin's theory of evolution. The computer makes several copies of the software, each with slight differences, tests which one performs best, and kills off the rest.
My piece of software would feed lots of bits of algebra into the system, and the candidate that got closest to the answer would be the one mutated again. This would carry on until the software could understand and solve the kind of algebra I desired.
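As a sketch of the idea (hypothetical code, not the project I had planned): evolve a population of candidate answers, keep whichever is fittest, and mutate it into the next generation.

```python
import random

random.seed(1)  # make the run reproducible

def mutate(x, scale=0.5):
    # Each child is a slightly different copy of its parent.
    return x + random.uniform(-scale, scale)

def evolve(fitness, seed=0.0, population=20, generations=200):
    """Keep the fittest candidate each generation and kill off the rest."""
    best = seed
    for _ in range(generations):
        candidates = [best] + [mutate(best) for _ in range(population - 1)]
        best = min(candidates, key=fitness)  # lower fitness score = better
    return best

# Toy goal: "solve" x + 3 = 0 by minimising the error |x + 3|.
solution = evolve(lambda x: abs(x + 3))
```

A real version would mutate expression trees rather than single numbers, which is where the many months of learning would have gone.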
It was only when I met and chatted with a PhD student, who was doing something vaguely similar to what I wanted to do, that I learnt about algebra engines.
An algebra engine, or CAS (computer algebra system), is a system that does algebra for you, much in the way that humans do: it provides an answer or a simplification. As a final test for my CAS, I wanted one that could solve Diophantine equations when I put them in.
I had known that things like Wolfram Alpha existed and were like what I wanted, but I thought my system would be more straightforward, and I couldn't take anything of use from it. This is where the project almost tailed off, as my coding skills weren't up to coding Wolfram Alpha. I did play around in Python and created some functions that took algebra and simplified it, though I had to put in the commands myself and make sure they were in the right order. I knew this was a step forward I could build on, but time and people got in the way. Plus, by the time I picked this up again, I would have a solution.
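My Python attempt is long gone, but it worked along these lines (a hypothetical reconstruction, not my original code): each simplification step is its own function, and I had to call them in the right order by hand.

```python
def cancel_additive_inverses(terms):
    """Remove pairs like 'a' and '-a' from a sum held as a list of terms."""
    out = []
    for t in terms:
        neg = t[1:] if t.startswith('-') else '-' + t
        if neg in out:
            out.remove(neg)  # the pair sums to zero, so both go
        else:
            out.append(t)
    return out

def drop_zeros(terms):
    """Remove '0' terms, since x + 0 = x."""
    return [t for t in terms if t != '0']

# a + -a + b + 0: I choose the commands and their order myself.
expr = ['a', '-a', 'b', '0']
expr = drop_zeros(cancel_additive_inverses(expr))  # ['b']
```

The annoyance is visible even in this toy: nothing decides the order of the steps for you.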
So, we now fast forward to the summer of 2020. Time has fallen to pieces, and so has the world, due to SARS-CoV-2. I have all the time in the world, and I see something on Twitter: a tweet about Lean, something my friend and I joked about earlier in the year, and it finally hits me. It was what I was trying to code in Python! This is it! I can use this to computerise complex analysis!
That's what I am doing: I am learning Lean to code up complex analysis. I am getting heavily distracted along the way, but I am doing it. It's also a ridiculous excuse to learn some abstract algebra and little bits of 'useless' maths, like monoids and submonoids! Along the way, I may be pitching in (or at least trying to) on some other projects, such as the IMO Grand Challenge, a project to create a bot that can get a gold medal on an IMO paper.
Unfortunately, I have nothing to show yet apart from a simple example. Let us go through it so you can see what I am going to be doing and what Lean looks like:
example : ∀ (a : ℝ), ∃ (b : ℝ), a + b = 0 :=
begin
  intro a,
  use (-a),
  rw add_comm,
  rw neg_add_self,
end
That probably looks very confusing to anybody who hasn't seen it before. To begin, here is what the coding environment looks like:
We can see our example highlighted, along with a load of other stuff I have formalised from exercises in linear algebra books. However simple these things look, they must be formalised correctly in Lean for the engine to understand them. One important thing to realise about Lean is that something like $a + b = b + a$ is simple for a human to understand, but it's something a computer just doesn't know.
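Even commutativity has to be invoked as a theorem. In mathlib it is the lemma `add_comm`, so a one-line illustration (assuming mathlib is imported) looks like:

```lean
import data.real.basic

-- the computer only "knows" commutativity because mathlib proves it
example (a b : ℝ) : a + b = b + a := add_comm a b
```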
On the right, you see a panel called the Lean goal. When we put in the outline of our example:
example : ∀ (a : ℝ), ∃ (b : ℝ), a + b = 0 :=
begin
end
Note that our goal is the '∀ (a : ℝ), ∃ (b : ℝ), a + b = 0' bit, meaning, in conventional mathematical notation: for all $a$ there exists a $b$ such that $a + b = 0$. Again, this is trivial; the proof is letting $b = -a$, but a computer doesn't know this. The `example`, `begin` and `end` are just how you say: I'm going to prove a statement, and this is how I am going to prove it.
Before we start writing commands, we can talk about where they come from. There is a big overarching library called mathlib: the main open-source repository of all the commands or, as we call them, theorems, and anybody can add new ones. We can use `rw` with a theorem, or even an `apply`. There is a big difference between the two: if you have a direct equality, you can use `rw`, but if you have an implication, you must use `apply`. If you have an exact copy of a theorem in mathlib, you can just use `exact` to finish the proof.
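To illustrate the difference (a made-up example, not from my project): `rw` rewrites the goal with an equality, `apply` works backwards along an implication, and `exact` closes a goal that matches a hypothesis or theorem outright.

```lean
import data.real.basic

-- rw with the equality add_zero : a + 0 = a, then exact with a hypothesis
example (a b : ℝ) (h : a = b) : a + 0 = b :=
begin
  rw add_zero,  -- the goal becomes a = b
  exact h,      -- which is exactly our hypothesis
end

-- apply with an implication: the goal Q becomes the goal P
example (P Q : Prop) (hpq : P → Q) (hp : P) : Q :=
begin
  apply hpq,
  exact hp,
end
```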
At the start of the proof, our goal panel looks like:

1 goal
⊢ ∀ (a : ℝ), ∃ (b : ℝ), a + b = 0
For us to get to the juicier bit of the goal, we need to get rid of the $\forall$ and the $\exists$. With experience, you learn that two tactics will get rid of them, one each. Firstly, `intro a`: this gets rid of $\forall (a : ℝ)$ and adds it to the local context (the goal panel). So now our goal panel looks like:
1 goal
a : ℝ
⊢ ∃ (b : ℝ), a + b = 0
This produces the required $a : ℝ$ in the local context, leaving only the $∃ (b : ℝ)$ part. To get rid of this, we need to tell Lean what $b$ should look like, using things we have in the local context. This time we know we are heading for $a + b = 0$, so we let $b = -a$. Hence our next move is `use (-a)`. The Lean goal is now showing:
1 goal
a : ℝ
⊢ a + -a = 0
This time there is no change to the local context, but there is a change to the goal. Now we get stuck: we have a number added to its additive inverse, and we must prove that it's zero. Lean knows that the reals are a field.
A field is a mathematical structure, or a box, such that all the elements in the box conform to a certain set of rules. Simply put, all four foundational operations behave nicely and are closed, i.e. if $a$ and $b$ are in the field, so are $a + b$, $a \times b$, $a \div b$ and $a - b$, and none of them shoot off to infinity (we define $a \div 0 = 0$). Look here for more information.
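In Lean, "the reals are a field" is recorded as a typeclass instance in mathlib, and we can ask Lean to find it for us (a small aside, assuming mathlib is imported):

```lean
import data.real.basic

example : field ℝ := by apply_instance
```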
So we can use the tactic `rw` with the `neg_add_self` theorem, and the goal would disappear, if only the $a$ and $-a$ weren't in the wrong order. The theorem says that $-a + a = 0$, and we have $a + -a = 0$, so we use `rw add_comm` before our `rw neg_add_self` to swap the $a$'s and produce the `goals accomplished` we want.
So, you have completed your first Lean proof! Now it's time to go off and prove more stuff! I am part of a summer project with Imperial College London / the Xena Project. If you are interested, come and join us; you can find out more about what we do here. Also, go and play the Natural Number Game, or even have a go at a couple of similar exercises I created here.