In the last post I presented some historical context about programming and mathematical methodology. If you read it, you should have an idea of when and why programmers started to investigate mathematical methodology. However, I haven’t mentioned any aspects of mathematical methodology that can help us improve our programming or mathematical skills.

In this post, I’ll talk about mathematical proofs. And what is the relevance of this topic to programmers? Well, computer programs are mathematical formulae, with a precise formal meaning, embodying constructive theorems about the systems they implement (as Roland Backhouse puts it in “Mathematics and Programming – A Revolution in the Art of Effective Reasoning”). The difference between the theorems embodied by computer programs and the ones usually studied in mathematics is that the former are applied by an unforgiving machine, with the effect that the smallest error can cause enormous damage. This means that computer programmers must create trustworthy designs, i.e., the constructive theorems embodied by their programs must be programmed correctly.

## Mathematical Proofs

A mathematician’s job is to do mathematics, i.e., to design and present theorems, arguments, algorithms and, in some cases, whole theories. However, the traditional mathematical curriculum is more concerned with teaching mathematical facts — existing theories and concepts — than with the doing of mathematics. And even when design and presentation get some attention, they are treated separately: the design of solutions is viewed as a psychological issue, while presentation is viewed as a matter of personal style (words from Dijkstra’s note on Mathematical Methodology).

A proof of a theorem should demonstrate, using certain facts (also known as axioms) or previously proved theorems, why it is true. Additionally, a good proof should explain clearly how the facts are combined and it should express the design considerations so that readers can understand it better, explain it to others and prove other theorems in a similar fashion. Now, look at the following conventional proof of $A{\cup}(B{\cap}C) = (A{\cup}B){\cap}(A{\cup}C)$, which is similar to the majority of proofs we can find in math textbooks (the proof is actually from a math textbook, but it was extracted from [GS95]).

We first show that $A{\cup}(B{\cap}C){\subseteq}(A{\cup}B){\cap}(A{\cup}C)$. If $x{\in}(A{\cup}(B{\cap}C))$ , then either $x{\in}A$ or $x{\in}(B{\cap}C)$. If $x{\in}A$, then certainly $x{\in}(A{\cup}B)$ and $x{\in}(A{\cup}C)$, so $x{\in}((A{\cup}B){\cap}(A{\cup}C))$. On the other hand, if $x{\in}(B{\cap}C)$, then $x{\in}B$ and $x{\in}C$, so $x{\in}(A{\cup}B)$ and $x{\in}(A{\cup}C)$, so $x{\in}((A{\cup}B){\cap}(A{\cup}C))$. Hence, $A{\cup}(B{\cap}C) {\subseteq} (A{\cup}B){\cap}(A{\cup}C)$.

Conversely, if $y{\in}((A{\cup}B){\cap}(A{\cup}C))$, then $y{\in}(A{\cup}B)$ and $y{\in}(A{\cup}C)$. We consider two cases: $y{\in}A$ and $y{\notin}A$. If $y{\in}A$, then $y{\in}(A{\cup}(B{\cap}C))$, and this part is done. If $y{\notin}A$, then, since $y{\in}(A{\cup}B)$ we must have $y{\in}B$. Similarly, since $y{\in}(A{\cup}C)$ and $y{\notin}A$, we have $y{\in}C$. Thus, $y{\in}(B{\cap}C)$, and this implies $y{\in}(A{\cup}(B{\cap}C))$. Hence $(A{\cup}B){\cap}(A{\cup}C) {\subseteq} A{\cup}(B{\cap}C)$. The theorem follows.

Proofs are usually like this one: written in natural language, which, by its nature, admits ambiguity. Such proofs, also called informal proofs, place a large burden on the reader, since it is difficult to see precisely how the facts interact. To achieve the crispness we aspire to, we must deviate from informal proofs. That is why most of our mathematical reasoning is carried out in what is known as the calculational method. This method reduces proof obligations to targets to be reached by formula manipulation. Although there are criticisms of and objections to the method, experience shows that arguments become clearer and more systematic, since we (usually) restrict ourselves to a very modest repertoire and to simple syntactic manipulations. Besides, with the development of the method, heuristics have emerged that guide the writer towards certain decisions (almost according to the principle “There is really only one thing one can do.”).

## Proof Format

Note that calculational arguments usually start with a boolean expression that, by value-preserving transformations, is massaged according to our needs (we can evaluate it to true or false, or we can transform it into an equivalent boolean expression). So, if we want to evaluate a boolean expression A to true and it takes three steps, then, for general expressions B and C, we would have:

- the first step would establish A is B;

- the second step would establish B is C;

- the third step would establish C is true.

This means that the intermediate expressions B and C would each be written twice. In general, B and C can be long expressions, so we need a proof format that avoids repeating intermediate expressions. We use Wim Feijen’s proof format (described in detail in EWD999: Our Proof Format, Predicate Calculus and Program Semantics and in Chapter 3 of Program Construction: Calculating Implementations from Specifications), which for this small example would render:

\[

\beginproof

\pexp{A}

\hint{=}{hint why $A{\equiv}B$}

\pexp{B}

\hint{=}{hint why $B{\equiv}C$}

\pexp{C}

\hint{=}{hint why $C{\equiv}true$}

\pexp{true ~~.}

\endproof

\]

The advantages of this format go beyond brevity. It allows us to conclude immediately that A is true without reading the intermediate expressions, and we can also use any transitive relation between the steps. Moreover, the use of a systematic proof format allows us to compare two different proofs of the same theorem more effectively. Now, look at how the proof presented above in an informal style can be rewritten in a calculational style:

Below, we prove $v{\in}(A{\cup}(B{\cap}C)) {\equiv} v{\in}((A{\cup}B){\cap}(A{\cup}C))$. By Extensionality (the definition of equality of sets), we then conclude $A{\cup}(B{\cap}C) {=} (A{\cup}B){\cap}(A{\cup}C)$.

\[

\beginproof

\pexp{v{\in}(A{\cup}(B{\cap}C))}

\hint{=}{Definition of $\cup$}

\pexp{v{\in}A {~\vee~} v{\in}(B{\cap}C)}

\hint{=}{Definition of $\cap$}

\pexp{v{\in}A {~\vee~} (v{\in}B \wedge v{\in}C)}

\hint{=}{Distributivity of $\vee$ over $\wedge$}

\pexp{(v{\in}A {\,\vee\,} v{\in}B) \wedge (v{\in}A {\,\vee\,} v{\in}C)}

\hint{=}{Definition of $\cup$, twice}

\pexp{(v{\in}(A {\cup} B)) \wedge (v{\in} (A {\cup} C))}

\hint{=}{Definition of $\cap$}

\pexp{v{\in}((A{\cup}B){\cap}(A{\cup}C)) ~~.}

\endproof

\]

In this second version, the proof format makes all the ingredients much more evident; every step is clearly identified, and we can see that the proof uses a common strategy: it eliminates the union and intersection operators using their definitions, performs some propositional manipulation, and then reintroduces the operators. Also, this second argument avoids the separate “if” and “only if” arguments.
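Though of course not a substitute for the proof, the identity can also be checked mechanically. The following Python sketch (my own illustration, not part of the original post) exhaustively tests every triple of subsets of a small three-element universe:

```python
from itertools import combinations

def subsets(u):
    """All subsets of u, as frozensets (2^|u| of them)."""
    return [frozenset(c) for r in range(len(u) + 1)
            for c in combinations(u, r)]

U = {0, 1, 2}
# 8^3 = 512 triples: an exhaustive check of the identity over U.
for A in subsets(U):
    for B in subsets(U):
        for C in subsets(U):
            assert A | (B & C) == (A | B) & (A | C)
print("identity holds for all subsets of", U)
```

If any assertion failed, the identity would be refuted; that none fails is, for this tiny universe, exactly the content of the theorem.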

## Final notes

To conclude, I’d just like to quote a paragraph from Roland Backhouse’s article mentioned above:

“The vast complexity of computer software makes its unreliability understandable, but not excusable. The implementation of reliable computer systems, carrying a guarantee of fitness-for-purpose, imposes major intellectual challenges that can only be met by a science of computing whose hallmark is the avoidance of error.”

Delightful reading you have given us, João; congratulations on it.

The good thing about constructive proofs is that you never lose yourself; the next step is always (or at least most of the time) very clear, and that can be seen in your example!

Let’s see if you can show us more interesting proofs than just the distribution of the union operator.

Also, let me ask you a question: you said that calculations always start with a boolean formula. So when you want to prove an algorithm, what you do is find its invariant and try to prove it?

How do you intend to address the non-deterministic issues faced by programmers such as file and network I/O?

I ask because most of the hard problems I see involve handling the cases where functions do not work as expected: a file is locked, the data is corrupted, the server is not accessible.

Seems to me that trying to write proofs of applications is like trying to prove 2A+2B = 2(A+B) when A+A only equals 2A 98% of the time.

I kind of feel that you’re comparing two quite different proofs of the statement, rather than two ways of writing the same proof. Also, this is pretty close to the worst example to use natural language for — I agree that sometimes a more calculational style works better.

However, there are many proofs where the symbolic computational style does a horrible job, by not allowing trivial details to be hidden. In mathematics, proofs are written for humans, and checked by humans. It can be tedious to read proof details which are immediately obvious to you. Hence, proofs are written with an audience in mind. Steps are described with details (that should be obvious) skipped. One of my professors put it somewhat humorously by saying that what we’re aiming for is not so much a rigorous proof as a rigorisable proof: if anyone in the room can with little effort translate the proof into one that a machine can understand, then it’s good. Of course, this requires knowledge of the people in the room. Books for more general audiences have to include more and more of the “trivial” details.

Consider the following proof of the Lebesgue number lemma, which was the one given in my Algebraic Topology course, directly quoted from the course notes.

—-

Lebesgue number lemma: If (X,d) is a compact metric space, and V is an open cover of X, then there exists λ > 0 such that for any subset A of diameter less than λ, there exists U in V containing A as a subset. (The number λ is called a Lebesgue number for the cover.)

Proof: For a contradiction, suppose that we could find a sequence of sets A_n of diameters less than 1/n, none of which was inside any member of the cover. Choose a point x_n in A_n for each n. This sequence of points has a convergent subsequence, since we’re inside a compact metric space. (Compact implies complete and bounded.) Let its limit lie in U, an element of V. Choose ε > 0 such that the ε-ball centred at the limit is inside U. Then choose n so that x_n is in the subsequence and within ε/2 of the limit, and so that 1/n

Hmm, it appears as if my message has gotten cut off.

I’ll try again, and reconstruct the end of the message.

—-

Lebesgue number lemma: If (X,d) is a compact metric space, and V is an open cover of X, then there exists λ > 0 such that for any subset A of diameter less than λ, there exists U in V containing A as a subset. (The number λ is called a Lebesgue number for the cover.)

Proof: For a contradiction, suppose that we could find a sequence of sets A_n of diameters less than 1/n, none of which was inside any member of the cover. Choose a point x_n in A_n for each n. This sequence of points has a convergent subsequence, since we’re inside a compact metric space. (Compact implies complete and bounded.) Let its limit lie in U, an element of V. Choose ε > 0 such that the ε-ball centred at the limit is inside U. Then choose n so that x_n is in the subsequence and within ε/2 of the limit, and so that 1/n

Ah, I see the problem! Your system destroys everything after a less-than sign. (Presumably in some heavy-handed attempt to strip HTML.) There’s a touch of irony in trying to talk about proofs of correctness of software and being repeatedly bitten by bugs in the software being used to communicate!

—-

… 1/n is less than ε/2. This leads to a contradiction in the way that you’ve seen uncountably often!

—-

This was obviously not written for today’s proof checking machines! However, if you asked anyone in the class at the time, they would have little trouble constructing such a machine-checkable proof. (Of course, after being introduced to whatever formal proof checking software you’d care to use.)

So the real issue is that proofs are a form of communication. Like any form of communication, one needs to consider the audience. If your audience consists of a room full of people who have seen this sort of proof many times before, you’ll be more able to leave out uninteresting details. Leaving them in will just cause those people to fall asleep.

However, if we’re talking about proofs of software correctness, indeed many such proofs are tedious and even boring almost no matter how you slice them. Quite often they break down to structural induction with lots of cases that need to be checked. For this sort of proof, a more mechanical style can be beneficial. Also, due to the fact that they are lengthy and boring, one often doesn’t want to entrust humans with checking them, both for the humans’ sake and the sake of the proof. So in these cases, we want to have software which checks the proofs for us, and are constrained to a mechanical level of detail. We also might want to help with that by having the machine construct much of the proof for us, as is done in type systems with type checking and inference. The interesting problem then becomes the checking of that software’s correctness.
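For the running set identity of the post, such a machine-checked proof can indeed be very short. Here is a sketch in Lean 4 (assuming the Mathlib library; the particular lemma and tactic names are my assumption, not something from the discussion): unfold membership, then let a propositional tactic discharge the distributivity.

```lean
import Mathlib.Data.Set.Basic

-- Machine-checked version of A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C):
-- extensionality reduces set equality to a membership equivalence,
-- `simp only` unfolds ∪ and ∩ into ∨ and ∧, and `tauto` finishes.
example (α : Type) (A B C : Set α) :
    A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) := by
  ext v
  simp only [Set.mem_union, Set.mem_inter_iff]
  tauto
```

Note how the tactic script mirrors the calculational proof step for step: `ext` is Extensionality, `simp only` is “Definition of ∪ and ∩”, and `tauto` covers the distributivity of ∨ over ∧.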

I just thought that a few of the remarks about proofs came off as overly broad. The real issue in writing good proofs is simply to always keep the audience in mind.

Hi Joao, seems that your feed is not working. Please check that out.

Hi. Thank you all for your comments.

Isaac: there was a problem between feedburner and the server where the blog is hosted. Everything should be fixed now.

Paulo: calculational arguments *usually* start with a boolean statement and, by manipulation, try to derive something useful. Finding and proving invariants is very important, and to prove the invariants you can use calculational arguments!
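As a tiny illustration of the idea (a hypothetical example of my own, not something from the original discussion): here is a summation loop with its invariant stated explicitly as assertions. The calculational proof obligation is precisely that no `assert` can ever fail.

```python
def sum_upto(n):
    """Sum 0 + 1 + ... + (n - 1), with the loop invariant made explicit."""
    s, i = 0, 0
    while i < n:
        # Invariant: s is the sum of 0..i-1, i.e. s == i*(i-1)/2.
        assert s == i * (i - 1) // 2
        s += i
        i += 1
    # On exit, the invariant plus the negated guard (i == n) gives the result.
    assert s == n * (n - 1) // 2
    return s

print(sum_upto(10))  # 45
```

The runtime assertions only check the invariant on the executions we happen to run; a calculational proof that the loop body preserves the boolean formula `s == i*(i-1)/2` discharges the obligation once and for all.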

Cale: Thank you for your great comment! Just a remark about HTML stripping: it’s not that ironic, since the less-than stripping is in the specification. Anyway, I’ll see if I can fix it.

First, the proofs *are* different. And I consider the second better for teaching the theorem. You talk about “symbolic computational style”; what’s that? The calculational method was invented for humans, not for machines! It was invented by programmers, because programming has a scale problem: you really need to avoid unnecessary complications and detail. Therefore, I don’t understand what you say about too much detail (the idea is to avoid it without committing the sin of omission).

Also, remember that “our” proofs have an audience in mind: people who want to learn how to do mathematics. By making clear the design decisions, steps become motivated and proofs “easier” to learn and/or teach. That’s why we seek avoidance of unnecessary detail, unnecessary naming, unnecessary case analysis, etc. We want to be explicit about the arguments and about the considerations that went into the arguments’ design.

Regarding your example, I don’t know (almost) anything about metric spaces. But I’ll try to think about the lemma and see if I can figure out a way of proving it differently.

By the way, you gave me an idea for a future post about “reductio ad absurdum” (related with Euclid’s proof that there are infinitely many primes).

Well, I’ll try to be more explicit about what I mean regarding “calculation-style”. It’s not an entirely cut-and-dried concept, and there are certainly lots of things in between prose and calculation.

Basically, the idea I’m trying to express with “calculation”, involves translating all of the ideas present into formal symbols, and then manipulating those symbols via rules until a string of symbols is reached which can be interpreted as the conclusion.

One might very well argue that all of mathematics is like this, and at a certain level, this is true. Symbolic formalism is the skeleton which holds mathematics together, and everything is held up to that standard at the end of the day.

However, in actually doing mathematics, manipulating formal strings of symbols is far less frequent than one would expect, given that those strings of symbols are at some level the things we’re actually working with. There’s a great deal of intuition involved in the construction of proofs. This is necessary, because there are far too many potential ways to push symbols around, and without that intuition, there’s almost no way to tell which might be appropriate. One might stumble around in the dark for days without ever finding a solution. While in the case of your proof, the symbols for “intersection” and “union”, “and” and “or” have been carefully designed to make the necessary manipulations somewhat obvious (and the theorem is pretty simple to begin with), in general, the way to proceed often isn’t so clear at the formal level.

Supposing that we were having trouble coming up with the formal proof, another approach could be taken to your particular example, which would simply be to draw a Venn diagram consisting of 3 mutually overlapping circles, representing the sets A, B, and C. We’d then just determine which of the 7 regions are in each of the two sets, and come to the conclusion that they’re the same. This work could then be turned into a formal proof. (Though perhaps not as nice as your second one, but possibly a bit clearer than the first.)
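The Venn-diagram argument can itself be mechanised: membership of a point in either side of the identity depends only on which of A, B, C it belongs to, so checking the 8 membership patterns (the 7 diagram regions plus the outside) settles the matter. A small Python sketch of that check (my own illustration of the commenter's suggestion):

```python
from itertools import product

# Each (a, b, c) pattern records whether a point lies in A, B, C;
# these patterns correspond to the regions of the 3-circle Venn diagram.
for a, b, c in product([False, True], repeat=3):
    in_lhs = a or (b and c)          # point in A ∪ (B ∩ C)
    in_rhs = (a or b) and (a or c)   # point in (A ∪ B) ∩ (A ∪ C)
    assert in_lhs == in_rhs
print("all 8 membership patterns agree")
```

This is the region-by-region comparison of the diagram, written out; it is also, not coincidentally, a truth table for the distributivity of ∨ over ∧ that the calculational proof invokes.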

Set theory doesn’t provide the clearest examples of the experience that I’m describing though, as much of it really is about that formalism, and a great deal of it is very difficult to picture.

However, in many branches of mathematics (analysis, topology, combinatorics, geometry, and even large portions of algebra), the informal view of the objects being considered equals, or even dominates, the formal view in importance. Whenever a new object is defined, there are very often certain propositions whose failure would probably lead us to decide that there was a mistake in our definition. We couldn’t do that if there weren’t some fuzzy notion that the formalism was attempting to clarify.

When we define things in mathematics, the real extents of what will grow from whatever axiomatic seeds we plant is often unclear. We start from some fuzzy notion, plant some seeds in the middle of it, and see what happens. Quite often the resulting formalism outgrows the original fuzzy idea, and similarly, quite often some of the original idea is left out. Sometimes when the original idea is outgrown, people get rather upset about things — take some people’s reaction to the geometric implications of the axiom of choice for instance.

Another reason the fuzzy ideas behind the formalism are important is the problems with deciding the consistency of mathematical formalisms. If our current set theory were to be found inconsistent, we might lose a lot of work, but I’d be willing to bet that we could find another set theory not too far away from it which doesn’t have the same problem, and which would preserve much of the work done in the rest of mathematics.

Anyway, this is almost getting off-topic. My real point is that often the fuzzy ideas which lie behind the formalism can be just as, if not more important than the formalism itself. Writing proofs as prose in English (or whatever other natural language) can often hide some of that formalism, but very often does a better job of getting across the shapes of the “fuzzy notions” involved. Such proofs are quite likely easier to translate into other similar formalisms, and often are easier to remember and reproduce. Of course, there are exceptions — many things really are better expressed by manipulating strings, but for me at least, this quite often isn’t the case.

awts!!! very nice article.. wish that part 3 will come

Thanks Ronel. I am not planning to write a part III. Instead, I’ll try to convey other ideas using particular examples.

Pingback: algebraic combinatorics
