PRL Project

Logical Investigations, with the Nuprl Proof Assistant

2. The Minimal Implicational Calculus

The propositional calculus, PC, is the normal starting point for the study of logic. It provides the meaning and proof rules for propositions whose structure can be analyzed using the standard logical connectives such as and (∧), or (∨), implies (⇒), and not (¬). The logical operation that is constructively most interesting is implication, and so we will start with a subsystem called the minimal implicational calculus. This calculus has precisely one propositional constant ⊥, exactly one logical operator ⇒ for implication, and an unbounded list of propositional variables whose official representation is var{char} where char is a character string. For example, a variable could be var{X}, var{Y}, var{Unknown}, and so forth. Normally we will write these more simply as A, B, C, P, Q, R, X, Y, etc. We sometimes denote this simple calculus as PC[⊥,⇒].

The Implication Operator

The idea of implication is something we understand as part of the naive informal logic we use in the everyday world. We don't often use the term "implication" in informal discussion. Instead of saying A ⇒ B is an implication or that "proposition A implies proposition B," we say "A implies B" or we might say "if A then B." Usually when we say that A implies B or B follows from A, we have some reason that we think we know this. If someone says "if it's snowing the temperature must be below freezing," they most likely know a reason for this, perhaps a bit of physics or something they have always experienced. So some implications are the result of piecing together bits of knowledge that constitute evidence for why we believe the proposition. As we are learning basic arithmetic, we tend to memorize some atomic justifications for simple facts, e.g. -5 × -5 is 25, remembering that "a minus times a minus is a plus" or some such rule.

What is the reason that we know A ⇒ A in general and ⊥ ⇒ ⊥ in particular? To answer this we need to know what implication means. What does A ⇒ B mean in general? To know A ⇒ B, we need to reason from the assumption of A to the conclusion B, or from the antecedent to the consequent. Thinking in terms of evidence we see intuitively and naively that we know the implication holds if we have a method of transforming evidence for the assumption A into evidence for the conclusion B. So in general, we know A ⇒ B at least when we have a method that converts hypothetical evidence x for A into evidence b(x) for B. We actually need to know the method and be able to apply it. So let's give that transformation a name. We call it an operation, f, that transforms x into f (x) in such a way that we know f (x) is evidence for B as long as we are given real evidence for x, say a for A, if there is any.

We can see now why ⊥ ⇒ ⊥ is a proposition we know. We reason that no matter what specific proposition ⊥ is, if we are given hypothetical evidence x for the assumption, then the transformation that produces evidence for the conclusion is the identity transformation, f (x) = x. We have exactly the evidence needed to know the implication. We also use the functional notation from programming languages for this operation, λ(x.x).
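As a concrete illustration, here is a small Python sketch (our notation, not Nuprl's) in which evidence is modeled as ordinary values and an implication as a function transforming evidence:

```python
# Evidence for an implication is a transformation of evidence.  For
# bottom => bottom (indeed for A => A with any proposition A) it is the
# identity transformation, written lambda(x.x) in the notation above.
identity = lambda x: x

# Whatever hypothetical evidence x we are handed, we return it unchanged,
# so it is exactly the evidence needed for the conclusion.
assert identity("hypothetical evidence") == "hypothetical evidence"
```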

Note that in some presentations of propositions there is a convention that the ⇒ operator associates to the right. Nuprl follows this convention. Thus, A ⇒ B ⇒ C means A ⇒ (B ⇒ C).

Inference in the Realm of the Unknown

Mathematics and theoretical computer science are faced with many compelling open problems, some extremely famous, such as the Riemann Hypothesis in mathematics and the P = NP question in computer science; both are Millennium Problems that carry a financial reward for their solution. Investigations of an open problem, say OP, have the characteristic that we try to relate it to other problems and thus investigate propositions of the form OP ⇒ Q or Q ⇒ OP. In these investigations we do not have evidence for OP; nevertheless we understand what the evidence for OP would be like, assume that we have some hypothetical evidence, and draw conclusions from it.

Therefore it is common in many areas of science that we make inferences from hypotheses when we have no evidence for them. Twentieth century logic has taught us that there will always be propositions of compelling interest to us for which we have no evidence and classes of problems which we know are unsolvable and thus an unbounded realm of the unknown.

Example: Goldbach Conjectures

Let us consider one very simple example of how we reason from hypotheses for which we do not have evidence. We will look at the famous Goldbach conjectures formulated by Goldbach in 1742 in a letter to Euler. In that letter Goldbach conjectured these two simple arithmetical propositions. To state them we let Even be the type of even natural numbers, {2, 4, 6, 8, ...}, and Odd be the type of odd natural numbers, {1, 3, 5, 7, ...}. Let Prime be the type of all prime numbers, {2, 3, 5, 7, 11, ...}. We let OddPrime be the obvious type, dropping 2 from the type Prime. Now we can state the two Goldbach conjectures:

GBach-1: ∀n:Even. (n ≥ 6) ⇒ ∃p1,p2:OddPrime. (n = p1 + p2)

GBach-2: ∀n:Odd. (n ≥ 9) ⇒ ∃p1,p2,p3:OddPrime. (n = p1 + p2 + p3)

The logical operator ∃p1,p2:OddPrime is the symbolic way of saying that we can effectively find two odd prime numbers. We call the symbol ∃ the existential quantifier, and we will examine rules for it in due course. We can now prove this simple theorem relating the conjectures even though we do not have evidence for either one of them.
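Although we cannot prove GBach-1, we can test it by brute force on small cases. The following Python sketch (our own illustration; the helper names are ours) searches for the witnesses p1, p2 that the existential quantifier promises:

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def gbach1_witness(n):
    """For even n >= 6, search for odd primes p1, p2 with n = p1 + p2."""
    for p1 in range(3, n, 2):
        if is_prime(p1) and (n - p1) % 2 and is_prime(n - p1):
            return p1, n - p1
    return None

# The conjecture holds on every even number we try in this small range.
assert all(gbach1_witness(n) for n in range(6, 500, 2))
```

Of course such a check produces evidence only for the instances tried; evidence for GBach-1 itself would be a method that works for every even n ≥ 6.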

Fact: GBach-1 ⇒ GBach-2

The informal proof is that we take any odd number n ≥ 9 and subtract 3 from it. We know that (n - 3) is even and (n - 3) ≥ 6. Thus by GBach-1, we can find two odd prime numbers, say p1 and p2, such that (n - 3) = p1 + p2. Therefore, n = 3 + p1 + p2, establishing GBach-2.
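The proof is itself an operation in the sense of the previous section: it transforms any evidence for GBach-1 into evidence for GBach-2. A Python sketch of that transformation, using a brute-force witness-finder to stand in for the hypothetical GBach-1 evidence (all names here are our own):

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def gb1(n):
    """Stand-in evidence for GBach-1: odd primes p1, p2 with n = p1 + p2."""
    for p1 in range(3, n, 2):
        if is_prime(p1) and (n - p1) % 2 and is_prime(n - p1):
            return p1, n - p1

def gb2_from_gb1(gb1):
    """Evidence for GBach-1 => GBach-2, following the informal proof."""
    def gb2(n):              # n odd, n >= 9
        p1, p2 = gb1(n - 3)  # n - 3 is even and >= 6, so GBach-1 applies
        return 3, p1, p2     # and n = 3 + p1 + p2
    return gb2

gb2 = gb2_from_gb1(gb1)
assert sum(gb2(27)) == 27 and all(p % 2 for p in gb2(27))
```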

This simple little proof relates two unknown propositions by an operational implication. We can see the effective operation that finds the three needed primes using GBach-1. In the exercises, we will examine other cases where we can build evidence for a proposition under the assumption that we have evidence for another one; this is the informal intuitive meaning of implication used in everyday life, in law, logic, mathematics, computer science, etc.

Proofs and the Deductive Method

To understand a proposition is to know what counts as evidence for it. Once we understand a proposition, the most direct approach to finding operational evidence to validate it is to assemble all the hypothetical evidence from the nested implications and work forward to build evidence for the conclusion. We work forward by applying operations that occur among the assumptions to the assumed data, e.g., when we see an assumption such as xi : A ⇒ B and an assumption such as xj : A, we apply xi to xj to obtain evidence for B.

When this process is combinatorially complex because of the structure of the proposition, we benefit by structuring it around simple operations that systematically decompose hypothetical evidence and rules that systematically construct new evidence from these assumptions, and we try to apply them in an efficient way. This leads us quite naturally to the kind of inference rules that have shown themselves to be intuitive and which have been implemented in proof assistants.

We will illustrate this approach by providing a very simple proof system for PC[⊥,⇒] based around three rules for analyzing propositions and assembling the bits of evidence into sufficient evidence for the proposition we are trying to know. To motivate the rules let's look at how we find evidence for this proposition:

(A ⇒ B) ⇒ (B ⇒ C) ⇒ (A ⇒ C)

We see that our task is to assume (A ⇒ B), so let's give a name to the operation we are allowed to assume, call it f, the operation that takes evidence for A into evidence for B. We need to use that operation to find evidence for (B ⇒ C) ⇒ (A ⇒ C). To build that evidence, we are able to assume we have access to an operation g : (B ⇒ C), and now we need to find evidence for (A ⇒ C). To build this evidence, we also have access to evidence for A, call it x.

Now we can see that the goal is to build evidence for C using all of the assumptions. We can easily see that f (x) is evidence for B. To find evidence for C, we use g as follows: g (f (x)).
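In Python this assembly of evidence reads directly as function composition (a sketch; the concrete numeric functions are merely placeholders for evidence transformers):

```python
# Evidence for (A => B) => (B => C) => (A => C): given f : A => B and
# g : B => C, return the operation taking evidence x for A to g(f(x)).
compose = lambda f: lambda g: lambda x: g(f(x))

# Placeholder evidence transformers on a toy instance A = B = C = int.
f = lambda x: x + 1
g = lambda y: y * 2
assert compose(f)(g)(3) == g(f(3)) == 8
```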

We can see a systematic method to this process. We decompose the hypotheses until we have names for all of them, and then we try to assemble the parts into evidence for the conclusion. The sequent style proof rules we present below are designed around this simple idea. We need only three basic rules to organize the entire process.

We organize the proof system in the style of Gentzen sequents [Gen69] presented in the top down style of refinement proofs and tactic style proofs [GMW79, BC85]. A sequent has the form (x1 : A1), … , (xn : An) ⊢ G where the xi are labels of the assumptions Ai and G is the goal. In a precise sense a sequent is a generalized operation with labeled inputs xi which are the hypothetical evidence of the assumptions Ai and the output is an evidence term g (x1, … , xn) for the proposition G. The job of the prover (person or machine) is to assemble the evidence g (x1, … , xn) so that the complete sequent presents a function in the form (x1 : A1), … , (xn : An) ⊢ G by g (x1, … , xn). The inference rules below provide the operators from which g is built. In Nuprl, these rules are applied using tactics, which are named methods of solving problems.

  1. Hypothesis Rule:

    (x1 : A1), … , (xn : An) ⊢ Ai by xi

    This is the simplest inference rule. It is an axiom that says that if we have assumed Ai then we can claim that we have hypothetical evidence for it, given by its label as a hypothesis, and we can cite that label as a justification for the same formula as a conclusion. As an operation, the rule can be seen as simply selecting one input as the output of the function. Such a function is very simple. We sometimes call them projection functions, as in the notation fi (x1,…, xn) = xi. In Nuprl, this rule is applied using the tactic NthHyp i, where i is the index of the hypothesis that matches the goal.

  2. Construction Rule:

    H ⊢ A ⇒ B by λ(x. ___ )

    H, (x : A) ⊢ B by ___

    This rule tells us how to build a function that is evidence for A ⇒ B from a list of hypotheses H. As explained above, we can prove A ⇒ B if we can produce evidence for B assuming hypothetical evidence for A. The format of this rule shows that when we are constructing a proof, we are proceeding top down so that we don't necessarily have the entire body of the function in the goal when we start. Instead we think of filling in details of the function as we attempt to prove the subgoals. So the rule has an unspecified element that we are waiting to fill in as the proof progresses. When the proof is finished this element will be completed on a "bottom up" pass through the completed proof tree using evidence terms in place of the empty slots. The unspecified element in this case would be replaced by some evidence term b(x) which provides evidence for B. The completed construction rule takes the following form:

    H ⊢ A ⇒ B by λ(x. b(x))

    H, (x : A) ⊢ B by b(x)

    In Nuprl, this rule is applied using the tactic D 0, to decompose the conclusion, which is associated with index 0.

  3. Application Rule:

    H, f : (A ⇒ B) ⊢ G by ___

    H, f : (A ⇒ B) ⊢ A by ___
    H, f : (A ⇒ B), (b : B) ⊢ G by ___

    The final rule shows how to apply hypothetical operations. In order to apply the operation f to prove the goal G, we must prove two subgoals: (1) that we have evidence for A, and (2) that we can provide evidence for G assuming evidence for B.

    We find an evidence term a as evidence for A, and then evidence b for B comes from applying f to this evidence for A. We then find an evidence term g(b) as evidence for G. The completed application rule takes the following form:

    H, f : (A ⇒ B) ⊢ G by g (f (a))

    H, f : (A ⇒ B) ⊢ A by a
    H, f : (A ⇒ B), (b : B) ⊢ G by g(b).

    Function application can be displayed several ways. For these notes, the notations f (x), (f x), and ap(f; x) all represent the application of a function f to an argument x. In Nuprl proofs we apply operations using the tactic D i, where i is the index of the hypothesis containing the operation.

We see that proofs are constructed top down by either decomposing the goal formula G or by applying a hypothesis f : A B to some evidence term for A that is built in the proof process. The construction process produces a proof tree. We can see that the proof generation process can be made finite. We also see that invoking the hypothesis rule closes a branch in the proof tree.
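The three rules are in fact enough to mechanize this search. The following Python sketch (our own toy implementation, not Nuprl's tactic mechanism) represents a proposition as an atom or a tuple ('imp', A, B), tries the hypothesis, construction, and application rules in turn under a depth bound, and returns a λ-style evidence term when it succeeds:

```python
def prove(hyps, goal, depth=8):
    """Search for evidence for the sequent hyps |- goal.
    hyps is a list of (evidence_term, proposition) pairs."""
    if depth == 0:
        return None
    # 1. Hypothesis rule: cite an assumption identical to the goal.
    for term, prop in hyps:
        if prop == goal:
            return term
    # 2. Construction rule: to prove A => B, assume A and prove B.
    if isinstance(goal, tuple):
        _, a, b = goal
        x = 'x%d' % len(hyps)                    # fresh label
        body = prove(hyps + [(x, a)], b, depth - 1)
        if body is not None:
            return ('lam', x, body)
    # 3. Application rule: apply a hypothesis f : A => B to evidence for A,
    #    then prove the goal with the new evidence for B available.
    for f, prop in hyps:
        if isinstance(prop, tuple):
            _, a, b = prop
            arg = prove(hyps, a, depth - 1)
            if arg is not None:
                term = prove(hyps + [(('app', f, arg), b)], goal, depth - 1)
                if term is not None:
                    return term
    return None

# The worked example above: (A => B) => (B => C) => (A => C).
goal = ('imp', ('imp', 'A', 'B'),
               ('imp', ('imp', 'B', 'C'), ('imp', 'A', 'C')))
assert prove([], goal) == ('lam', 'x0', ('lam', 'x1',
             ('lam', 'x2', ('app', 'x1', ('app', 'x0', 'x2')))))
```

The returned term is exactly the evidence λ(f. λ(g. λ(x. g(f(x))))) assembled in the earlier discussion, with machine-chosen labels in place of f, g, x.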


On the next few pages we show how Nuprl applies these rules in proving the following theorems:

We leave the following theorem as an exercise:
