When we investigate logic, we face an immediate "logical issue": we must use logic to study any subject, including logic itself. So how do we start?
One solution is to bootstrap: everyday naive logic, together with basic knowledge of elementary computing, first gives us a clearer sense of the naive logical primitives we normally use in everyday discourse. This new clarity is based on the intuitive computational meaning of the logical operators. Building on this first step, we can achieve a more precise understanding of extensions of the simplest logics, suited to reasoning that requires finer distinctions about computation.
A computational understanding of logic has allowed computer scientists to build proof-constructing programs and to assemble software systems called proof assistants that execute the computations that logic requires. Those assistants in turn help people use formal logic to solve important mathematical and computational problems. Some of these problems concern logic itself.
A similar bootstrapping mechanism enables high-level programming languages to execute on low-level hardware. We use this approach to impart knowledge of logic and mathematics to proof assistants. This is an autocatalytic process, one that steadily improves itself. In these logical investigations we will use the bootstrapping process to achieve a technical understanding of logical systems and learn what it means to implement them and use them to formalize and solve mathematical and computational problems.
Propositions and Evidence
The central objects of logic are propositions, or equivalently questions or problems. We are fundamentally interested in whether we can solve problems, answer questions, and find evidence for propositions; these are essentially equivalent tasks. To a first approximation, these tasks require precise declarative thoughts. In ordinary discourse they are the kinds of thoughts that constitute assertions, statements, or claims, or that are used to pose questions.
For example, 1 = 1 is a very simple declarative thought about the natural number 1. We can ask the question "does 1 = 1?". This question is so primitive that we don't need to justify our answer once we see it. We call it a basic fact or a trivial question because we can immediately and intuitively see the answer once we understand the meaning of equality. Indeed we know from the very meaning of equality that every mathematical object is equal to itself, so the question has a trivial answer: yes, by the definition of equality.
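As a small illustration, in a proof assistant this trivial evidence is an explicit object. A sketch in Lean 4 syntax (one possible system; these notes do not fix a particular assistant): the evidence for 1 = 1 is the reflexivity witness.

```lean
-- The term `rfl` witnesses that any expression equals itself.
example : 1 = 1 := rfl

-- The same witness shows that every natural number is equal to itself.
example (n : Nat) : n = n := rfl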
We will avoid saying that a proposition is true, even for 1 = 1. We avoid speaking about "truth" because we are primarily concerned with the reasons we believe a proposition or the grounds we have for asserting it or the solutions we accept to a problem. These notions are all based on intuitive mental acts and our natural ability to experience them as solutions, reasons or evidence. When we understand a proposition and experience evidence for it, we say that the proposition is true based on this evidence.
Prior to "recognizing the evidence," "seeing the solution," "experiencing the evidence," or "finding the evidence," we need to understand the meaning of the proposition; that is, we must know its sense. To understand an idea, or a written expression of an idea such as 1 = 1 or any thought A, we first need to know its meaning or sense. For example, we probably don't know the sense of (0/0)! < 1, so we would not recognize evidence for asserting it, be able to produce evidence for believing it, or answer the question (0/0)! < 1?. We call a thought, an attempted expression of a thought, or a formulation of a problem well formed if we can explain its sense. Such a thought, or a written expression of it, is called a proposition, a meaningful question, or a well defined problem.
Once we know that A is a proposition, we are in a position to seek evidence for believing it or asserting it. For some propositions, questions or problems, such as 0 = 1, we know that there is no solution or there cannot be evidence by the very sense of the thought. It is absurd that 0 = 1 for two natural numbers 0 and 1 by the very meaning of these numbers and the sense of equality. These very basic propositions, 0 = 0 and 0 = 1, suggest that we might want to have two atomic thoughts that are the most general examples of trivial propositions. So we introduce the proposition True to mean the proposition that has evidence by its very nature, trivial evidence. So we pick some symbolic representation as trivial evidence, say ★ or yes or obviously.
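In a proof assistant the trivial evidence for True is likewise a named token. A sketch in Lean 4 syntax (where the canonical witness is called True.intro, playing the role of the ★ above):

```lean
-- `True` has exactly one piece of evidence, the constructor `True.intro`.
example : True := True.intro
```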
In a similar spirit of reducing our explanations to something very elementary, we can introduce False and say that this is a proposition that by definition has no correct or meaningful evidence. We could also say that any hypothetical evidence for False in a context of assumptions is a mistake. In this case we imagine that some hypothetical combination of ideas that we thought provided evidence for some other proposition A is wrong or mistaken and could be used to show False, in the sense of revealing the mistake. For example, if someone claimed arithmetical evidence for 0 = 1, we would expect to be able to analyze that evidence and find a mistake in it, and that mistake, once revealed, is evidence for False.
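The two roles of False described above can be expressed directly in a proof assistant. A sketch in Lean 4 syntax (one possible system): there is no way to construct evidence for False, and hypothetical evidence for it yields evidence for anything, which is how a mistake is exposed.

```lean
-- `False` has no constructors, so no closed evidence term exists for it.
-- From hypothetical evidence `h : False` we may conclude any proposition A.
example (A : Prop) (h : False) : A := False.elim h

-- A claimed proof of 0 = 1 can be refuted:
-- `¬ (0 = 1)` abbreviates `0 = 1 → False`.
example : ¬ (0 = 1) := by decide
```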
A sufficiently interesting mathematical proposition is called a theorem. Very important theorems have names such as the Pythagorean Theorem or the Fundamental Theorem of Arithmetic or the Fundamental Theorem of Algebra. Less interesting ones may simply be called facts, e.g. it is a fact that 0 < 1 or even more basically that 0 = 0.
To know a proposition does not necessarily mean that we have a proof of it in the strict formal logical sense. A proof in the strict sense requires that we have specified the axioms and rules of inference that allow us to prove the proposition from the axioms.
Over the past three decades mathematicians and computer scientists have developed the concept of a publicly implemented formal proof [Con09]. This means that the proof is built by a person and a proof assistant working together, in a logical system that is open to public scrutiny and available for general use.1 Moreover, the proof assistant is explained in detail in a user's manual and in a widely available expository book. It also means that the formal proofs can be checked by independent programs written by interested parties other than those who built and maintain the proof assistant. Furthermore, it means that the formal proofs can be precisely related to informal proofs and to precise evidence assembled by other means, e.g. by a computer program that can execute the evidence terms.
In these notes we reserve the term formal proof of a proposition for machine-checked (or machine-generated) proofs in specific, completely defined systems of formal logic. We use the term informal proof for arguments that people believe could be made formal in some logic that they could specify. We will see that the idea of computational evidence can also be made quite precise, using concepts from constructive logic and computing theory, without translating it into a specific completely formal logic. Computational evidence can be given in the form of a program and its data types. We can know enough to believe an assertion based on very precise computational evidence without knowing exactly how to formally prove it. Indeed, we expect an interesting activity of translating such evidence terms into proofs in a particular formal logic, perhaps into several of them.
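As a sketch of evidence in the form of a program (again in Lean 4 syntax, chosen here only for concreteness): the computational content of "every natural number has a larger one" is a program that produces the witness, and its type records the data involved.

```lean
-- The witness-producing program; its type `Nat → Nat` records its data.
def larger (n : Nat) : Nat := n + 1

-- Executing the evidence term on sample data.
#eval larger 5  -- 6
```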
This existential circumstance, that we can know enough about a proposition to assert it and defend our knowledge claim with very precise computational evidence before we can turn that evidence into a precise proof, is a novel aspect of modern mathematical knowledge. Perhaps the major "existential feature" of modern mathematical knowledge is that we now have proof assistants that can help us turn computational evidence into computer-checked formal proofs. The modern standard of proof has thus created a new category of knowledge, formal knowledge, and a new way of integrating it into our informal knowledge.
Outline of the Notes
The typical modern account of logic in a textbook or lecture series starts with the propositional calculus, PC. This calculus is concerned with a few simple ways of combining atomic propositions using the propositional connectives of and, or, implies, and not. Accounts first define the formulas of the logic, then explain the meaning of the operators and the notion of evidence required for knowing propositions built from them. We start with an especially simple propositional logic that uses only the implication operator, denoted ⇒. The symbol ⊃ is used for implication in some logic books.
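Under the computational reading, evidence for an implication A ⇒ B is a method (a function) transforming evidence for A into evidence for B. A sketch in Lean 4 syntax (where implication between propositions is written →):

```lean
-- The identity function is evidence for A ⇒ A.
example (A : Prop) : A → A := fun a => a

-- Composing two methods is evidence for (A ⇒ B) ⇒ (B ⇒ C) ⇒ (A ⇒ C).
example (A B C : Prop) : (A → B) → (B → C) → (A → C) :=
  fun f g a => g (f a)
```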
The topic usually covered next is First-Order Logic (FOL), which deals with two additional logical operators called quantifiers. These quantifiers are all and exists, as in saying "for all natural numbers there exists a larger one"; symbolically we write ∀n:ℕ. ∃m:ℕ. n < m.
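This example formula can be given explicit computational evidence: a function that, for each n, supplies the witness n + 1 together with evidence that n < n + 1. A sketch in Lean 4 syntax (Nat.lt_succ_self is the library lemma stating n < n + 1):

```lean
-- Evidence for ∀n:ℕ. ∃m:ℕ. n < m is a function returning a pair:
-- a witness m and evidence that n < m.
example : ∀ n : Nat, ∃ m : Nat, n < m :=
  fun n => ⟨n + 1, Nat.lt_succ_self n⟩
```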
1 Proof assistants are also called interactive theorem provers or provers for short.
Table of Contents