Hilbert System Logic
The Hilbert proof systems are systems based on a language with implication
and contain a Modus Ponens rule as a rule of inference. They are usually called
Hilbert style formalizations. We will call them here Hilbert style proof systems,
or Hilbert systems, for short.
Modus Ponens is probably the oldest of all known rules of inference, as it was
already known to the Stoics (3rd century B.C.). It is also considered the most
natural to our intuitive thinking, and proof systems containing it as the inference
rule play a special role in logic. The Hilbert proof systems put major emphasis
on logical axioms, keeping the rules of inference to a minimum; in the
propositional case they often admit Modus Ponens as the sole inference rule.
There are many proof systems that describe classical propositional logic, i.e.
that are complete proof systems with respect to the classical semantics.
We present here, after Elliott Mendelson’s book Introduction to Mathematical
Logic (1987), a Hilbert proof system for the classical propositional logic and
discuss two ways of proving the Completeness Theorem for it.
Any proof of the Completeness Theorem always consists of two parts. First we
have to show that all formulas that have a proof are tautologies. This implication
is also called a Soundness Theorem, or the soundness part of the Completeness
Theorem. The second implication says: if a formula is a tautology, then it has a
proof. This alone is sometimes called a Completeness Theorem (on the assumption
that the system is sound). Traditionally it is called the completeness part of the
Completeness Theorem.
The proof of the soundness part is standard. We concentrate here on the com-
pleteness part of the Completeness Theorem and present two proofs of it.
The first proof is based on the one presented in Mendelson's book Introduction
to Mathematical Logic (1987). It is a straightforward constructive proof
that shows how one can use the assumption that a formula A is a tautology
in order to construct its formal proof. It is hence called a proof-construction
method. It is a beautiful proof.
The second proof is non-constructive. Its strength and importance lie in the fact
that the methods it uses can be applied to the proof of completeness for classical
predicate logic. We will discuss and apply them in Chapter ??.
It proves the completeness part of the Completeness Theorem by proving its
contrapositive: it shows how one can deduce that a formula A is not a tautology
from the fact that it does not have a proof. It is hence called a counter-model
construction proof.
Both proofs of the Completeness Theorem rely on the Deduction Theorem, and
so it is the first theorem we are going to prove.
1 Deduction Theorem
We consider first a very simple Hilbert proof system based on a language with
implication as the only connective, with two logical axioms (axiom schemas)
which characterize the implication, and with Modus Ponens as a sole rule of
inference. We call it a Hilbert system H1 and define it as follows.
H1 = ( L{⇒} , F, {A1, A2}, (MP) ),    (1)
where, for any formulas A, B, C ∈ F of L{⇒} ,
A1   (A ⇒ (B ⇒ A)),
A2   ((A ⇒ (B ⇒ C)) ⇒ ((A ⇒ B) ⇒ (A ⇒ C))),
and (MP) is the Modus Ponens rule
(MP)   A ; (A ⇒ B)  /  B .
As an example of a formal proof in H1 we show that, for any A ∈ F, the formula
(A ⇒ A) is provable in H1 . Its formal proof is a sequence
B1 , B2 , B3 , B4 , B5    (2)
as defined below.
B1 = ((A ⇒ ((A ⇒ A) ⇒ A)) ⇒ ((A ⇒ (A ⇒ A)) ⇒ (A ⇒ A))),
axiom A2 for A = A, B = (A ⇒ A), and C = A
B2 = (A ⇒ ((A ⇒ A) ⇒ A)),
axiom A1 for A = A, B = (A ⇒ A)
B3 = ((A ⇒ (A ⇒ A)) ⇒ (A ⇒ A)),
MP application to B1 and B2
B4 = (A ⇒ (A ⇒ A)),
axiom A1 for A = A, B = A
B5 = (A ⇒ A)
MP application to B3 and B4
Fact 1
For any A ∈ F,
⊢H1 (A ⇒ A),
and the sequence (2) constitutes its formal proof.
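For readers who wish to experiment, the checking of such a proof can be carried out mechanically. The following small Python sketch (the tuple representation of formulas, the schema-matching routine and all names are illustrative assumptions, not part of the text) verifies that the sequence (2) is a correct formal proof in H1: every step must be an instance of A1 or A2, or follow from two earlier steps by Modus Ponens.

# Minimal proof checker for H1 (illustrative sketch): a propositional variable is a
# string, an implication (X => Y) is the tuple ('=>', X, Y).
A1 = ('=>', 'P', ('=>', 'Q', 'P'))                               # schema (A => (B => A))
A2 = ('=>', ('=>', 'P', ('=>', 'Q', 'R')),
      ('=>', ('=>', 'P', 'Q'), ('=>', 'P', 'R')))                # schema A2

def match(schema, formula, subst):
    # Try to bind the schema metavariables (strings) so that schema equals formula.
    if isinstance(schema, str):
        if schema in subst:
            return subst[schema] == formula
        subst[schema] = formula
        return True
    return (isinstance(formula, tuple) and formula[0] == schema[0]
            and match(schema[1], formula[1], subst)
            and match(schema[2], formula[2], subst))

def is_axiom(f):
    return match(A1, f, {}) or match(A2, f, {})

def check_proof(lines):
    # Each line must be an axiom instance or follow by MP from two earlier lines.
    for i, f in enumerate(lines):
        by_mp = any(g == ('=>', h, f) for g in lines[:i] for h in lines[:i])
        if not (is_axiom(f) or by_mp):
            return False
    return True

imp = lambda x, y: ('=>', x, y)
A = 'A'
B1 = imp(imp(A, imp(imp(A, A), A)), imp(imp(A, imp(A, A)), imp(A, A)))
B2 = imp(A, imp(imp(A, A), A))
B3 = imp(imp(A, imp(A, A)), imp(A, A))
B4 = imp(A, imp(A, A))
B5 = imp(A, A)
print(check_proof([B1, B2, B3, B4, B5]))    # expected output: True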
It is easy to see that the above proof was not constructed automatically. The
main step in its construction was the choice of a proper form (substitution) of
the logical axioms to start with, and to continue the proof with. This choice is far
from obvious for an inexperienced prover and impossible for a machine, as the
number of possible substitutions is infinite.
Observe that the systems S1 − S4 from the previous Chapter 4 had inference
rules such that it was possible to "reverse" their use, i.e. to use them in the reverse
manner in order to search for proofs, and we were able to do so in a blind,
fully automatic way. We were able to conduct an argument of the type: if
this formula has a proof, the only way to construct it is from such and such
formulas by means of one of the inference rules, and those formulas can be
found automatically. We called proof systems with such a property syntactically
decidable and defined them formally there.
We will argue now that one cannot apply the above argument to the proof search
in Hilbert proof systems, as they contain Modus Ponens as an inference
rule.
A general procedure for searching for proofs in a proof system S can be stated
as follows. Given an expression B of the system S, if it has a proof, it
must be the conclusion of some inference rule; let's say it is a rule r. We find its
premisses, with B being the conclusion, i.e. we evaluate r−1 (B). If all premisses
are axioms, the proof is found. Otherwise we repeat the procedure for every non-
axiom premiss.
A search for proofs in Hilbert systems must involve Modus Ponens. The rule
says: given two formulas A and (A ⇒ B), we can conclude a formula B. Assume
now that we have a formula B and want to find its proof. If it is an axiom, we
have the proof: the formula itself. If it is not an axiom, it had to be obtained
by an application of the Modus Ponens rule to certain two formulas A and
(A ⇒ B). But there are infinitely many such formulas A and (A ⇒ B), i.e. for any
B, the inverse image of B under the rule MP, MP−1 (B), is countably infinite.
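The following tiny Python sketch (the tuple representation of formulas is an illustrative assumption, not part of the text) makes the point concrete: for a fixed goal formula B it keeps producing new, distinct premise pairs (A, (A ⇒ B)) from which B would follow by Modus Ponens, so a backward search cannot exhaust MP−1(B).

def imp(x, y):
    return ('=>', x, y)          # implication as a nested tuple

def premise_pairs(b):
    # Generate (A, (A => B)) for A = B, (B => B), ((B => B) => B), ... without end.
    a = b
    while True:
        yield a, imp(a, b)
        a = imp(a, b)            # each iteration yields a new, larger candidate premise

gen = premise_pairs('B')
for _, pair in zip(range(3), gen):
    print(pair)                  # three of the infinitely many members of MP^{-1}(B)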
Obviously, we have the following.
Fact 2
No Hilbert proof system is syntactically decidable; in particular, the system
H1 is not syntactically decidable.
Theorem 1 (Soundness of H1 )
For any A ∈ F of H1 , if ⊢H1 A, then |= A.
Fact 3
The proof system H1 is sound, but not complete under the classical semantics.
Definition 2 (Proof from Hypotheses)
Given a proof system S = (L, E, LA, R), let Γ be any set of expressions of
S, i.e. let Γ ⊆ E.
A proof of an expression E ∈ E from the set Γ of expressions is a sequence
E1 , E2 , . . . , En
of expressions of S such that En = E and, for each 1 ≤ i ≤ n, either Ei ∈ LA ∪ Γ
or Ei is a direct consequence of some of the preceding expressions by virtue of one
of the rules of inference from R. We write Γ ⊢S E to denote that E has a proof
from Γ.
When the set of hypotheses Γ is a finite set and Γ = {B1 , B2 , ..., Bn }, then we
write
B1 , B2 , ..., Bn ⊢S E
instead of {B1 , B2 , ..., Bn } ⊢S E. The case when Γ is an empty set, i.e. when
Γ = ∅, is a special one. By the definition of a proof of E from Γ, ∅ ⊢S E means
that in the proof of E only logical axioms LA of S were used. We hence write,
as we did before,
⊢S E
to denote that E has a proof from the empty set Γ.
Definition 3 (Consequence in S)
Given a proof system S = (L, F, LA, R) and a set Γ ⊆ F. Any formula A ∈ F
provable from Γ, i.e. such that
Γ ⊢S A,
is called a consequence of Γ in S.
The following are simple, but very important properties of the notion of consequence.
Fact 4 (Consequence Properties)
Given a proof system S = (L, F, LA, R). For any sets Γ, ∆ ⊆ F the following
holds.
1. If Γ ⊆ ∆ and Γ ⊢S A, then ∆ ⊢S A. (monotonicity)
2. Γ ⊢S A if and only if there is a finite subset Γ0 of Γ such that Γ0 ⊢S A. (finiteness)
3. If ∆ ⊢S A and, for each B ∈ ∆, Γ ⊢S B, then Γ ⊢S A. (transitivity)
Proof
The properties follow directly from Definition 2 and their proofs are left to
the reader as an exercise.
The monotonicity property represents the fact that if a formula A is provable
from a set Γ of premisses (hypotheses), then it remains provable if we add still
more premisses.
Exercise 1
Construct a proof in H1 of a formula (A ⇒ C) from the set of hypotheses
Γ = {(A ⇒ B), (B ⇒ C)}, i.e. show that
(A ⇒ B), (B ⇒ C) ⊢H1 (A ⇒ C).
Solution
The required formal proof is a sequence
B1 , B2 , ..., B7    (4)
such that
B1 = (B ⇒ C),
hypothesis
B2 = (A ⇒ B),
hypothesis
B3 = ((B ⇒ C) ⇒ (A ⇒ (B ⇒ C))),
axiom A1 for A = (B ⇒ C), B = A
B4 = (A ⇒ (B ⇒ C))
B1 , B3 and MP
B5 = ((A ⇒ (B ⇒ C)) ⇒ ((A ⇒ B) ⇒ (A ⇒ C))),
axiom A2
B6 = ((A ⇒ B) ⇒ (A ⇒ C)),
B5 and B4 and MP
B7 = (A ⇒ C).
B2 and B6 and MP
Exercise 2
Show, by constructing a formal proof, that A ⊢H1 (A ⇒ A).
Solution
The required formal proof is a sequence
B1 , B2 , B3 (5)
such that
B1 = A,
hypothesis
B2 = (A ⇒ (A ⇒ A)),
axiom A1 for B = A,
B3 = (A ⇒ A)
B1 , B2 and MP.
We can further simplify the task of constructing formal proofs in H1 by the use
of the following Deduction Theorem.
In mathematical arguments, one often arrives at a statement B on the assumption
(hypothesis) of some other statement A and then concludes that we have proved
the implication "if A, then B". This reasoning is justified by the following
theorem, called a Deduction Theorem. It was first formulated and proved for a
certain Hilbert proof system S for the classical propositional logic by Herbrand
in 1930, in the form stated below.
Theorem 2 (Herbrand, 1930)
For any formulas A, B ∈ F, if A ⊢S B, then ⊢S (A ⇒ B).
We are going to prove now that our system H1 is strong enough to prove
the Herbrand Deduction Theorem for it. In fact, we formulate and prove a more
general version of Theorem 2.
To formulate it we introduce the following notation. We write Γ, A ⊢S B for Γ ∪
{A} ⊢S B, and in general we write Γ, A1 , A2 , ..., An ⊢S B for Γ ∪ {A1 , A2 , ..., An } ⊢S B.
We are now going to prove the following.
Theorem 3 (Deduction Theorem for H1 )
For any subset Γ of the set of formulas F of H1 and for any formulas A, B ∈ F,
Γ, A ⊢H1 B if and only if Γ ⊢H1 (A ⇒ B).
In particular,
A ⊢H1 B if and only if ⊢H1 (A ⇒ B).
Proof
We use the symbol ⊢ instead of ⊢H1 , for simplicity.
Part 1
We first prove the implication:
If Γ, A ⊢ B then Γ ⊢ (A ⇒ B).
Assume that Γ, A ⊢ B, i.e. that there is a formal proof
B1 , B2 , ..., Bn    (6)
of B from the set Γ ∪ {A}. We show, by induction on i (1 ≤ i ≤ n), that
Γ ⊢ (A ⇒ Bi ); for i = n this gives Γ ⊢ (A ⇒ B).
Base case i = 1
By the definition of the proof (6), the formula B1 is either a logical axiom, or an
element of Γ, or the formula A itself. We consider two cases.
Case 1. B1 ∈ {A1, A2} ∪ Γ.
Observe that the formula (B1 ⇒ (A ⇒ B1 )) is an instance of the axiom schema A1.
Both B1 and (B1 ⇒ (A ⇒ B1 )) have proofs from {A1, A2} ∪ Γ, hence we get the
required proof of (A ⇒ B1 ) from Γ by the following application of the Modus
Ponens rule
(MP)   B1 ; (B1 ⇒ (A ⇒ B1 ))  /  (A ⇒ B1 ) .
Case 2. B1 = A.
When B1 = A, then to prove Γ ⊢ (A ⇒ B1 ) means to prove Γ ⊢ (A ⇒ A). This
holds by the monotonicity of the consequence in H1 (Fact 4) and the fact that
we have proved (Fact 1) that ⊢ (A ⇒ A). The above cases conclude the proof of
the Base case i = 1.
Inductive step
Assume that Γ ⊢ (A ⇒ Bk ) for all k < i; we will show that, using this fact, we
can conclude that also Γ ⊢ (A ⇒ Bi ).
Consider a formula Bi in the sequence (6). By the definition, Bi ∈ {A1, A2} ∪ Γ ∪
{A} or Bi follows by MP from certain Bj , Bm such that j < m < i. We have
to consider again two cases.
Case 1. Bi ∈ {A1, A2} ∪ Γ ∪ {A}.
The proof of (A ⇒ Bi ) from Γ in this case is obtained from the proof of the
Base case i = 1 by replacing B1 by Bi and is omitted here as a straightforward
repetition.
Case 2. Bi is a conclusion of MP.
If Bi is a conclusion of MP, then we must have two formulas Bj , Bm in the
sequence (6) such that j < i, m < i, j ≠ m and
(MP)   Bj ; Bm  /  Bi .
By the inductive assumption, the formulas Bj , Bm are such that
Γ ⊢ (A ⇒ Bj )    (7)
and
Γ ⊢ (A ⇒ Bm ).    (8)
Moreover, by the definition of the Modus Ponens rule, the formula Bm has to
have the form (Bj ⇒ Bi ), i.e. Bm = (Bj ⇒ Bi ), and the inductive assumption
(8) can be re-written as follows:
Γ ⊢ (A ⇒ (Bj ⇒ Bi )).    (9)
Observe that the formula
((A ⇒ (Bj ⇒ Bi )) ⇒ ((A ⇒ Bj ) ⇒ (A ⇒ Bi )))
is a substitution of the axiom schema A2 and hence has a proof in our system.
By the monotonicity of the consequence (Fact 4), it also has a proof from the set Γ,
i.e.
Γ ⊢ ((A ⇒ (Bj ⇒ Bi )) ⇒ ((A ⇒ Bj ) ⇒ (A ⇒ Bi ))).    (10)
Applying the rule MP to formulas (10) and (9), i.e. performing
(MP)   (A ⇒ (Bj ⇒ Bi )) ; ((A ⇒ (Bj ⇒ Bi )) ⇒ ((A ⇒ Bj ) ⇒ (A ⇒ Bi )))  /  ((A ⇒ Bj ) ⇒ (A ⇒ Bi )),
we get that
Γ ⊢ ((A ⇒ Bj ) ⇒ (A ⇒ Bi )).    (11)
Applying again the rule MP to formulas (7) and (11), i.e. performing
(MP)   (A ⇒ Bj ) ; ((A ⇒ Bj ) ⇒ (A ⇒ Bi ))  /  (A ⇒ Bi ),
we get that
Γ ⊢ (A ⇒ Bi ),
which ends the proof of the Inductive Step. By the mathematical induction
principle, we have hence proved that Γ ⊢ (A ⇒ Bi ) for all i such that 1 ≤ i ≤ n.
In particular it is true for i = n, i.e. for Bn = B, and we have proved that
Γ ⊢ (A ⇒ B).
This ends the proof of Part 1.
Part 2
The proof of the converse implication,
if Γ ⊢ (A ⇒ B) then Γ, A ⊢ B,
is straightforward. Assume that Γ ⊢ (A ⇒ B); hence, by the monotonicity of
the consequence (Fact 4), we have also that Γ, A ⊢ (A ⇒ B). Obviously, Γ, A ⊢ A.
Applying Modus Ponens to the above, we get a proof of B from Γ ∪ {A}, i.e.
we have proved that Γ, A ⊢ B. That ends the proof of the Deduction Theorem
for any set Γ ⊆ F and any formulas A, B ∈ F. The particular case is obtained
from the above by assuming that the set Γ is empty. This ends the proof of the
Deduction Theorem for H1 .
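The proof of Part 1 is in fact an algorithm: it transforms a given proof of B from Γ ∪ {A} into a proof of (A ⇒ B) from Γ. The Python sketch below renders that transformation (the tuple representation of formulas, the predicate in_gamma_or_axiom and all other names are my own illustrative assumptions); it emits, for every step Bi of the given proof, exactly the lines prescribed by the corresponding case of the proof above.

def imp(x, y):
    return ('=>', x, y)

def proof_of_self_implication(a):
    # The five-line proof of (a => a) from Fact 1.
    return [imp(imp(a, imp(imp(a, a), a)), imp(imp(a, imp(a, a)), imp(a, a))),
            imp(a, imp(imp(a, a), a)),
            imp(imp(a, imp(a, a)), imp(a, a)),
            imp(a, imp(a, a)),
            imp(a, a)]

def deduction(proof, a, in_gamma_or_axiom):
    # proof: the sequence B1, ..., Bn of a proof of B from Gamma u {a};
    # a: the hypothesis to be discharged;
    # in_gamma_or_axiom: predicate recognizing axiom instances and members of Gamma.
    new = []
    for i, b in enumerate(proof):
        if b == a:                                  # Case B_i = A
            new.extend(proof_of_self_implication(a))
        elif in_gamma_or_axiom(b):                  # Case B_i in {A1, A2} u Gamma
            new.extend([b,
                        imp(b, imp(a, b)),          # instance of axiom A1
                        imp(a, b)])                 # Modus Ponens
        else:                                       # Case B_i by MP from B_j, B_m = (B_j => B_i)
            bj = next(proof[j] for j in range(i)
                      for m in range(i) if proof[m] == imp(proof[j], b))
            new.extend([imp(imp(a, imp(bj, b)),
                            imp(imp(a, bj), imp(a, b))),   # instance of axiom A2
                        imp(imp(a, bj), imp(a, b)),        # MP with (a => (bj => b))
                        imp(a, b)])                        # MP with (a => bj)
    return new                                      # its last formula is (a => B)

Applied, for instance, to the three-step proof from Exercise 2, this transformation discharges the hypothesis A and ends with the formula (A ⇒ (A ⇒ A)).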
The proof of the following useful lemma provides a good example of multiple
applications of the Deduction Theorem 3.
Lemma 1
For any A, B, C ∈ F,
(a)   (A ⇒ B), (B ⇒ C) ⊢H1 (A ⇒ C),
(b)   (A ⇒ (B ⇒ C)) ⊢H1 (B ⇒ (A ⇒ C)).
Proof of (a).
The Deduction Theorem says:
(A ⇒ B), (B ⇒ C) ⊢H1 (A ⇒ C) if and only if (A ⇒ B), (B ⇒ C), A ⊢H1 C.
We construct a formal proof
B1 , B2 , B3 , B4 , B5
B1 = (A ⇒ B),
hypothesis
B2 = (B ⇒ C),
hypothesis
B3 = A,
hypothesis
B4 = B,
B1 , B3 and MP
B5 = C.
B2 , B4 and MP
Proof of (b).
By the Deduction Theorem, it suffices to show that (A ⇒ (B ⇒ C)), B ⊢H1 (A ⇒ C).
The required formal proof is a sequence
B1 , B2 , B3 , B4 , B5 , B6 , B7
defined as follows.
B1 = (A ⇒ (B ⇒ C)),
hypothesis
B2 = B,
hypothesis
B3 = (B ⇒ (A ⇒ B)),
A1 for A = B, B = A
B4 = (A ⇒ B),
B2 , B3 and MP
B5 = ((A ⇒ (B ⇒ C)) ⇒ ((A ⇒ B) ⇒ (A ⇒ C))),
axiom A2
B6 = ((A ⇒ B) ⇒ (A ⇒ C)),
B1 , B5 and MP
B7 = (A ⇒ C).
B4 , B6 and MP
Hilbert System H2
The proof system H1 is sound and strong enough to admit the Deduction Theorem,
but it is not complete, as proved in Fact 3. We define now a proof system H2
that is complete with respect to classical semantics. The proof of Completeness
Theorem for H2 is to be presented in the next section.
H2 is defined as follows.
H2 = ( L{¬,⇒} , F, {A1, A2, A3}, (MP)  A ; (A ⇒ B) / B ),    (12)
where for any formulas A, B, C ∈ F of L{¬, ⇒} we define
A1 (A ⇒ (B ⇒ A)),
A2 ((A ⇒ (B ⇒ C)) ⇒ ((A ⇒ B) ⇒ (A ⇒ C))),
A3 ((¬B ⇒ ¬A) ⇒ ((¬B ⇒ A) ⇒ B)).
Observation 1 Here are some simple, straightforward facts about the proof
system H2 .
1. The language of H2 is obtained from the language of H1 by adding the
connective ¬ to it.
2. H2 is obtained from H1 by adding to it the axiom A3, which characterizes
negation.
3. The use of axioms A1, A2 in the proof of Deduction Theorem 3 for H1 is
independent of the negation connective ¬ added to the language of H1 .
4. The proof of Deduction Theorem 3 for the system H1 can be repeated as it is
for the system H2 .
In particular,
A ⊢H2 B if and only if ⊢H2 (A ⇒ B).
Observe that for the same reason Lemma 1 holds also for H2 . It is a very
useful lemma for creating proofs in H2 , so we re-state it here.
Lemma 2
For any A, B, C ∈ F,
(a)   (A ⇒ B), (B ⇒ C) ⊢H2 (A ⇒ C),
(b)   (A ⇒ (B ⇒ C)) ⊢H2 (B ⇒ (A ⇒ C)).
We know that the axioms A1, A2 are tautologies and the Modus Ponens rule is
sound. We get by simple verification that |= A3, hence the proof system H2 is
sound and the following Soundness Theorem holds: for any A ∈ F, if ⊢H2 A, then |= A.
The soundness theorem proves that the system ”produces” only tautologies. We
show, in the next chapter, that our proof system H2 ”produces” not only tau-
tologies, but that all tautologies are provable in it. This is called a completeness
theorem for classical logic.
The proof of a completeness theorem (for a given semantics) is always a main
point in the development of any logic. There are many ways (techniques) to prove it,
depending on the proof system and on the semantics we define for it.
We present in the next sections two proofs of the completeness theorem for
our system H2 . The proofs use very different techniques, hence the reason for
presenting both of them. Both proofs rely heavily on some of the formulas
proved in the next section and stated in Lemma 3.
We present here some examples of formal proofs in H2 . There are two reasons for
presenting them. The first reason is that all formulas we prove here to be provable
play a crucial role in the proof of the Completeness Theorem for H2 , or are needed
to find formal proofs of those that do. The second reason is that they provide a
"training" ground for a reader to learn how to develop formal proofs. For this
second reason we write some proofs in full detail and we leave some others for
the reader to complete in a way explained in the following example.
We write, where needed, ⊢ instead of ⊢H2 .
Example 1
We prove that
`H2 (¬¬B ⇒ B) (13)
by constructing its formal proof B1 , . . . , B5 , B6 as follows.
B1 = ((¬B ⇒ ¬¬B) ⇒ ((¬B ⇒ ¬B) ⇒ B)),
B2 = ((¬B ⇒ ¬B) ⇒ ((¬B ⇒ ¬¬B) ⇒ B)),
B3 = (¬B ⇒ ¬B),
B4 = ((¬B ⇒ ¬¬B) ⇒ B),
B5 = (¬¬B ⇒ (¬B ⇒ ¬¬B)),
B6 = (¬¬B ⇒ B).
Exercise 3
Complete the proof B1 , . . . , B5 , B6 of (13) by providing comments explaining how each
step of the proof was obtained.
Solution
The proof of (13) with comments complementing it is as follows.
B1 = ((¬B ⇒ ¬¬B) ⇒ ((¬B ⇒ ¬B) ⇒ B)),
axiom A3 for A = ¬B, B = B
B2 = ((¬B ⇒ ¬B) ⇒ ((¬B ⇒ ¬¬B) ⇒ B)),
B1 and Lemma 2 b for A = (¬B ⇒ ¬¬B), B = (¬B ⇒ ¬B), C = B
B3 = (¬B ⇒ ¬B),
Fact 1 for A = ¬B
B4 = ((¬B ⇒ ¬¬B) ⇒ B),
B2 , B3 and MP
B5 = (¬¬B ⇒ (¬B ⇒ ¬¬B)),
axiom A1 for A = ¬¬B, B = ¬B
B6 = (¬¬B ⇒ B)
B4 , B5 and Lemma 2 a for A = ¬¬B, B = (¬B ⇒ ¬¬B), C = B.
The Lemma 2 application is:
(¬¬B ⇒ (¬B ⇒ ¬¬B)), ((¬B ⇒ ¬¬B) ⇒ B) ⊢ (¬¬B ⇒ B)
Remark 1
Observe that in In step B2 , B3 , B5 , B6 of the proof B1 , . . . , B5 , B6 we call
previously proved results and use their results as a part of our proof. We can
insert previously constructed formal proofs of the results we call upon into our
formal proof.
B3 = ((¬B ⇒ ((¬B ⇒ ¬B) ⇒ ¬B)) ⇒ ((¬B ⇒ (¬B ⇒ ¬B)) ⇒ (¬B ⇒
¬B))), (new proof of B3 inserted )
axiom A2 for A = ¬B, B = (¬B ⇒ ¬B), and C = ¬B
B4 = (¬B ⇒ ((¬B ⇒ ¬B) ⇒ ¬B)),
axiom A1 for A = ¬B, B = (¬B ⇒ ¬B)
We repeat our procedure by replacing the step B2 by its formal proof as defined
in the proof of the Lemma 1 b, and continue the process for all other steps
which involved application of Lemma 2 until we get a full formal proof from
the axioms of H2 only.
Usually we don’t need to do it, but it is important to remember that it always
can be done, if we wished to take time and space to do so.
Example 2
We prove that
`H2 (B ⇒ ¬¬B) (14)
by constructing its formal proof B1 , . . . , B5 as follows.
B1 = ((¬¬¬B ⇒ ¬B) ⇒ ((¬¬¬B ⇒ B) ⇒ ¬¬B)),
B2 = (¬¬¬B ⇒ ¬B),
B3 = ((¬¬¬B ⇒ B) ⇒ ¬¬B),
B4 = (B ⇒ (¬¬¬B ⇒ B)),
B5 = (B ⇒ ¬¬B).
Exercise 4
Complete the proof B1 , . . . , B5 of (14) by providing comments explaining how each step
of the proof was obtained.
Solution
The proof of (14) with comments complementing it is as follows.
B1 = ((¬¬¬B ⇒ ¬B) ⇒ ((¬¬¬B ⇒ B) ⇒ ¬¬B)),
axiom A3 for A = B, B = ¬¬B
B2 = (¬¬¬B ⇒ ¬B),
Example 1, i.e. (13), for B = ¬B
B3 = ((¬¬¬B ⇒ B) ⇒ ¬¬B),
B2 , B1 and MP
B4 = (B ⇒ (¬¬¬B ⇒ B)),
axiom A1 for A = B, B = ¬¬¬B
B5 = (B ⇒ ¬¬B),
B3 , B4 and Lemma 2 a for A = B, B = (¬¬¬B ⇒ B), C = ¬¬B, i.e.
(B ⇒ (¬¬¬B ⇒ B)), ((¬¬¬B ⇒ B) ⇒ ¬¬B) ⊢ (B ⇒ ¬¬B).
Example 3
We prove that
`H2 (¬A ⇒ (A ⇒ B)) (15)
by constructing its formal proof B1 , . . . , B12 as follows.
B1 = ¬A,
B2 = A,
B3 = (A ⇒ (¬B ⇒ A)),
B4 = (¬A ⇒ (¬B ⇒ ¬A)),
B5 = (¬B ⇒ A),
B6 = (¬B ⇒ ¬A),
B7 = ((¬B ⇒ ¬A) ⇒ ((¬B ⇒ A) ⇒ B)),
B8 = ((¬B ⇒ A) ⇒ B),
B9 = B,
B10 = ¬A, A ⊢ B,
B11 = ¬A ⊢ (A ⇒ B),
B12 = (¬A ⇒ (A ⇒ B)).
Example 4
We prove that
`H2 ((¬B ⇒ ¬A) ⇒ (A ⇒ B)) (16)
by constructing its formal proof B1 , . . . , B7 as follows. Here are consecutive
steps
B1 = (¬B ⇒ ¬A),
B5 = (A ⇒ B),
B6 = (¬B ⇒ ¬A) ⊢ (A ⇒ B),
B7 = ((¬B ⇒ ¬A) ⇒ (A ⇒ B)).
Example 5
We prove that
`H2 ((A ⇒ B) ⇒ (¬B ⇒ ¬A)) (17)
by constructing its formal proof B1 , . . . , B9 as follows. Here are consecutive
steps
B1 = (A ⇒ B),
B2 = (¬¬A ⇒ A),
B3 = (¬¬A ⇒ B),
B4 = (B ⇒ ¬¬B),
B5 = (¬¬A ⇒ ¬¬B),
B6 = ((¬¬A ⇒ ¬¬B) ⇒ (¬B ⇒ ¬A)),
B7 = (¬B ⇒ ¬A),
B8 = (A ⇒ B) ⊢ (¬B ⇒ ¬A),
B9 = ((A ⇒ B) ⇒ (¬B ⇒ ¬A)).
Exercise 5
Complete the proof B1 , . . . , B9 of (17) by providing comments how each step
of the proof was obtained.
Solution
The proof of (17) with comments complementing it is as follows.
B1 = (A ⇒ B),
hypothesis
B2 = (¬¬A ⇒ A),
Example 1, i.e. (13), for B = A
B3 = (¬¬A ⇒ B),
Lemma 2 a for A = ¬¬A, B = A, C = B
B4 = (B ⇒ ¬¬B),
Example 2, i.e. (14)
B5 = (¬¬A ⇒ ¬¬B),
Lemma 2 a for A = ¬¬A, B = B, C = ¬¬B
B6 = ((¬¬A ⇒ ¬¬B) ⇒ (¬B ⇒ ¬A)),
Example 4 for B = ¬A, A = ¬B
B7 = (¬B ⇒ ¬A),
B5 , B6 and MP
B8 = (A ⇒ B) ⊢ (¬B ⇒ ¬A),
B1 − B7
B9 = ((A ⇒ B) ⇒ (¬B ⇒ ¬A)).
Deduction Theorem 3
Example 6
We prove that
`H2 ((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B)) (18)
by constructing its formal proof B1 , . . . , B12 as follows. Here are consecutive
steps.
B1 = (A ⇒ B),
B2 = (¬A ⇒ B),
B3 = ((A ⇒ B) ⇒ (¬B ⇒ ¬A)),
B4 = (¬B ⇒ ¬A),
B5 = ((¬A ⇒ B) ⇒ (¬B ⇒ ¬¬A)),
B6 = (¬B ⇒ ¬¬A),
B7 = ((¬B ⇒ ¬¬A) ⇒ ((¬B ⇒ ¬A) ⇒ B)),
B8 = ((¬B ⇒ ¬A) ⇒ B),
B9 = B,
B10 = (A ⇒ B), (¬A ⇒ B) ⊢ B,
B11 = (A ⇒ B) ⊢ ((¬A ⇒ B) ⇒ B),
B12 = ((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B)).
Exercise 6
Complete the proof B1 , . . . , B12 of (18) by providing comments how each step
of the proof was obtained.
Solution
The proof of (18) with comments complementing it is as follows.
B1 = (A ⇒ B),
hypothesis
B2 = (¬A ⇒ B),
hypothesis
B3 = ((A ⇒ B) ⇒ (¬B ⇒ ¬A)),
Example 5
B4 = (¬B ⇒ ¬A),
B1 , B3 and MP
B5 = ((¬A ⇒ B) ⇒ (¬B ⇒ ¬¬A))
Example 5 for A = ¬A, B = B
B6 = (¬B ⇒ ¬¬A),
B2 , B5 and MP
B7 = ((¬B ⇒ ¬¬A) ⇒ ((¬B ⇒ ¬A) ⇒ B)),
axiom A3 for B = B, A = ¬A
B8 = ((¬B ⇒ ¬A) ⇒ B),
B6 , B7 and MP
B9 = B,
B4 , B8 and MP
B10 = (A ⇒ B), (¬A ⇒ B) ⊢H2 B,
B1 − B9
B11 = (A ⇒ B) ⊢H2 ((¬A ⇒ B) ⇒ B),
Deduction Theorem 3
B12 = ((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B)).
Deduction Theorem 3
Example 7
We prove that
`H2 ((¬A ⇒ A) ⇒ A) (19)
by constructing its formal proof B1 , B2 , B3 as follows. Here are consecutive steps.
B1 = ((A ⇒ A) ⇒ ((¬A ⇒ A) ⇒ A)),
B2 = (A ⇒ A),
B3 = ((¬A ⇒ A) ⇒ A).
Exercise 7
Complete the proof B1 , B2 , B3 of (19) by providing comments how each step
of the proof was obtained.
Solution
The proof of (19) with comments complementing it is as follows.
B1 = ((A ⇒ A) ⇒ ((¬A ⇒ A) ⇒ A)),
Example 6, i.e. (18), for A = A, B = A
B2 = (A ⇒ A),
Fact 1
B3 = ((¬A ⇒ A) ⇒ A).
B1 , B2 and MP
The above Examples 1 – 7 and Fact 1 provide a proof of the following
lemma.
Lemma 3
For any formulas A, B, C ∈ F of the system H2 the following holds.
1. ⊢H2 (A ⇒ A);
2. ⊢H2 (¬¬B ⇒ B);
3. ⊢H2 (B ⇒ ¬¬B);
4. ⊢H2 (¬A ⇒ (A ⇒ B));
5. ⊢H2 ((¬B ⇒ ¬A) ⇒ (A ⇒ B));
6. ⊢H2 ((A ⇒ B) ⇒ (¬B ⇒ ¬A));
7. ⊢H2 ((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B));
8. ⊢H2 ((¬A ⇒ A) ⇒ A).
The formulas listed in Lemma 3, together with the axioms of H2 , are the provable
formulas needed to carry out two proofs of the Completeness Theorem 6 for H2 .
These two proofs represent two very different methods of proving the Completeness
Theorem.
Proof One, the first proof of the Completeness Theorem 6 presented here, is
very elegant and simple, but is applicable only to classical propositional
logic. The methods it uses are specific to the propositional language L{¬,⇒} and
the proof system H2 . Nevertheless, it can be adapted and extended to other
classical propositional languages L{¬,∪,⇒} , L{¬,∩,∪,⇒} , L{¬,∩,∪,⇒,⇔} , and to
proof systems based on them. We do so by adding appropriate new logical
axioms to the logical axioms of H2 . Proof systems obtained in this way
are called extensions of the system H2 . It means that one can think about the
system H2 , i.e. the axiomatization given by the set {A1, A2, A3} of logical axioms
of H2 , and its language L{¬,⇒} as, in a sense, a "minimal" one for classical
propositional logic and its languages that contain implication.
Proof One, i.e. the method of carrying it out, cannot be extended to classical
predicate logic, not to mention the variety of non-classical logics. Hence we present,
in the next section, another, more general proof, called Proof Two, that can.
We prove now the completeness part of the Completeness Theorem 6 for H2 , i.e.
the implication
if |= A, then ⊢H2 A.    (20)
In order to prove (20), i.e. to prove that any tautology has a formal proof in
H2 , we need first to present one definition and prove one lemma stated below.
We write ` A instead of `H2 A, as the system H2 is fixed.
Definition 4
Let A be a formula and b1 , b2 , ..., bn be all propositional variables that occur in A.
Let v be a variable assignment v : VAR −→ {T, F }. We define, for A, b1 , b2 , ..., bn
and v, corresponding formulas A′ , B1 , B2 , ..., Bn as follows:
A′ = A if v∗ (A) = T,   and   A′ = ¬A if v∗ (A) = F;
Bi = bi if v(bi ) = T,   and   Bi = ¬bi if v(bi ) = F,
for i = 1, 2, ..., n.
Example 8
Let A be a formula
(a ⇒ ¬b) (21)
and let v be such that
v(a) = T, v(b) = F. (22)
In this case b1 = a, b2 = b, and v∗ (A) = v∗ (a ⇒ ¬b) = v(a) ⇒ ¬v(b) =
T ⇒ ¬F = T. The corresponding A′ , B1 , B2 are: A′ = A (as v∗ (A) = T ),
B1 = a (as v(a) = T ), B2 = ¬b (as v(b) = F ).
Exercise 8
Let A be a formula ((¬a ⇒ ¬b) ⇒ c) and let v be such that v(a) = T, v(b) =
F, v(c) = F.
Evaluate A′ , B1 , ..., Bn as defined by Definition 4.
Solution
In this case n = 3 and b1 = a, b2 = b, b3 = c, and v∗ (A) = v∗ ((¬a ⇒ ¬b) ⇒ c)
= ((¬v(a) ⇒ ¬v(b)) ⇒ v(c)) = ((¬T ⇒ ¬F ) ⇒ F ) = (T ⇒ F ) = F . The
corresponding A′ , B1 , B2 , B3 are: A′ = ¬A = ¬((¬a ⇒ ¬b) ⇒ c) (as v∗ (A) =
F ), B1 = a (as v(a) = T ), B2 = ¬b (as v(b) = F ), B3 = ¬c (as v(c) = F ).
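Definition 4 and the two worked examples above can also be read as a small algorithm. The following Python sketch (the tuple representation of formulas and all names are illustrative assumptions, not part of the text) computes A′ and B1 , ..., Bn for a formula of the language L{¬,⇒} and a truth assignment, and reproduces the answer to Exercise 8.

# Formulas: variables are strings, ('not', X) is negation, ('=>', X, Y) is implication;
# truth values are Python booleans.
def variables(f, seen=None):
    # Collect the propositional variables of f in order of first occurrence.
    if seen is None:
        seen = []
    if isinstance(f, str):
        if f not in seen:
            seen.append(f)
    elif f[0] == 'not':
        variables(f[1], seen)
    else:                          # ('=>', X, Y)
        variables(f[1], seen)
        variables(f[2], seen)
    return seen

def value(f, v):
    # The classical value v*(f) under the assignment v (a dict of booleans).
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not value(f[1], v)
    return (not value(f[1], v)) or value(f[2], v)

def corresponding(f, v):
    # Return (A', [B1, ..., Bn]) as in Definition 4.
    a_prime = f if value(f, v) else ('not', f)
    bs = [b if v[b] else ('not', b) for b in variables(f)]
    return a_prime, bs

# Exercise 8: A = ((not a => not b) => c), v(a) = T, v(b) = F, v(c) = F.
A = ('=>', ('=>', ('not', 'a'), ('not', 'b')), 'c')
print(corresponding(A, {'a': True, 'b': False, 'c': False}))
# A' = not A, B1 = a, B2 = not b, B3 = not c, matching the solution above.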
The lemma stated below describes a method of transforming the semantic notion
of a tautology into the syntactic notion of provability. It establishes, for any formula
A and a variable assignment v, a corresponding deducibility relation
B1 , B2 , ..., Bn ⊢ A′ .    (23)
Lemma 4 (Main Lemma)
For any formula A and any variable assignment v, if A′ , B1 , B2 , ..., Bn are the
corresponding formulas defined by Definition 4, then
B1 , B2 , ..., Bn ⊢ A′ .
Proof
We proceed by induction on the degree of A, i.e. on the number n of logical
connectives in A.
Case: n = 0
In the case n = 0 the formula A is atomic, so it consists of a single propositional
variable, say a. We have two cases to consider: v∗ (A) = T or v∗ (A) = F .
Clearly, if v∗ (A) = T , then A′ = A = a, B1 = a, and a ⊢ a holds by
Lemma 3 and the Deduction Theorem: ⊢ (a ⇒ a) holds by Lemma 3 and,
applying the Deduction Theorem, we get a ⊢ a.
If v∗ (A) = F , then A′ = ¬A = ¬a, B1 = ¬a, and ⊢ (¬a ⇒ ¬a) holds
by Lemma 3. Applying the Deduction Theorem we get ¬a ⊢ ¬a. So
the lemma holds for the case n = 0.
Now assume that the lemma holds for any A with j < n logical connectives
(any A of the degree j < n). The goal is to prove that it holds for A with the
degree n.
There are several sub-cases to deal with.
Case: A is ¬A1
If A is of the form ¬A1 , then A1 has less than n connectives and by the
inductive assumption we have the formulas A1′ , B1 , B2 , ..., Bn corresponding
to A1 and the propositional variables b1 , b2 , ..., bn in A1 , as
defined by Definition 4, such that
B1 , B2 , ..., Bn ⊢ A1′ .    (24)
Observe that the formulas A and A1 have the same propositional variables,
so the corresponding formulas B1 , B2 , ..., Bn are the same for both
of them. We are going to show that the inductive assumption (24) allows
us to prove that the lemma holds for A, i.e. that
B1 , B2 , ..., Bn ⊢ A′ .
Case: A is (A1 ⇒ A2 )
If A is of the form (A1 ⇒ A2 ), then A1 and A2 each have fewer than n connectives
and by the inductive assumption
B1 , B2 , ..., Bn ⊢ A1′   and   B1 , B2 , ..., Bn ⊢ A2′ ,
where B1 , B2 , ..., Bn are formulas corresponding to the propositional variables in A.
Now we have the following sub-cases to consider.
With that we have covered all cases and, by induction on n, the proof of the
lemma is complete.
We can now prove the completeness part (20) of the Completeness Theorem.
Assume that |= A and let b1 , b2 , ..., bn be all propositional variables occurring in A.
Let VA denote the set
VA = { v : {b1 , b2 , ..., bn } −→ {T, F } }    (27)
of all truth assignments restricted to the variables of A. By the Main Lemma 4 and
the assumption that |= A, any v ∈ VA defines formulas B1 , B2 , ..., Bn such that
B1 , B2 , ..., Bn ⊢ A.    (28)
The proof is based on a method of using all v ∈ VA to define a process of
elimination of all the hypotheses B1 , B2 , ..., Bn in (28), and so finally to construct
the proof of A in H2 , i.e. to prove that ⊢ A.
Step 1: elimination of Bn .
Observe that by definition 4, each Bi is bi or ¬bi depending on the choice
of v ∈ VA . In particular Bn = bn or Bn = ¬bn . We choose two truth
assignments v1 ≠ v2 ∈ VA such that
v1 |{b1 , ..., bn−1 } = v2 |{b1 , ..., bn−1 } (29)
and v1 (bn ) = T and v2 (bn ) = F .
Case 1: v1 (bn ) = T , hence by Definition 4, Bn = bn . By the property (29), the
assumption that |= A, and the Main Lemma 4 applied to v1 ,
B1 , B2 , ..., Bn−1 , bn ⊢ A.
By the Deduction Theorem 3 we have that
B1 , B2 , ..., Bn−1 ⊢ (bn ⇒ A).    (30)
Case 2: v2 (bn ) = F , hence by Definition 4, Bn = ¬bn . By the property (29), the
assumption that |= A, and the Main Lemma 4 applied to v2 ,
B1 , B2 , ..., Bn−1 , ¬bn ⊢ A.
By the Deduction Theorem 3 we have that
B1 , B2 , ..., Bn−1 ⊢ (¬bn ⇒ A).    (31)
Observe that the formula
((bn ⇒ A) ⇒ ((¬bn ⇒ A) ⇒ A))
is a substitution of the provable formula ((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B)) of
Lemma 3, hence by monotonicity
B1 , B2 , ..., Bn−1 ⊢ ((bn ⇒ A) ⇒ ((¬bn ⇒ A) ⇒ A)).    (32)
Applying Modus Ponens twice to the above property (32) and properties (30),
(31) we get that
B1 , B2 , ..., Bn−1 ⊢ A.    (33)
We have eliminated Bn .
Step 2: elimination of Bn−1 from (33). We repeat the Step 1.
As before we have 2 cases to consider: Bn−1 = bn−1 or Bn−1 = ¬bn−1 .
We choose two truth assignments w1 ≠ w2 ∈ VA such that
w1 |{b1 , ..., bn−2 } = w2 |{b1 , ..., bn−2 } = v1 |{b1 , ..., bn−2 } = v2 |{b1 , ..., bn−2 }
(34)
and w1 (bn−1 ) = T and w2 (bn−1 ) = F .
As before we apply Main Lemma, Deduction Theorem, monotonicity,
proper substitutions of the formula ((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B)),
and Modus Ponens twice and eliminate Bn−1 just as we eliminated Bn .
After n steps, we finally obtain that
⊢ A.
Observe that our proof of the fact that ⊢ A is a constructive one. Moreover,
we have used in it only the Main Lemma 4 and the Deduction Theorem 3, and both
of them have fully constructive proofs. So we can always reconstruct all steps
in proofs which use the Main Lemma 4 and the Deduction Theorem 3 back to the
original axioms of H2 . The same applies to the proofs that use the formulas
proved in H2 that are stated in Lemma 3.
It means that for any A ∈ F such that |= A, the set VA of all v restricted to
A provides us with a method of constructing a formal proof of A in H2 from
its axioms A1, A2, A3 only.
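The elimination process itself has a simple combinatorial skeleton: the 2^n judgments supplied by the Main Lemma are merged in pairs, one variable at a time, until no hypotheses remain. The Python sketch below is only my own schematic bookkeeping of that skeleton (it records each judgment by its set of hypothesis literals and does not build the actual formal proofs); merging a pair that differs exactly in bn corresponds to the two uses of the Deduction Theorem, the substitution of ((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B)) and the two applications of Modus Ponens described above.

from itertools import product

def eliminate(variables):
    # Start with one judgment per truth assignment, e.g. {'a', '~b'} standing for a, ~b |- A.
    judgments = {frozenset((v if val else '~' + v) for v, val in zip(variables, vals))
                 for vals in product([True, False], repeat=len(variables))}
    for v in reversed(variables):             # eliminate b_n first, then b_{n-1}, ...
        merged = set()
        for j in judgments:
            if v in j:                        # pair it with the judgment containing ~v
                partner = (j - {v}) | {'~' + v}
                assert partner in judgments   # present, since all assignments occur
                merged.add(j - {v})           # the variable v has been discharged
        judgments = merged
    return judgments                          # a single empty set of hypotheses: |- A

print(eliminate(['a', 'b']))                  # expected: {frozenset()}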
2.1 Examples
Example 10
As an example of how the Proof One of the Completeness Theorem works, we
consider the following tautology
|= (a ⇒ (¬a ⇒ b))
and construct its formal proof, i.e. we show that
⊢ (a ⇒ (¬a ⇒ b)).
The formula contains two propositional variables a and b, so the set VA consists of
four truth assignments and we have four cases to consider.
Case 1: v(a) = T, v(b) = T .
In this case B1 = a, B2 = b and, as in all cases, A′ = A, and by the Main
Lemma 4
a, b ⊢ (a ⇒ (¬a ⇒ b)).
Case 2: v(a) = T, v(b) = F . In this case B1 = a, B2 = ¬b and by the Main Lemma 4
a, ¬b ⊢ (a ⇒ (¬a ⇒ b)).
Case 3: v(a) = F, v(b) = T . In this case B1 = ¬a, B2 = b and
¬a, b ⊢ (a ⇒ (¬a ⇒ b)).
Case 4: v(a) = F, v(b) = F . In this case B1 = ¬a, B2 = ¬b and
¬a, ¬b ⊢ (a ⇒ (¬a ⇒ b)).
Applying the Deduction Theorem 3 to the above four cases we get
D1 (Cases 1 and 2)
a ⊢ (b ⇒ (a ⇒ (¬a ⇒ b))),
a ⊢ (¬b ⇒ (a ⇒ (¬a ⇒ b))),
D2 (Cases 3 and 4)
¬a ⊢ (b ⇒ (a ⇒ (¬a ⇒ b))),
¬a ⊢ (¬b ⇒ (a ⇒ (¬a ⇒ b))).
Applying Modus Ponens twice to D1 and to D2, together with the appropriate
substitutions of the provable formula ((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B)) of Lemma 3,
gives us
a ⊢ (a ⇒ (¬a ⇒ b)) and
¬a ⊢ (a ⇒ (¬a ⇒ b)).
Applying the Deduction Theorem 3 to the above we obtain
D3   ⊢ (a ⇒ (a ⇒ (¬a ⇒ b))),
D4   ⊢ (¬a ⇒ (a ⇒ (¬a ⇒ b))).
We apply Modus Ponens twice to D3, D4 and the substitution of ((A ⇒ B) ⇒
((¬A ⇒ B) ⇒ B)) for A = a, B = (a ⇒ (¬a ⇒ b)), and finally get the proof
of (a ⇒ (¬a ⇒ b)), i.e. we have proved that
⊢ (a ⇒ (¬a ⇒ b)).
Example 11
The Proof One of the Completeness Theorem defines a method of efficiently combining
the assignments v ∈ VA defined in (27) while constructing the proof of A. Let us
consider a tautology A = A(a, b, c) whose propositional variables are exactly a, b, c.
By the Main Lemma 4 and the assumption that |= A(a, b, c), any v ∈ VA defines
formulas Ba , Bb , Bc such that
Ba , Bb , Bc ⊢ A.    (37)
Step 1: elimination of Bc .
Observe that by Definition 4, Bc is c or ¬c depending on the choice of
v ∈ VA . We choose two truth assignments v1 ≠ v2 ∈ VA such that v1 and v2
agree on a, b and v1 (c) = T, v2 (c) = F .
Case 1: v1 (c) = T , hence Bc = c. By the assumption that |= A and the Main
Lemma 4 applied to v1 ,
Ba , Bb , c ⊢ A.
By the Deduction Theorem 3 we have that
Ba , Bb ⊢ (c ⇒ A).    (39)
Case 2: v2 (c) = F , hence Bc = ¬c. By the Main Lemma 4 applied to v2 ,
Ba , Bb , ¬c ⊢ A.
By the Deduction Theorem 3 we have that
Ba , Bb ⊢ (¬c ⇒ A).    (40)
The formula
((c ⇒ A) ⇒ ((¬c ⇒ A) ⇒ A))
is a substitution of the provable formula ((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B)) of
Lemma 3, hence by monotonicity
Ba , Bb ⊢ ((c ⇒ A) ⇒ ((¬c ⇒ A) ⇒ A)).    (41)
Applying Modus Ponens twice to the above property (41) and properties (39),
(40) we get that
Ba , Bb ⊢ A.    (42)
and hence we have eliminated Bc .
Step 2: elimination of Bb .
We now choose two truth assignments w1 ≠ w2 ∈ VA that agree with v1 , v2
on a and such that w1 (b) = T, w2 (b) = F .
Case 1: w1 (b) = T , hence by Definition 4, Bb = b. By the assumption that |= A
and the Main Lemma 4 applied to w1 ,
Ba , b ⊢ A.
By the Deduction Theorem 3 we have that
Ba ⊢ (b ⇒ A).    (44)
Case 2: w2 (b) = F , hence by Definition 4, Bb = ¬b. By the assumption
that |= A and the Main Lemma 4 applied to w2 ,
Ba , ¬b ⊢ A.
By the Deduction Theorem 3 we have that
Ba ⊢ (¬b ⇒ A).    (45)
The formula ((b ⇒ A) ⇒ ((¬b ⇒ A) ⇒ A)) is again a substitution of
((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B)), hence by monotonicity
Ba ⊢ ((b ⇒ A) ⇒ ((¬b ⇒ A) ⇒ A)).    (46)
Applying Modus Ponens twice to the above property (46) and properties (44),
(45) we get that
Ba ⊢ A.    (47)
and hence we have eliminated Bb .
Step 3: elimination of Ba .
Finally we choose two truth assignments u1 ≠ u2 ∈ VA with u1 (a) = T, u2 (a) = F .
Case 1: u1 (a) = T , hence Ba = a. By the Main Lemma 4 applied to u1 ,
a ⊢ A.
By the Deduction Theorem 3,
⊢ (a ⇒ A).    (49)
Case 2: u2 (a) = F , hence Ba = ¬a. By the Main Lemma 4 applied to u2 ,
¬a ⊢ A.
By the Deduction Theorem 3,
⊢ (¬a ⇒ A).    (50)
By Lemma 3, i.e. the provability of the formula ((A ⇒ B) ⇒ ((¬A ⇒ B) ⇒ B))
for A = a, B = A, we have that
⊢ ((a ⇒ A) ⇒ ((¬a ⇒ A) ⇒ A)).    (51)
Applying Modus Ponens twice to the above property (51) and properties (49),
(50) we get that
⊢ A.    (52)
and hence we have eliminated Ba , Bb and Bc and constructed the proof of A.
For any of the formulas listed below construct their formal proofs, as
described in the Proof One of the Completeness Theorem. Follow exam-
ple 10, or example 11.
5. A1 = (¬¬b ⇒ b)
6. A2 = ((a ⇒ b) ⇒ (¬b ⇒ ¬a))
7. A3 = (¬(a ⇒ b) ⇒ ¬(¬b ⇒ ¬a))
8. A4 = (¬(¬(a ⇒ ¬b) ⇒ ¬c) ⇒ ¬(b ⇒ ¬c))
9. A5 = ((a ⇒ (b ⇒ ¬a)) ⇒ (¬(b ⇒ ¬a) ⇒ ¬a)).
10. List all formulas that have to be provable in H2 , axioms included, that
are needed for the proof of the Deduction Theorem 3. Write down each
part of the proof that uses them.
11. List all formulas that have to be provable in H2 , axioms included, that
are needed for the proof of Main Lemma 4.
12. List all formulas that have to be provable in H2 , axioms included, that are
included in the Proof of Completeness Theorem part of the Proof One.
13. List all formulas that have to be provable in H2 , axioms included, that
are needed to carry all of the Proof One of Completeness Theorem ??.
14. We proved the Completeness Theorem for the proof system H2 based on
the language L{¬,⇒} . Extend the H2 proof system to a proof system S1
based on a language L{¬,⇒,∪} by adding new logical axioms, as we did in
the case of the H1 and H2 systems. The added logical axioms must be such that
they allow one to adapt Proof One to S1 , i.e. such that S1 is a complete
proof system with respect to classical semantics.
15. Repeat the same for the language L{¬,⇒,∩} . Call resulting proof system
S2 .
16. Repeat the same for the language L{¬,⇒,∩,∪} , i.e. extend the systems S1 or
S2 to a complete proof system S3 based on the language L{¬,⇒,∩,∪} .
17. Prove Completeness Theorem for the system S3 from the previous prob-
lem.
We present now the second proof, Proof Two, of the Completeness Theorem for H2 .
Its strength lies in the generality of its method, which makes it possible to adapt it
to the case of predicate logic and to some non-classical logics.
We will show how one can define a counter-model for A from the fact
that A is not provable. This means that we deduce that a formula A is not
a tautology from the fact that it does not have a proof. We hence call the method
a counter-model existence method.
The definition of the counter-model for a non-provable A is much more general
(and less constructive) than in the case of Proof One in section 2. It can
be generalized to the case of predicate logic and to many non-classical logics,
propositional and predicate. It is hence a much more general method than the
first one, and this is the reason we present it here.
Recall that ⊭ A means that there is a truth assignment v : VAR −→
{T, F } such that v∗ (A) ≠ T , i.e., in classical semantics, such that v∗ (A) =
F . Such a v is called a counter-model for A, hence the proof provides a counter-
model construction method.
Since we assume that A does not have a proof in H2 (⊬ A), the method
uses this information in order to show that A is not a tautology, i.e. to define v
such that v∗ (A) = F . We also have to prove that all steps in the method are
correct. This is done in the following steps.
Step 1: Definition of ∆∗
We use the information ⊬ A to define a special set ∆∗ ⊆ F such that
¬A ∈ ∆∗ .
Step 2: Counter-model definition
We define the truth assignment v : VAR −→ {T, F } as follows:
v(a) = T if ∆∗ ⊢ a,   and   v(a) = F if ∆∗ ⊢ ¬a.
We first prove a more general property, namely we prove that the set ∆∗
and v defined in Steps 1 and 2, respectively, are such that for every
formula B ∈ F,
v∗ (B) = T if ∆∗ ⊢ B,   and   v∗ (B) = F if ∆∗ ⊢ ¬B.
The definition and the properties of the set ∆∗ , and hence Step 1, are the
most essential for the proof. The other steps are mainly of a technical character.
The main notions involved in Step 1 (the definition of ∆∗ ) are: a consistent set,
a complete set, and a consistent complete extension of a set. We are now going to
introduce them and to prove some essential facts about them.
In our Proof Two of the Completeness Theorem we use assumption that a given
formula A does not have a proof to deduce that A is not a tautology. We hence
use the following syntactical definition of consistency.
Consistent set
We say that a set ∆ ⊆ F of formulas is consistent if and only if there
is no formula A ∈ F such that
∆ ⊢ A and ∆ ⊢ ¬A.
Inconsistent set
A set ∆ ⊆ F is inconsistent if and only if there is a formula A ∈ F such
that ∆ ⊢ A and ∆ ⊢ ¬A.
Lemma 5 (Consistency Condition)
For every set ∆ ⊆ F of formulas, the following conditions are equivalent:
(i) ∆ is consistent,
(ii) there is a formula A ∈ F such that ∆ ⊬ A.
Proof The implications: (i) implies (ii) and vice-versa are proved by showing
the corresponding opposite implications. I.e. to establish the equivalence of (i)
and (ii), we first show that not (ii) implies not (i), and then that not (i)
implies not (ii).
Case 1
Assume not (ii). It means that for all formulas A ∈ F we have ∆ ⊢ A.
In particular, for any formula B, both ∆ ⊢ B and ∆ ⊢ ¬B, which
proves that ∆ is inconsistent, i.e. not (i) holds.
Case 2
Assume not (i), i.e. that ∆ is inconsistent. Then there is a formula
A such that ∆ ⊢ A and ∆ ⊢ ¬A. Let B be any formula. Since
(¬A ⇒ (A ⇒ B)) is provable in H2 by Lemma 3, by applying
Modus Ponens twice, i.e. by detaching from it ¬A first and A next, we
obtain a formal proof of B from the set ∆, so that ∆ ⊢ B for any formula
B. Thus not (ii).
Lemma 6 (Inconsistency Condition)
For every set ∆ ⊆ F of formulas, the following conditions are equivalent:
(i) ∆ is inconsistent,
(ii) for all formulas A ∈ F, ∆ ⊢ A.
Lemma 7
For every set ∆ of formulas and for every formula A ∈ F, ∆ ⊢ A if and only
if there is a finite subset ∆0 ⊆ ∆ such that ∆0 ⊢ A.
Proof
If ∆0 ⊢ A for a certain ∆0 ⊆ ∆, then by the monotonicity of the consequence
also ∆ ⊢ A. Assume now that ∆ ⊢ A and let A1 , A2 , ..., An be a formal
proof of A from ∆. Let ∆0 = {A1 , A2 , ..., An } ∩ ∆. Obviously, ∆0 is finite and
A1 , A2 , ..., An is a formal proof of A from ∆0 .
As a consequence we obtain the following.
Theorem 7 (Finite Inconsistency)
A set ∆ ⊆ F is inconsistent if and only if it contains a finite inconsistent subset.
Proof
If ∆ is inconsistent, then for some formula A, ∆ ⊢ A and ∆ ⊢ ¬A. By the above
Lemma 7, there are finite subsets ∆1 and ∆2 of ∆ such that ∆1 ⊢ A and
∆2 ⊢ ¬A. The union ∆1 ∪ ∆2 is a finite subset of ∆ such that, by monotonicity,
∆1 ∪ ∆2 ⊢ A and ∆1 ∪ ∆2 ⊢ ¬A. Hence ∆1 ∪ ∆2 is a finite inconsistent
subset of ∆. The converse implication follows directly from the monotonicity
of the consequence.
The following lemma links the notion of non-provability and consistency. It will
be used as an important step in our proof of the Completeness Theorem.
Lemma 8
For any formula A ∈ F, if ⊬ A, then the set {¬A} is consistent.
Proof
If {¬A} is inconsistent, then by the Inconsistency Condition 6 we have {¬A} ⊢
A. This and the Deduction Theorem 3 imply ⊢ (¬A ⇒ A). Applying the
Modus Ponens rule to ⊢ (¬A ⇒ A) and the formula ((¬A ⇒ A) ⇒ A), provable by
Lemma 3, we get that ⊢ A, contrary to the assumption of the lemma.
Complete and Incomplete Sets
Another important notion, is that of a complete set of formulas. Complete sets,
as defined here are sometimes called maximal, but we use the first name for
them. They are defined as follows.
Complete set
A set ∆ of formulas is called complete if for every formula A ∈ F,
∆ ⊢ A or ∆ ⊢ ¬A.    (55)
Lemma 9 (Complete set condition)
For every set ∆ ⊆ F of formulas, the following conditions are equivalent:
(i) ∆ is complete,
(ii) for every formula A ∈ F, if ∆ ⊬ A, then the set ∆ ∪ {A} is inconsistent.
Proof
We consider two cases. We show that (i) implies (ii) and vice-versa, that (ii)
also implies (i).
Case 1
Assume that (i) and that for every formula A ∈ F, ∆ 6` A, we have to
show that in this case ∆ ∪ {A} is inconsistent. But if ∆ 6` A, then from
the definition of complete set and assumption that ∆ is complete set, we
get that ∆ ` ¬A. By the monotonicity of the consequence we have that
∆ ∪ {A} ` ¬A as well. Since, by formula ?? we have ` (A ⇒ A), by
monotonicity ∆ ` (A ⇒ A) and by Deduction Theorem ∆ ∪ {A} ` A.
This proves that ∆ ∪ {A} is inconsistent. Hence (ii) holds.
Case 2
Assume (ii). Let A be any formula. We want to show that the
condition ∆ ⊢ A or ∆ ⊢ ¬A is satisfied. If ∆ ⊢ ¬A, then the
condition is obviously satisfied.
If, on the other hand, ∆ ⊬ ¬A, then we are going to show that it must
be, under the assumption (ii), that ∆ ⊢ A, i.e. that (i) holds.
Assume that ∆ ⊬ ¬A; then by (ii), the set ∆ ∪ {¬A} is inconsistent.
It means, by the Consistency Condition 5, that ∆ ∪ {¬A} ⊢ A. By
the Deduction Theorem 3, this implies that ∆ ⊢ (¬A ⇒ A). Since
((¬A ⇒ A) ⇒ A) is provable in H2 (Lemma 3), by monotonicity ∆ ⊢
((¬A ⇒ A) ⇒ A). Detaching (¬A ⇒ A), we obtain that ∆ ⊢ A, which
ends the proof that (i) holds.
Incomplete set
A set ∆ of formulas is called incomplete if it is not complete, i.e. if there
exists a formula A ∈ F such that
∆ ⊬ A and ∆ ⊬ ¬A.    (56)
Lemma 10 (Incomplete Set Condition)
For every set ∆ ⊆ F of formulas, the following conditions are equivalent:
(i) ∆ is incomplete,
(ii) there is a formula A ∈ F such that ∆ ⊬ A and the set ∆ ∪ {A} is consistent.
Extensions
A set ∆∗ of formulas is called an extension of a set ∆ of formulas if the
following condition holds.
{A ∈ F : ∆ ⊢ A} ⊆ {A ∈ F : ∆∗ ⊢ A}.    (57)
Lemma 11
Every consistent set ∆ ⊆ F of formulas has a consistent and complete extension ∆∗ .
Proof
Assume that the lemma does not hold, i.e. that there is a consistent set ∆
such that all its consistent extensions are not complete. In particular, as ∆ is
a consistent extension of itself, we have that ∆ is not complete.
The proof consists of a construction of a particular set ∆∗ and of proving that it
forms a complete consistent extension of ∆, contrary to the assumption that all
its consistent extensions are not complete.
Construction of ∆∗ .
As we know, the set F of all formulas is enumerable. They can hence be put in
an infinite sequence (58) listing all the formulas of F.
Initial Step
In this step we define the sets ∆1 , ∆2 and the formula B1 . We prove that
∆1 and ∆2 are consistent, incomplete extensions of ∆.
We take, as the first set, the set ∆, i.e. we define
∆1 = ∆.    (59)
The set ∆1 is a consistent and, by assumption, incomplete extension of ∆. By
the Incomplete Set Condition Lemma 10, there is a formula B ∈ F such that
∆1 ⊬ B and the set ∆1 ∪ {B} is consistent. Let B1 be the first formula with this
property in the sequence (58). We define
∆2 = ∆1 ∪ {B1 }.    (60)
The set ∆2 is consistent and, as a consistent extension of ∆, it is also incomplete.
Inductive Step
Suppose that we have defined a sequence
∆1 , ∆2 , ..., ∆n
of consistent, incomplete extensions of ∆ and a sequence
B1 , B2 , ..., Bn−1
of formulas, for n ≥ 2.
Since ∆n is incomplete, it follows from the Incomplete Set Condition
Lemma 10 that there is a formula B ∈ F such that ∆n ⊬ B and
the set ∆n ∪ {B} is consistent.
Let
Bn
be the first formula with this property in the sequence (58) of all formulas.
We then define
∆n+1 = ∆n ∪ {Bn }.    (61)
By the definition, ∆ ⊆ ∆n ⊆ ∆n+1 and the set ∆n+1 is consistent. Hence ∆n+1
is an incomplete consistent extension of ∆.
By the principle of mathematical induction we have hence defined an infinite
sequence of sets
∆1 ⊆ ∆2 ⊆ ... ⊆ ∆n ⊆ ...    (62)
which are consistent, incomplete extensions of ∆, and an infinite sequence
B1 , B2 , ..., Bn , ...    (63)
of formulas.
Definition of ∆∗
Now we are ready to define ∆∗ , i.e. we define:
∆∗ = ⋃n∈N ∆n .    (64)
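The construction of ∆∗ is an enumeration-and-extension process. The Python sketch below is only a schematic, finite rendering of it (my own illustration, not part of the text): provable and consistent stand for the relations ∆ ⊢ B and "∆ is consistent", which are abstract oracles here, and the toy demonstration at the end uses plain sets of literals rather than the system H2.

def extend(delta, formulas, provable, consistent):
    # Greedy extension along the enumeration: Delta_{n+1} = Delta_n u {B_n} whenever
    # B_n is not yet provable and keeps the set consistent (a finite stand-in for
    # the infinite construction of Delta*).
    current = set(delta)                          # Delta_1 = Delta
    for b in formulas:                            # the enumeration (58), truncated
        if not provable(current, b) and consistent(current | {b}):
            current = current | {b}
    return current

# Toy demonstration: "formulas" are literals over two variables, a set is consistent
# when it contains no complementary pair, and "provable" is just membership.
literals = ['a', '~a', 'b', '~b']
consistent = lambda s: not any(('~' + x) in s for x in s if not x.startswith('~'))
provable = lambda s, f: f in s
print(extend({'~a'}, literals, provable, consistent))   # {'~a', 'b'}, a maximal consistent set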
To complete the proof of the lemma we have now to prove that ∆∗ is a complete
consistent extension of ∆. Obviously, by the definition, ∆∗ is an extension of
∆. Now we prove (by contradiction) the following.
Fact 5
The set ∆∗ is consistent.
Proof
Assume that ∆∗ is inconsistent. By the Finite Inconsistency Theorem 7 there
is a finite subset ∆0 of ∆∗ that is inconsistent. By the definition (64) we have that
∆0 = {C1 , ..., Cn } ⊆ ⋃n∈N ∆n .
By the definition, Ci ∈ ∆ki for certain ∆ki in the sequence (62) and 1 ≤ i ≤ n.
Hence ∆0 ⊆ ∆m for m = max{k1 , k2 , ..., kn }. But all sets of the sequence (62)
are consistent, while ∆m , containing the inconsistent subset ∆0 , would be
inconsistent. This contradiction shows that ∆∗ must be consistent.
Fact 6
The set ∆∗ is complete.
Proof
Assume that ∆∗ is not complete. By the Incomplete Set Condition Lemma 10,
there is a formula B ∈ F such that ∆∗ ⊬ B and the set ∆∗ ∪ {B} is consistent.
But, by the definition (64) of ∆∗ , the above condition means that for every n ∈ N ,
∆n ⊬ B holds and the set ∆n ∪ {B} is consistent.
Since the formula B is one of the formulas of the sequence (58), it would
have to be one of the formulas of the sequence (63), i.e. B = Bj for a certain
j. Since Bj ∈ ∆j+1 , it follows that B ∈ ∆∗ = ⋃n∈N ∆n . But this means that
∆∗ ⊢ B, contrary to the assumption. This proves that ∆∗ is a complete
consistent extension of ∆ and ends the proof of our lemma.
Now we are ready to prove the completeness theorem for the system H2 , i.e. to
prove that for any A ∈ F,
if |= A, then ⊢ A.
Proof Two
We prove the opposite (contrapositive) implication: if ⊬ A, then ⊭ A. So assume
that ⊬ A. By Lemma 8, the set {¬A} is consistent, and by Lemma 11 it has a
complete consistent extension ∆∗ ; in particular,
¬A ∈ ∆∗ .    (65)
Definition of v
We define the variable assignment
v : VAR −→ {T, F }    (66)
as follows:
v(a) = T if ∆∗ ⊢ a,   and   v(a) = F if ∆∗ ⊢ ¬a.
Lemma 12 (Property of v)
Let v be the variable assignment defined by (66) and v∗ its extension to the set
F of all formulas. Then for every formula B ∈ F,
v∗ (B) = T if ∆∗ ⊢ B,   and   v∗ (B) = F if ∆∗ ⊢ ¬B.    (67)
Given the above property (67) of v (still to be proven), we prove that v is,
in fact, a counter-model for any formula A such that ⊬ A, as follows. Let A be
such that ⊬ A. By (65), ¬A ∈ ∆∗ and obviously ∆∗ ⊢ ¬A. Hence, by the
property (67) of v, v∗ (A) = F , which proves that v is a counter-model for A and
hence ends the proof of the completeness theorem. In order to really complete
the proof we still have to prove Lemma 12.
Assume now that ∆∗ ⊢ ¬A. Then, from the fact that ∆∗ is consistent, it must
be that ∆∗ ⊬ A, i.e. ∆∗ ⊬ ¬C. If so, then ∆∗ ⊢ C, as the set ∆∗ is
complete. Hence, by the inductive assumption, v∗ (C) = T , and accordingly
v∗ (A) = v∗ (¬C) = F .
Case A = (C ⇒ D). As in the previous case, we assume that the lemma, i.e.
the property (67) holds for the formulas C, D and we consider two possi-
bilities: ∆∗ ` A and ∆∗ ` ¬A.
v∗ (C) = v∗ (D) = T,
and accordingly
∆∗ ⊬ (C ⇒ D).
∆∗ ⊢ ¬C.
But this is impossible, since the formula (¬C ⇒ (C ⇒ D)) is provable
(Lemma 3) and by monotonicity
∆∗ ⊢ (¬C ⇒ (C ⇒ D)).
4 Other Axiomatizations
1. Lukasiewicz (1929)
where
A1 ((¬A ⇒ A) ⇒ A),
A2 (A ⇒ (¬A ⇒ B)),
A3 ((A ⇒ B) ⇒ ((B ⇒ C) ⇒ (A ⇒ C)))
for any A, B, C ∈ F.
2. Hilbert and Ackermann (1928)
where
A1 (¬(A ∪ A) ∪ A),
A2 (¬A ∪ (A ∪ B)),
A3 (¬(A ∪ B) ∪ (B ∪ A)),
A4 (¬(¬B ∪ C) ∪ (¬(A ∪ B) ∪ (A ∪ C))),
for any A, B, C ∈ F.
Modus Ponens rule in the language L{¬,∪} has a form
A ; (¬A ∪ B)
(M P ) .
B
Theorem 8 (Deduction Theorem for HA)
For any subset Γ of the set of formulas F of HA and for any formulas A, B ∈ F,
Γ, A ⊢HA B if and only if Γ ⊢HA (¬A ∪ B).
In particular,
A ⊢HA B if and only if ⊢HA (¬A ∪ B).
3. Hilbert (1928)
where
A1 (A ⇒ A),
A2 (A ⇒ (B ⇒ A)),
A3 ((A ⇒ B) ⇒ ((B ⇒ C) ⇒ (A ⇒ C))),
A4 ((A ⇒ (A ⇒ B)) ⇒ (A ⇒ B)),
A5 ((A ⇒ (B ⇒ C)) ⇒ (B ⇒ (A ⇒ C))),
A6 ((A ⇒ B) ⇒ ((B ⇒ C) ⇒ (A ⇒ C))),
A7 ((A ∩ B) ⇒ A),
A8 ((A ∩ B) ⇒ B),
A9 ((A ⇒ B) ⇒ ((A ⇒ C) ⇒ (A ⇒ (B ∩ C)))),
A10 (A ⇒ (A ∪ B)),
A11 (B ⇒ (A ∪ B)),
A12 ((A ⇒ C) ⇒ ((B ⇒ C) ⇒ ((A ∪ B) ⇒ C))),
A13 ((A ⇒ B) ⇒ ((A ⇒ ¬B) ⇒ ¬A)),
A14 (¬A ⇒ (A ⇒ B)),
A15 (A ∪ ¬A),
for any A, B, C ∈ F.
4. Kleene (1952)
where
A1 (A ⇒ (B ⇒ A)),
A2 ((A ⇒ (B ⇒ C)) ⇒ (B ⇒ (A ⇒ C))),
A3 ((A ∩ B) ⇒ A),
A4 ((A ∩ B) ⇒ B),
A5 (A ⇒ (B ⇒ (A ∩ B))),
A6 (A ⇒ (A ∪ B)),
A7 (B ⇒ (A ∪ B)),
A8 ((A ⇒ C) ⇒ ((B ⇒ C) ⇒ ((A ∪ B) ⇒ C))),
A9 ((A ⇒ B) ⇒ ((A ⇒ ¬B) ⇒ ¬A)),
A10 (¬¬A ⇒ A)
for any A, B, C ∈ F.
5. Rasiowa-Sikorski (1950)
Here is the shortest axiomatization for the language L{¬,⇒} . It contains just
one axiom.
6. Meredith (1953)
L = ( L{¬,⇒} , F, {A1}, (MP) ),    (73)
where
A1 (((((A ⇒ B) ⇒ (¬C ⇒ ¬D)) ⇒ C) ⇒ E) ⇒ ((E ⇒ A) ⇒ (D ⇒ A))).
5 Exercises
Here are a few exercises designed to help the readers with understanding the
notions of completeness, monotonicity of the consequence operation, the role of
the deduction theorem and importance of some basic tautologies.
Let S be any Hilbert proof system
S = ( L{∩,∪,⇒,¬} , F, LA, (MP)  A , (A ⇒ B) / B )    (75)
with its set LA of logical axioms such that S is complete under classical se-
mantics.
Let X ⊆ F be any subset of the set F of formulas of the language L{∩,∪,⇒,¬}
of S. We define, as we did in chapter 4, a set Cn(X) of all consequences of
the set X as
Cn(X) = {A ∈ F : X `S A}. (76)
Plainly speaking, the set Cn(X) of all consequences of the set X is the set of
all formulas that can be proved in S from the set (LA ∪ X).
Exercise 9
1. Prove that for any subsets X, Y of the set F of formulas the following mono-
tonicity property holds.
If X ⊆ Y , then Cn(X) ⊆ Cn(Y ). (77)
2. Do we need the completeness of S to prove that the monotonicity property
holds for S?
Solution
1. Let A ∈ F be any formula such that A ∈ Cn(X). By (76), we have that
X ⊢S A. This means that A has a formal proof from the set X ∪ LA. But
X ⊆ Y , hence this proof is also a proof from Y ∪ LA, i.e. Y ⊢S A, and hence
A ∈ Cn(Y ). This proves that Cn(X) ⊆ Cn(Y ).
Exercise 10
Prove that for any set X ⊆ F, the set T ⊆ F of all propositional classical
tautologies of the language L{∩,∪,⇒,¬} of the system S is a subset of Cn(X),
i.e. prove that
T ⊆ Cn(X). (78)
2. Do we need the completeness of S to prove that the property (78) holds for
S?
Solution
1. The proof system S is complete, so by the completeness theorem we have
that
T = {A ∈ F : ⊢S A}.    (79)
By definition (76) of the consequence,
{A ∈ F : ⊢S A} = Cn(∅),
and by the monotonicity property (77), Cn(∅) ⊆ Cn(X). Hence T ⊆ Cn(X).
Exercise 11
Prove that for any formulas A, B ∈ F and for any set X ⊆ F,
(A ∩ B) ∈ Cn(X) if and only if A ∈ Cn(X) and B ∈ Cn(X).
1. Proof of the implication: if (A ∩ B) ∈ Cn(X), then A ∈ Cn(X) and B ∈ Cn(X).
Assume that (A ∩ B) ∈ Cn(X), i.e. that
X ⊢S (A ∩ B).    (81)
The formulas ((A ∩ B) ⇒ A) and ((A ∩ B) ⇒ B) are tautologies, hence, by the
completeness of S, X ⊢S ((A ∩ B) ⇒ A) and X ⊢S ((A ∩ B) ⇒ B).
By the assumption (81) we have that X ⊢S (A ∩ B), so we get X ⊢S A by Modus
Ponens, and similarly X ⊢S B by Modus Ponens. This proves that A ∈ Cn(X)
and B ∈ Cn(X) and ends the proof of the implication 1.
2. Proof of the implication: if A ∈ Cn(X) and B ∈ Cn(X), then (A ∩ B) ∈ Cn(X).
Assume that A ∈ Cn(X) and B ∈ Cn(X), i.e. that
X ⊢S A, and X ⊢S B.    (84)
The formula (A ⇒ (B ⇒ (A ∩ B))) is a tautology, hence by the completeness of S,
X ⊢S (A ⇒ (B ⇒ (A ∩ B))).    (85)
Applying Modus Ponens twice to (85) and (84) we get X ⊢S (A ∩ B), i.e.
(A ∩ B) ∈ Cn(X).
Exercise 12
Let S be the proof system (75). Prove that the Deduction Theorem holds for S,
i.e. prove the following.
For any subset Γ of the set of formulas F of S and for any formulas A, B ∈ F,
Γ, A ⊢S B if and only if Γ ⊢S (A ⇒ B).
Solution
The formulas A1 = (A ⇒ (B ⇒ A)) and A2 = ((A ⇒ (B ⇒ C)) ⇒ ((A ⇒
B) ⇒ (A ⇒ C))) are basic propositional tautologies. By the completeness of S
we have that
⊢S (A ⇒ (B ⇒ A)) and ⊢S ((A ⇒ (B ⇒ C)) ⇒ ((A ⇒ B) ⇒ (A ⇒ C))).    (87)
The formulas A1, A2 are axioms of the Hilbert system H1 defined by (1). By
(87) both axioms A1, A2 of H1 are provable in S. These axioms were sufficient
for the proof of the Deduction Theorem 3 for H1 and its proof now can be
repeated for the system S.
Exercise 13
Prove that for any A, B ∈ F,
Cn({A, B}) = Cn({(A ∩ B)})
Applying Modus Ponens to the assumption (88) and (89) we get ⊢S (A ⇒ (B ⇒
C)), which ends the proof.
6 Homework Problems
Completeness Proof Two Problems
4. Repeat the same for the language L{¬,⇒,∩,∪} , i.e. extend the systems S1 or
S2 to a complete proof system S3 based on the language L{¬,⇒,∩,∪} .
5. Conduct an appropriate version of Proof Two of the Completeness Theorem
6 for the system S3 from the previous problem.
Axiomatizations Problems
2. Let H be Hilbert proof system (??).
(i) Prove ⊢HA ((A ⇒ (B ⇒ C)) ⇒ ((A ⇒ B) ⇒ (A ⇒ C))), for any
A, B, C ∈ F.
(ii) Prove the Deduction Theorem for H.
(iii) Prove the Completeness Theorem for H.
F = ( F, ∪, ∩, ⇒, ¬ ), (92)
2. The algebra LT = ( F/ ≈, ∪, ∩, ⇒, ¬), where the operations ∪, ∩, ⇒
and ¬ are determined by the congruence relation (94) i.e.
3. Formulate and prove the Deduction Theorem for the Hilbert and Ackermann
system (69).
4. Formulate and prove the Deduction Theorem for the Lukasiewicz system (68).
5. Formulate and prove the Deduction Theorem for the Kleene system (71).