\input fremtex
\smallprint
\def\ttablerule{\ \vrule\ }
\font\twelverm=cmr12
\def\lectureend#1{\discrversionA{\hfill{\twelverm #1}}{}}
\def\omitted#1{\discrversionA{\hfill{\twelverm omitted #1}}{}}
\def\ttlbr{\char'173}
\def\ttrbr{\char'175}
\def\notP{\hbox{$\hskip 0.3em plus 0pt minus 0pt\not\hskip-0.3em plus0pt
minus 0pt P\,$}}
\def\notR{\hbox{$\hskip 0.3em plus 0pt minus 0pt\not\hskip-0.3em plus0pt
minus 0pt R\,$}}
\def\ttT{{\tt T}}
\filename{cc1121.tex}
\versiondate{26.1.01}
\oldfootnote{}{These notes are distributed to students on the understanding that they have not been properly checked for errors. If you spot a mistake, tell the author!}
\Centerline{\bf CC112 Discrete Mathematics}
\Centerline{\smc D.H.Fremlin}
\medskip
This course is an introduction to formal logic; I will try to explain how to translate verbal arguments into `argument forms', and then to test their validity. It is based on the first three chapters of
\Centerline{S.S.Epp, {\it Discrete Mathematics with Applications}, Brooks/Cole; QA 39.}
\bigskip
\noindent{\bf A case study: {\tt QuickSort}}
\medskip
In preparation for the ideas of this course I will describe an
interesting and important algorithm. {\it Sorting} a list is one of the
most fundamental operations of computation. The algorithm {\tt
quicksort} is short (at least in a language like C which allows
recursion), exceedingly economical in space requirements, and fast on
`most' lists.
Here is a version of it:
\medskip
\inset{\tt public class QuickSort
\medskip
\ttlbr static boolean goodorder(int i,int j,double num[])
\quad \ttlbr return( num[i] <= num[j]);\ttrbr
\medskip
static int partition(int left,int right,int qslist[],double num[])
\quad \ttlbr int pivot = qslist[left]; int lo = left; int hi = right;
\qquad while( lo < hi )
\qquad\quad \ttlbr if( goodorder(pivot,qslist[hi],num)) --hi;
\qquad\qquad else\ttlbr qslist[lo]=qslist[hi]; ++lo; qslist[hi]=qslist[lo];\ttrbr \ttrbr
\qquad qslist[lo] = pivot;
\qquad return(lo);\ttrbr
\medskip
static void quicksort(int left,int right,int qslist[],double num[])
\quad \ttlbr if(left < right)
\qquad \ttlbr int dp = partition(left,right,qslist,num);
\qquad quicksort(left,dp - 1,qslist,num);
\qquad quicksort(dp + 1,right,qslist,num);\ttrbr \ttrbr
\medskip
public static void main(String[] args)
\quad \ttlbr double num[] =
\ttlbr{}0.53,0.12,0.98,0.32,0.18,0.72,0.80,0.46,0.32,0.21\ttrbr ;
\quad int qslist[] = new int[num.length];
\quad for(int i = 0; i < num.length; ++i) qslist[i] = i;
\quad quicksort(0,num.length - 1,qslist,num);
\quad for(int i = 0;i < num.length;++i)
\qquad System.out.print(qslist[i]+" "); System.out.println();\ttrbr
\ttrbr
}%end of inset
\medskip
Now let's try to understand what's being done here. We are going to try to put
numbers in a `correct' order, as defined by the function {\tt
goodorder}; to make it easy without being trivial, I define an array
{\tt num[]} and say that $i$ and $j$ are to be counted as being in
correct order if {\tt num[$i$]} is less than or equal to {\tt num[$j$]}.
\inset{\tt \ttlbr static boolean goodorder(int i,int j,double num[])
\quad \ttlbr return( num[i] <= num[j]);\ttrbr
}
\noindent Since
\inset{{\tt num[1]}$=0{\cdot}12$, {\tt num[4]}$=0{\cdot}18,\ldots,$ {\tt
num[2]}$=0{\cdot}98$,}
\noindent the output list will be
\Centerline{$1\quad 4\quad 9\quad 3\quad 8\quad 7\quad 0\quad 5\quad 6\quad 2$.}
The main program is
\inset{\tt public static void main(String[] args)
\quad \ttlbr double num[] =
\ttlbr{}0.53,0.12,0.98,0.32,0.18,0.72,0.80,0.46,0.32,0.21\ttrbr ;
\quad int qslist[] = new int[num.length];
\quad for(int i = 0; i < num.length; ++i) qslist[i] = i;
\quad quicksort(0,num.length - 1,qslist,num);
\quad for(int i = 0;i < num.length;++i)
\qquad System.out.print(qslist[i]+" "); System.out.println();\ttrbr
\ttrbr}
\noindent All this does is to set up the array {\tt num[]}, initialize {\tt qslist[]} with {\tt qslist[$i$]}$=i$ for all relevant $i$, call
{\tt quicksort}, and print out the final version of {\tt qslist[]}.
Now for the function {\tt quicksort}.
\inset{\tt static void quicksort(int left,int right,int qslist[],double num[])
\quad \ttlbr if(left < right)
\qquad \ttlbr int dp = partition(left,right,qslist,num);
\qquad quicksort(left,dp - 1,qslist,num);
\qquad quicksort(dp + 1,right,qslist,num);\ttrbr \ttrbr
}
\noindent What is this doing? It is calling on the function {\tt
partition}. We shall see shortly that {\tt partition} does two things.
It {\it returns} an integer value {\tt dp}, but it also changes {\tt
qslist[]}.
This is the key to the whole program. The integer parameters {\tt left}
and {\tt right} refer to the segment of {\tt qslist[]} which is being
dealt with. (You can see from the main program above that on the first
call to {\tt partition}, we shall have {\tt left} $=0$ and {\tt
right} $=9$, the two ends of the array {\tt qslist[]}.) The idea of {\tt
partition} is that, at the end of the call, {\tt
qslist[left]}$\ldots${\tt qslist[dp$-1$]} will carry numbers which
will {\it precede} {\tt qslist[dp]} in the final list, while {\tt
qslist[dp$+1$]}$\ldots${\tt qslist[right]} will carry numbers which
will {\it follow} {\tt qslist[dp]} in the final list. If we can get
this right, then we now only have to sort the left and right sections to
get the whole list done; that's the last two lines in {\tt quicksort}.
So now look at {\tt partition}:
\inset{\tt static int partition(int left,int right,int qslist[],double num[])
\quad \ttlbr int pivot = qslist[left]; int lo = left; int hi = right;
\qquad while( lo < hi )
\qquad\quad \ttlbr if( goodorder(pivot,qslist[hi],num)) --hi;
\qquad\qquad else\ttlbr qslist[lo]=qslist[hi]; ++lo; qslist[hi]=qslist[lo];\ttrbr \ttrbr
\qquad qslist[lo] = pivot;
\qquad return(lo);\ttrbr
}%end of inset
\noindent The idea here is that we start by picking, not the returned
value {\tt dp} of {\tt partition}, but the number {\tt pivot} which will
eventually appear as {\tt qslist[dp]}. For simplicity, the program
written here takes {\tt pivot = qslist[left]}, the number at the bottom
end of the segment being sorted.
(If you ever want to write a version of {\tt quicksort} for general use, you should try
to be a little more intelligent at this point.) The algorithm now sets
out to shuffle the numbers in the segment
{\tt qslist[left]}$\ldots${\tt qslist[right]} in such a way that each
comes out on the correct side of {\tt pivot}, which we rather expect to
finish up somewhere in the middle. The procedure is to `squeeze' the
list between the limits {\tt lo} and {\tt hi}, starting with the whole
segment {\tt left}, {\tt right} under consideration. The economy of the
algorithm derives from the fact that only one number needs to be moved
at a time. At any moment, all the numbers in
{\tt qslist[left]}$\ldots${\tt qslist[lo$-1$]} will be destined to end
on the left of {\tt pivot}; the numbers in
{\tt qslist[hi$+1$]}$\ldots${\tt qslist[right]} will be destined to
end on the right of {\tt pivot}; the numbers in
{\tt qslist[lo$+1$]}$\ldots${\tt qslist[hi]} are still to be arranged;
and {\tt qslist[lo]} has some random number in it. When squeezing, we
look at one end of the section we are still working on -- it doesn't
matter much which -- this program looks at {\tt qslist[hi]}. If this is
correctly at the high end, that is, if
{\tt goodorder(pivot,qslist[hi],num)} is {\tt true}, then we just reduce {\tt hi} by
1. Otherwise, we have to put {\tt qslist[hi]} at the bottom of the
segment, in the spare location {\tt qslist[lo]}; increase {\tt lo} by
1; and shift the number in the new location {\tt qslist[lo]} to the
position {\tt hi} to be looked at next move, so that {\tt qslist[lo]} is
again spare.
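The contract of {\tt partition} described above can be exercised directly. The sketch below copies {\tt partition} and {\tt goodorder} from the listing and adds a helper, {\tt contractHolds} (a name invented here for the check), which confirms that on return everything to the left of {\tt dp} is in good order with {\tt qslist[dp]} and everything to the right follows it.

```java
// Sketch: check the contract of partition on the sample data from main.
// partition and goodorder are copied from the lecture's QuickSort class;
// contractHolds is a helper invented for this check.
public class PartitionCheck {
    static boolean goodorder(int i, int j, double num[]) {
        return num[i] <= num[j];
    }

    static int partition(int left, int right, int qslist[], double num[]) {
        int pivot = qslist[left];
        int lo = left;
        int hi = right;
        while (lo < hi) {
            if (goodorder(pivot, qslist[hi], num)) --hi;
            else { qslist[lo] = qslist[hi]; ++lo; qslist[hi] = qslist[lo]; }
        }
        qslist[lo] = pivot;
        return lo;
    }

    // true iff every entry in left..dp-1 precedes qslist[dp], and every
    // entry in dp+1..right follows it, in the sense of goodorder
    static boolean contractHolds(int left, int right, int dp,
                                 int qslist[], double num[]) {
        for (int i = left; i < dp; ++i)
            if (!goodorder(qslist[i], qslist[dp], num)) return false;
        for (int i = dp + 1; i <= right; ++i)
            if (!goodorder(qslist[dp], qslist[i], num)) return false;
        return true;
    }

    public static void main(String[] args) {
        double num[] = {0.53, 0.12, 0.98, 0.32, 0.18,
                        0.72, 0.80, 0.46, 0.32, 0.21};
        int qslist[] = new int[num.length];
        for (int i = 0; i < num.length; ++i) qslist[i] = i;
        int dp = partition(0, num.length - 1, qslist, num);
        System.out.println(contractHolds(0, num.length - 1, dp, qslist, num));
        // prints "true"
    }
}
```

On the sample data the pivot is index 0 ({\tt num[0]}$=0{\cdot}53$), which {\tt partition} leaves at position 6 with the five smaller entries below it and the three larger ones above.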
(Have you noticed that I've been saying the whole time `a' correct order
not `the' correct order? The point is that if two of the numbers in
{\tt num[]} are actually the same, {\tt goodorder} will return {\tt T} whichever order they turn up in, and figuring out
which way round they will come in the final list will require a much
more complicated analysis! If you had an application in mind in which
it mattered -- e.g., if you wanted to list students in the order of
their marks in a test, and to put students with the same marks in
alphabetical order -- you would have to look carefully at {\tt
goodorder} and make sure it did the right thing.)
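For the marks-and-names application just mentioned, a more careful {\tt goodorder} might look like the sketch below. The arrays {\tt marks[]} and {\tt names[]}, and the ascending order of marks, are invented for the illustration; the point is only that the tie-breaking rule lives entirely inside {\tt goodorder}.

```java
// Hypothetical variant of goodorder for the application sketched above:
// order by mark, breaking ties alphabetically by name. The arrays
// marks[] and names[] are invented for this illustration.
public class MarksOrder {
    static boolean goodorder(int i, int j, double marks[], String names[]) {
        if (marks[i] != marks[j]) return marks[i] <= marks[j];
        // equal marks: fall back on alphabetical order of the names
        return names[i].compareTo(names[j]) <= 0;
    }

    public static void main(String[] args) {
        double marks[] = {62.0, 62.0, 71.0};
        String names[] = {"Smith", "Jones", "Brown"};
        // Jones precedes Smith although their marks are equal
        System.out.println(goodorder(1, 0, marks, names));   // true
        System.out.println(goodorder(0, 1, marks, names));   // false
    }
}
```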
\medskip
Now what would a `proof' that this program works look like? What we
want is an argument which we can rely on {\it without running the
program}. Of course in practice all programmers rely heavily on
testing their programs, running them with a variety of inputs and
checking that the outputs are always what the customer would want. The
trouble is that for any non-trivial program there are too many possible
inputs. You could test a program like the one above indefinitely
without finding out that if (for instance) the length of the list is a
multiple of 113 the program fails $\ldots$ (Remember the Pentium chip
which gave incorrect answers to occasional double-precision divisions.)
So it would be very nice if we could develop techniques for checking the
{\it logic} of programs.
The discussion I have already written down is an attempt at such a check
for the version of {\tt quicksort} here. Like any human logic, it is
full of jumps and assumptions. One of the aims of this course is to
look at the structure of logical arguments with a view to {\it
formalizing} them; that is, putting them in a language which could, in
principle, be accessible to a computer to check that they're correct.
(Of course, you would need a good deal of faith in your checking program
$\ldots$ but that's another story.)
At the end of last term, we spent quite a lot of time on `proofs by
induction', and these are certainly one of the basic elements of any
proof that a program is correct. For instance, in the function {\tt
quicksort} above, if we believe that {\tt partition} is doing its job,
then we can argue as follows:
\inset{Let P($n$) be the statement
\inset{`if {\tt right}$-${\tt left} $=n$ then {\tt quicksort} will put
the segment {\tt qslist[left]}\discretionary{}{}{}$\ldots${\tt qslist[right]} into a correct order.'}
{\bf Base step} If $n=0$ then {\tt right} $=$ {\tt left} so
{\tt quicksort} does nothing, which is the right thing to do.
{\bf Inductive step} Assuming that P($i$) is true whenever $0\le i<n$, consider a call with {\tt right}$-${\tt left} $=n$. {\tt partition} puts every number in the segment on the correct side of {\tt qslist[dp]}, and the two recursive calls deal with segments of length less than $n$, which by the inductive hypothesis come out in a correct order; so the whole segment does, and P($n$) is true.}
\medskip
\noindent Consider the statement
\inset{`if $n$ is prime then, if $n>2$, $n$ is odd.'}
\noindent This is an `if $\ldots$ then $\ldots$' with an `if $\ldots$ then $\ldots$' nested inside. If we take
\Centerline{$p$ to represent the statement `$n$ is prime',}
\Centerline{$q$ to represent the statement `$n>2$',}
\Centerline{$r$ to represent the statement `$n$ is odd'}
\noindent then our original statement becomes
\Centerline{$p\to(q\to r)$.}
\noindent Now we can replace either implication by its contrapositive, so we get
\Centerline{$p\to((\neg r)\to(\neg q))$,
\quad$(\neg(q\to r))\to(\neg p)$,
\quad$(\neg((\neg r)\to(\neg q)))\to(\neg p)$}
\noindent as statements logically equivalent to the first. In words, the first of these becomes
\inset{`If $n$ is prime, then, if $n$ is even, $n\le 2$'}
\noindent (I am translating `$\neg$($n$ is odd)' into `$n$ is even', and `$\neg(n>2)$' into `$n\le 2$'.)
Ordinary English can't really cope with `$\neg(q\to r)$', but of course this has another form, if we remember that `$q\to r$' is an abbreviation for `$r\vee(\neg q)$', because then we get
\Centerline{$\neg(q\to r)\equiv\neg(r\vee(\neg q))
\equiv(\neg r)\wedge(\neg(\neg q))\equiv(\neg r)\wedge q$.}
\noindent So `$(\neg(q\to r))\to(\neg p)$' becomes
`$((\neg r)\wedge q)\to(\neg p)$', which translates into
\inset{`If $n$ is even and $n>2$, then $n$ is not prime'.}
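These manipulations can be checked mechanically: run through all eight truth assignments and confirm that the original form and its contrapositive rewriting never disagree. The class and method names below are mine, invented for the sketch.

```java
// Brute-force check that p -> (q -> r) is logically equivalent to
// ((not r) and q) -> (not p): try all eight truth assignments.
public class NestedConditional {
    static boolean implies(boolean a, boolean b) { return !a || b; }

    static boolean equivalentEverywhere() {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf)
                for (boolean r : tf) {
                    boolean lhs = implies(p, implies(q, r));
                    boolean rhs = implies(!r && q, !p);
                    if (lhs != rhs) return false;   // a disagreeing line
                }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(equivalentEverywhere());   // true
    }
}
```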
\medskip
\noindent{\bf Bi-implication} A further connective of great
importance is $\leftrightarrow$, {\bf bi-implication} or
{\bf biconditional}, defined by the truth table
$$\text{\it Bi-implication}\quad
\vbox{\offinterlineskip
\halign{\hfil#\hfil&\ttablerule\strut\hfil#\hfil
&\ttablerule\hfil#\hfil\cr
$p$&$q$&$p\leftrightarrow q$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}\cr
{\tt T}&{\tt F}&{\tt F}\cr
{\tt F}&{\tt T}&{\tt F}\cr
{\tt F}&{\tt F}&{\tt T}\cr}}$$
\noindent that is, $p\leftrightarrow q$ is {\tt T} if $p$ and $q$
have the {\it same} truth value, whether {\tt T} or {\tt F}, and
$p\leftrightarrow q$ is {\tt F} if $p$ and $q$ have
{\it different} truth values. It's called
`bi-implication' because it's logically equivalent to
$(p\rightarrow q)\wedge(q\rightarrow p)$:
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil\cr
$p$&$q$&$p\leftrightarrow q$&$p\rightarrow q$&$q\rightarrow
p$&$(p\rightarrow q)\wedge(q\rightarrow p)$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}\cr
{\tt T}&{\tt F}&{\tt F}&{\tt F}&{\tt T}&{\tt F}\cr
{\tt F}&{\tt T}&{\tt F}&{\tt T}&{\tt F}&{\tt F}\cr
{\tt F}&{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt T}\cr}}$$
\noindent The idea is that $\leftrightarrow$ should
correspond to the mathematicians' `if and only if'; `$2x$
and $2y$ are equal if, and only if, $x$ and $y$ are equal'
becomes `$(2x=2y)\leftrightarrow(x=y)$'.
Since
\Centerline{$p\leftrightarrow q\equiv(p\rightarrow
q)\wedge(q\rightarrow p)$,}
\noindent and since both $p\rightarrow q$ and $q\rightarrow
p$ are logically equivalent to their contrapositives,
we get
$$\eqalign{p\leftrightarrow q
&\equiv(p\rightarrow q)\wedge(q\rightarrow p)\cr
&\equiv((\neg q)\rightarrow(\neg p))\wedge((\neg
p)\rightarrow(\neg q))\cr
&\equiv(\neg p)\leftrightarrow(\neg q).\cr}$$
\noindent So, for instance, the sentence
\Centerline{a prime is odd if and only if it is greater than
or equal to 3}
\noindent is logically equivalent to the sentence
\Centerline{a prime is even if and only if it is less than
3.}
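Both facts about bi-implication used above can be verified by scanning the four lines of the truth table. Again the names in this sketch are invented; {\tt iff} simply tests whether its two arguments have the same truth value, which is exactly the table for $\leftrightarrow$.

```java
// Check the two laws above: p <-> q agrees with (p -> q) and (q -> p)
// taken together, and with (not p) <-> (not q). Names invented here.
public class Biconditional {
    static boolean implies(boolean a, boolean b) { return !a || b; }
    static boolean iff(boolean a, boolean b) { return a == b; }

    static boolean bothLawsHold() {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf) {
                if (iff(p, q) != (implies(p, q) && implies(q, p)))
                    return false;
                if (iff(p, q) != iff(!p, !q))
                    return false;
            }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(bothLawsHold());   // true
    }
}
```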
\medskip
\noindent{\bf Tautologies}
Consider the expression
\Centerline{$p\vee(p\rightarrow q)$.}
\noindent If we draw up a truth table for this, we get
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil\cr
$p$&$q$&$p\rightarrow q$&$p\vee(p\rightarrow q)$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&{\tt T}\cr
{\tt T}&{\tt F}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt T}&{\tt T}&{\tt T}\cr
{\tt F}&{\tt F}&{\tt T}&{\tt T}\cr}}$$
\noindent with a column of T's. So $p\vee(p\rightarrow q)$
is {\it always} true, regardless of what $p$ and $q$ are.
(This is peculiar. Try an example: take $p$ to be `it's
Tuesday' and $q$ to be `we're in Belgium'. Then
$p\vee(p\rightarrow q)$ becomes `{\it either} it's Tuesday
{\it or} if it's Tuesday we're in Belgium'. Most of us
avoid saying things like this. But remember that our
ultimate aim is to teach a computer to sort out our
mistakes, and it's got to be able to cope with silly
sentences as well as sensible ones.) Anyway, sentences
(sensible or not) which are always true, in this sense, are
called {\bf tautologies}. We've already had a couple.
The simplest, in fact, is $\top$, the `empty tautology',
which is always {\tt T} because we say it is. Another is
$p\vee(\neg p)$ (or $q\vee(\neg q)$, of course).
Now suppose we take a logically equivalent pair of
expressions -- e.g., $p\vee(q\wedge r)$ and
$(p\vee q)\wedge(p\vee r)$ -- and consider the compound
$(p\vee(q\wedge r))\leftrightarrow((p\vee q)\wedge(p\vee r))$.
Then we get a table
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil\cr
$p$&$q$&$r$&$p\vee(q\wedge r)$&$(p\vee q)\wedge(p\vee
r)$&$(p\vee(q\wedge r))\leftrightarrow((p\vee q)\wedge(p\vee r))$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}\cr
{\tt T}&{\tt T}&{\tt F}&{\tt T}&{\tt T}&{\tt T}\cr
{\tt T}&{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt T}\cr
{\tt T}&{\tt F}&{\tt F}&{\tt T}&{\tt T}&{\tt T}\cr
{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}\cr
{\tt F}&{\tt T}&{\tt F}&{\tt F}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt F}&{\tt T}&{\tt F}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt F}&{\tt F}&{\tt F}&{\tt F}&{\tt T}\cr
}}$$
\noindent Since the columns for $p\vee(q\wedge r)$ and
$(p\vee q)\wedge(p\vee r)$ match exactly, and since the
bi-implication
$(p\vee(q\wedge r))\leftrightarrow((p\vee q)\wedge(p\vee r))$
is {\tt T} exactly when the two halves match, it's always {\tt T}.
This is a general rule: two expressions $\phi$ and $\psi$
are logically equivalent exactly when
$\phi\leftrightarrow\psi$ is a tautology.
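This rule lends itself to a brute-force check. In the sketch below (class, interface and method names are all invented here), an expression in $p$, $q$, $r$ is represented as a boolean-valued function, and {\tt isTautology} scans every line of its truth table.

```java
// A tautology is an expression that is T on every line of its truth
// table. Represent an expression in p, q, r as a function of booleans
// and scan all eight assignments. Names invented for this sketch.
public class Tautology {
    interface Prop { boolean at(boolean p, boolean q, boolean r); }

    static boolean implies(boolean a, boolean b) { return !a || b; }

    static boolean isTautology(Prop phi) {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf)
                for (boolean r : tf)
                    if (!phi.at(p, q, r)) return false;
        return true;
    }

    public static void main(String[] args) {
        // p v (p -> q), the example above (r is ignored)
        System.out.println(isTautology((p, q, r) -> p || implies(p, q)));
        // the biconditional form of the distributive law
        System.out.println(isTautology(
            (p, q, r) -> (p || (q && r)) == ((p || q) && (p || r))));
        // p -> q on its own is not a tautology
        System.out.println(isTautology((p, q, r) -> implies(p, q)));
    }
}
```

The second call is exactly the rule in the text: two expressions are logically equivalent when the biconditional joining them is a tautology.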
\lectureend{99/5, 00/5}
\bigskip
\filename{cc1124.tex}
\versiondate{26.1.01}
\medskip
\noindent{\bf Natural Deduction}
\medskip
Let me return for a moment to the question: what is this course for?
We are trying to develop methods for proving that programs work.
(Actually you need
exactly the same ideas to prove that chips are correctly designed. The
only difference is that it's harder to debug a chip than a program, so
it's even more
important to get it right first time.) The statement that `a program
works' can always be expressed as a compound expression with terms like
`if, at the
$n$th step, {\tt lo} $<$ {\tt hi} then, at the $(n+1)$st step, {\tt
lo} $\le$
{\tt hi}'. Each of these we can hope to check, one by one, and then,
with a mixture of the manipulations treated so far, some more to come
later this term,
and a good few proofs by induction, assemble them into a truth table
with a nice
big {\tt T} at the appropriate place in the column for `the program works'.
The problem is that it's going to be rather a big truth table.
Remember that if we
have $k$ `atomic' propositions $p_1,\ldots,p_k$ to work with, then we
shall have
$2^k$ lines in the table. Since any ordinary C program is going to
have at
least one proposition corresponding to each statement in the program,
any program with enough lines in it to be a problem is going to lead to
truth tables
which are completely unmanageable, even by machine. I haven't given
you any truth table with more than 16 lines in it to do by hand. With
a fast PC you
could imagine going up to $2^{20}$ lines, corresponding to 20 atomic
propositions; but $2^{100}$ lines isn't on, and never will be; and we
are already using programs with a million statements in them. So we
have to find
methods other than brute force. And this is where logical argument
comes in.
I suppose people have been arguing logically ever since there have been
people. But the systematic arrangement of logical arguments in such a
way that they can
be properly checked was one of the inventions of the ancient Greeks
sometime between 500 and 300 {\smc b.c.} They didn't use the formal
language which we
have now (that's mostly a twentieth-century development) but all the
essential ideas are there. You divide the problem into little bits,
take them one at a
time and make sure they don't get up afterwards. Most of the little
bits will be of the form
\Centerline{if $p$ then $q$}
\noindent where $p$ and $q$ are compound propositions of some kind.
Typically $p$ is of the form $p_1\wedge p_2\wedge\ldots\wedge p_k$, so
that we are looking at expressions of the form
\Centerline{$(p_1\wedge\ldots\wedge p_k)\rightarrow q$.}
\noindent We slowly build up a list of these and then bring them
together to form the theorems we are trying to prove.
A logical argument, therefore, will be a vertical list
\smallskip
$$\hskip100pt\vbox{$p_1$ (premise)
$p_2$ (premise)
$\ldots$
therefore $\phi_1$ (intermediate assertion)
therefore $\phi_2$ (intermediate assertion)
$\ldots$
therefore $\psi$ (conclusion)}$$
\smallskip
\noindent in which the first few sentences $p_1,\ldots$ will be {\bf
premises}, things we believe we can check directly from our knowledge of
the world (`the sun rises in the east', `this computer has $2^{20}$
bytes of RAM'), and $\phi_1,\ldots$ will be things
we can prove from these, culminating in the final assertion $\psi$ we
were trying to prove. The fundamental rule will be that every line,
after the
initial list of premises, will be deducible from the lines {\it above}
it, without reference to the lines {\it below}.
Now what does it mean to say that a proposition is {\it deducible} from
some string of propositions? The idea is that $\phi$ should be
deducible from
$\phi_1,\ldots,\phi_n$ if $\phi$ is true whenever $\phi_1,\ldots,\phi_n$
are. We'd better have some examples.
\medskip
\noindent {\bf example 1} Consider the argument
\smallskip
\hskip100pt\vbox{If Socrates is a man, he is mortal.
Socrates is a man.
Therefore Socrates is mortal.}
\smallskip
\noindent The form of this is
\smallskip
\hskip100pt\vbox{$p\rightarrow q$
$p$
therefore $q$,}
\smallskip
\noindent where $p$ stands for `Socrates is a man' and $q$ stands for `Socrates is mortal'. Now look at the truth table for $\rightarrow$:
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$q$&$p\rightarrow q$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt T}&{\tt F}&{\tt F}\cr
{\tt F}&{\tt T}&{\tt T}\cr
{\tt F}&{\tt F}&{\tt T}\cr}}$$
\noindent We are supposing that both $p$ and $p\rightarrow q$ are true.
If we look at the table, we see that only in the top row do $p$ and
$p\rightarrow q$ both have {\tt T}'s. If we delete any row in which either of the premises has an {\tt F}, we are left with only the top row, marked $<$ in the table. So this is the only row which can
happen in the real world. Since, in this row, $q$ also has a {\tt T}, $q$
must be true in the real world, and the argument is valid.
\medskip
\noindent {\bf example 2} Consider the argument
\smallskip
\hskip100pt\vbox{$2<3$.
$3<4$.
If $2<3$ and $3<4$ then $2<4$.
Therefore $2<4$.}
\smallskip
\noindent What is the structure of this argument? The first two lines
are simple assertions $p$ (`$2<3$') and $q$ (`$3<4$'). The third line is a compound proposition in the
form `$(p\wedge q)\rightarrow r$', where $r$ stands for `$2<4$', and the final line is the assertion
$r$. If we
dress this up as a truth table, we get
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$q$&$r$&$p\wedge q$&$(p\wedge q)\rightarrow r$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt T}&{\tt T}&{\tt F}&{\tt T}&{\tt F}\cr
{\tt T}&{\tt F}&{\tt T}&{\tt F}&{\tt T}\cr
{\tt T}&{\tt F}&{\tt F}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt T}&{\tt T}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt T}&{\tt F}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt F}&{\tt T}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt F}&{\tt F}&{\tt F}&{\tt T}\cr
}}$$
\noindent The claim is that the final line is a consequence of the first
three, which we are supposed to believe for other reasons. Now the
first three lines of the argument correspond to the first, second and
fifth columns of the truth table. So we believe that the only relevant
lines of the truth table are those in which $p$, $q$ and $(p\wedge
q)\rightarrow r$ all have {\tt T}'s; that is, the top line. None of the
other lines can be relevant to the real world. And if we look at the top line,
then we see
a {\tt T} in the column for $r$; so, in the real world, $r$ must be true.
\medskip
\noindent{\bf example 3} Consider the argument
\inset{Either John is older than Peter, or they are the same age, or
Peter is older than John.
John and Peter are not the same age.
Therefore either John is older than Peter or Peter is older than John.}
\noindent Here the form of the argument is
\smallskip
\hskip100pt\vbox{$p\vee q\vee r$
$\neg q$
therefore $p\vee r$,}
\smallskip
\noindent where $p$ is `John is older than Peter', $q$ is `John is the same age as Peter' and $r$ is `Peter is older than John'.
(Remember that I allow myself to write $p\vee q\vee r$ to mean
either of $p\vee(q\vee r)$ and $(p\vee q)\vee r$; they are logically equivalent.) Now the
corresponding truth table is
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$q$&$r$&$p\vee q\vee r$&$\neg q$&$p\vee r$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt F}&{\tt T}\cr
{\tt T}&{\tt T}&{\tt F}&{\tt T}&{\tt F}&{\tt T}\cr
{\tt T}&{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt T}&{\tt F}&{\tt F}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt T}&{\tt F}&{\tt T}&{\tt F}&{\tt F}\cr
{\tt F}&{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt F}&{\tt F}&{\tt F}&{\tt F}&{\tt T}&{\tt F}\cr}}$$
\noindent Since we are assuming that $\neg q$ and $p\vee q\vee r$ are
both {\tt T}, we need look only at the third, fourth and seventh lines; in
all of these, $p\vee r$ has a {\tt T}, so the argument is valid.
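The whole check can be mechanized: scan the truth assignments, discard the lines on which some premise is {\tt F}, and confirm the conclusion is {\tt T} on every line that survives. The sketch below does this for the argument form of example 3 (names invented here).

```java
// Validity check for example 3: p v q v r and (not q) as premises,
// p v r as conclusion. Valid means no assignment makes every premise
// T while making the conclusion F. Names invented for this sketch.
public class AgeArgument {
    static boolean valid() {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf)
                for (boolean r : tf) {
                    boolean premise1 = p || q || r;
                    boolean premise2 = !q;
                    if (premise1 && premise2 && !(p || r))
                        return false;   // a line refuting the argument
                }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(valid());   // true
    }
}
```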
\lectureend{00/6}
\medskip
{\bf An invalid argument} We had better have an example of an argument which {\it isn't} valid. By far the most dangerous is the following type. Consider the argument
\inset{If he were clever, he'd be rich,
he isn't clever,
therefore he isn't rich.}
\noindent Writing $p$ for `he's clever' and $q$ for `he's rich', this becomes
\inset{$p\rightarrow q$
$\neg p$
therefore $\neg q$.}
\noindent In the truth table we have
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$q$&$p\rightarrow q$&$\neg p$&$\neg q$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&{\tt F}&{\tt F}\cr
{\tt T}&{\tt F}&{\tt F}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt F}$^*$&$<$\cr
{\tt F}&{\tt F}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
}}$$
\noindent Looking at the premise `$p\rightarrow q$', we delete the second line; looking at the premise `$\neg p$', we delete the top two lines; so the bottom two lines are both allowed by the premises. But we see that in the third line the conclusion $\neg q$ is {\tt F}. So the argument is {\bf not} valid. If $p$ is {\tt F} and $q$ is {\tt T}, both the premises are {\tt T} but the conclusion is {\tt F}. In words: when we say `if he were clever, he'd be rich', we are saying nothing at all about what would happen if he isn't clever, and it is entirely possible that he is rich and stupid.
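Invalidity, too, is mechanical: an argument form is invalid exactly when some line of the truth table makes every premise {\tt T} and the conclusion {\tt F}. The sketch below (names invented here) hunts for such a line in the form above, and finds one.

```java
// The invalid form above: look for a line of the truth table on which
// both premises (p -> q and not p) are T but the conclusion (not q)
// is F, i.e. q is T. Names invented for this sketch.
public class InverseError {
    static boolean implies(boolean a, boolean b) { return !a || b; }

    static boolean hasCounterexample() {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf)
                // premises T, conclusion F means q is T
                if (implies(p, q) && !p && q) return true;
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasCounterexample());   // true: p = F, q = T
    }
}
```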
\bigskip
{\bf Valid arguments and true conclusions}
For a lecture or two now I've been going on about `validity' of arguments, or rather of argument forms. It's very important to remember that the validity of an argument is not the same thing as the truth of a conclusion. We can be lucky by accident in lots of ways. For example:
\smallskip
\hskip100pt\vbox{Roses are red,
violets are blue,
therefore cabbages are green.}
\smallskip
\noindent I hope this seems absurd. But the absurdity is not in my botany or my colour vision; it's in the shape of the argument, which is
\smallskip
\hskip100pt\vbox{$p$
$q$
therefore $r$.}
\smallskip
\noindent The conclusion simply isn't related to the premises, and as far as the logic goes I might just as well have said
\smallskip
\hskip100pt\vbox{There are seven days in the week,
there are twelve months in the year,
therefore the moon is made of green cheese.}
\smallskip
\noindent A {\it valid} argument from {\it true} premises must lead to a {\it true} conclusion; but {\it invalid} arguments can also blunder their way to true conclusions. Again, we can perfectly well have a valid argument leading to a false conclusion, if the premises are shaky:
\smallskip
\hskip100pt\vbox{if it's Friday today, it's Sunday tomorrow,
it's Friday today,
therefore it's Sunday tomorrow;}
\smallskip
\noindent the argument form is Modus Ponens and is absolutely sound, but the major premise is false, so the conclusion is unreliable.
\bigskip
{\bf Standard forms of argument} The truth-table method can cope, in principle, with arbitrarily complicated
arguments. We need to be able to formalize the sentences we're dealing
with in the language of the propositional calculus. Having done so,
all we have to do is write out the truth table, look at the lines in
which all the premises have {\tt T}'s, and check that the conclusion has a {\tt T} in each of these lines. In practice, this is tiresome to do by hand if
there are more than four or five atomic propositions or more than half a
dozen premises involved, and impractical by machine on an argument of
any real complexity. We have to learn to break arguments down into
standard forms which we can use more or less without thinking. So for
the next few lectures I will try to discuss some of these forms.
\medskip
\noindent{\bf Modus Ponens} This is the classic argument form
\smallskip
\hskip100pt\vbox{$p$
$p\rightarrow q$
therefore $q$}
\smallskip
\noindent used in example 1 above.
\medskip
\noindent{\bf Modus Tollens} A very similar one is
\smallskip
\hskip100pt\vbox{$p\rightarrow q$
$\neg q$
therefore $\neg p$.}
\smallskip
\noindent (For instance: `if he were clever, he'd be rich; he isn't rich; therefore he isn't clever'.) We can easily do this directly, from the table
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$q$&$p\rightarrow q$&$\neg q$&$\neg p$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&{\tt F}&{\tt F}\cr
{\tt T}&{\tt F}&{\tt F}&{\tt T}&{\tt F}\cr
{\tt F}&{\tt T}&{\tt T}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt F}&{\tt T}&{\tt T}&{\tt T}&$<$\cr}}$$
\noindent observing that the only row in which both $p\rightarrow q$
and $\neg q$ have {\tt T}'s is the bottom one, and in that row $\neg p$ has a
{\tt T}. But we can also do it by putting in an extra stage:
\smallskip
\hskip100pt\vbox{$p\rightarrow q$
$\neg q$
therefore $(\neg q)\rightarrow(\neg p)$
therefore $\neg p$.}
\smallskip
\noindent In this form, we say first that, because $(\neg
q)\rightarrow(\neg p)$ is logically equivalent to $p\rightarrow q$, we
can deduce the third line from the first (since of course any line of
the truth table with a {\tt T} for $p\rightarrow q$ will also have a {\tt T} for
$(\neg q)\rightarrow(\neg p)$) and then deduce the bottom line from the
second and third, using Modus Ponens.
For this kind of compound argument, we {\it must} put in comments,
preferably in colour. Remember that the idea here is that people
should be able to check our arguments, and they haven't a hope if we
don't say what we're doing. So we might have something like
\halign{\qquad\hfil#&\quad\hfil#\hfil&\quad#\hfil\cr
1.&$p\rightarrow q$&(premise)\cr
2.&$\neg q$&(premise)\cr
3.&$(\neg q)\rightarrow(\neg p)$&logically equivalent to (1)\cr
4.&$\neg p$&Modus Ponens from (3) and (2).\cr}
\medskip
\noindent{\bf Disjunctive Addition}
These are the forms
\smallskip
\hskip100pt\vbox{$p$
therefore $p\vee q$}
\smallskip
\noindent and
\smallskip
\hskip100pt\vbox{$q$
therefore $p\vee q$.}
\smallskip
\noindent(`It's Friday, therefore it's either Friday or Saturday'.)
\lectureend{99/7}
\medskip
\noindent{\bf Conjunctive Simplification}
Again, this comes in two forms of the same idea:
\smallskip
\hskip100pt\vbox{$p\wedge q$
therefore $p$}
\smallskip
\noindent and
\smallskip
\hskip100pt\vbox{$p\wedge q$
therefore $q$.}
\smallskip
\noindent(`It's 10 o'clock on Friday, therefore it's 10 o'clock'). Note that in both of these we are in some sense discarding information.
If we replace `$p$' by `$p\vee q$', then we are forgetting that we know
that $p$ is true by itself. But quite often this is the way to apply
some fact that we've learnt without bothering to go into details.
Suppose we want to prove that
\Centerline{1999 isn't a leap year.}
\noindent Well, we have to ask someone to tell us about leap years, and
they might say `it's a bit complicated, but if the year ends in 1,
3, 5, 7 or 9 it isn't a leap year'. So then we can argue
\inset{1. 1999 ends in a 9 (premise)
2. (1999 ends in a 9)$\vee$(1999 ends in a 7)$\vee$(1999 ends in a
5)$\vee$(1999 ends in a 3)$\vee$(1999 ends in a 1) (disjunctive addition
from (1))
3. ((1999 ends in a 9)$\vee$(1999 ends in a 7)$\vee$(1999 ends in a
5)$\vee$(1999 ends in a 3)$\vee$(1999 ends in a 1))$\rightarrow$1999 is
not a leap year (premise)
4. 1999 isn't a leap year (Modus Ponens from (3) and (2)).}
\noindent Similarly, in the argument
\Centerline{It's Monday 15 February, so there's a CC112 lecture}
\noindent we might use the form
\inset{1. (It's Monday)$\wedge$(it's February 15) (premise),
2. (it's Monday)$\rightarrow$(there's a CC112 lecture) (premise),
3. therefore it's Monday (conjunctive simplification from (1)),
4. therefore there's a CC112 lecture (Modus Ponens from (2) and (3)).}
\noindent Human beings don't waste much time on this sort of thing
unless someone is ordering them to. But if you want a machine to
follow you, you have to explain points like this.
\medskip
\noindent{\bf Disjunctive Syllogism}
Yet another idea which takes two forms:
\smallskip
\hskip100pt\vbox{$p\vee q$
$\neg q$
therefore $p$}
\smallskip
\noindent and
\smallskip
\hskip100pt\vbox{$p\vee q$
$\neg p$
therefore $q$}
\smallskip
\noindent If you are told that there are two possibilities, and you know
that one of them can't happen, then you are left with the other.
\lectureend{00/7}
\medskip
\noindent{\bf Hypothetical Syllogism} Here we are coming to something
rather more substantial. The formal expression is
\smallskip
\hskip100pt\vbox{$p\rightarrow q$
$q\rightarrow r$
therefore $p\rightarrow r$.}
\smallskip
\noindent Let's check this from the truth table:
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$q$&$r$&$p\rightarrow q$&$q\rightarrow r$&$p\rightarrow r$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt T}&{\tt T}&{\tt F}&{\tt T}&{\tt F}&{\tt F}\cr
{\tt T}&{\tt F}&{\tt T}&{\tt F}&{\tt T}&{\tt T}\cr
{\tt T}&{\tt F}&{\tt F}&{\tt F}&{\tt T}&{\tt F}\cr
{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt F}&{\tt T}&{\tt F}&{\tt T}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt F}&{\tt F}&{\tt F}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
}}$$
\noindent so that the rows in which both $p\rightarrow q$ and
$q\rightarrow r$ have {\tt T}'s are the first, fifth, seventh and eighth, in all of which $p\rightarrow r$ also has a {\tt T}.
This is one of the commonest forms of argument. It enables us to
assemble chains, as in
\inset{if {\tt goodorder} runs correctly, then {\tt partition} will
run correctly
if {\tt partition} runs correctly, then {\tt quicksort} will run
correctly
therefore, if {\tt goodorder} runs correctly, {\tt quicksort} will run
correctly.}
\noindent What is more, the chains can go on indefinitely; we can argue
in the form
\halign{\qquad\hfil#&\quad\hfil#\hfil&\quad#\hfil\cr
1.&$p\rightarrow q$&(premise)\cr
2.&$q\rightarrow r$&(premise)\cr
3.&$r\rightarrow s$&(premise)\cr
4.&$q\rightarrow s$&hypothetical syllogism from (2) and (3)\cr
5.&$p\rightarrow s$&hypothetical syllogism from (1) and (4).\cr}
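The eight-line truth table above is small enough to scan by eye, but it is also the kind of scan a machine does well. Here is a Java sketch (names of my own invention) which enumerates all eight rows and reports whether any of them makes both premises {\tt T} while $p\rightarrow r$ is {\tt F}:

```java
// Exhaustive check that hypothetical syllogism is a valid argument form.
public class HypotheticalSyllogismCheck {
    static boolean implies(boolean a, boolean b) { return !a || b; }

    // Valid iff no row of the 8-row truth table makes p -> q and q -> r
    // both T while the conclusion p -> r is F.
    static boolean valid() {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf)
                for (boolean r : tf)
                    if (implies(p, q) && implies(q, r) && !implies(p, r))
                        return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(valid()); // prints "true"
    }
}
```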
\medskip
\noindent{\bf Conjunctive Addition} We can always argue as follows:
\smallskip
\hskip100pt\vbox{$p$
$q$
therefore $p\wedge q$.}
\smallskip
\noindent What this means (together with Conjunctive Simplification) is
that a list $p_1,\ldots,p_n$ of premises can always be interchanged with
a single premise $p_1\wedge p_2\wedge\ldots\wedge p_n$ on one line.
But the whole point of what we are doing at the moment is to break
arguments into manageable fragments, so we assemble propositions only
when we have some good reason to. For instance
\quad 1. It's February 1999 (premise).
\quad 2. If it's February 1999 and the date is divisible by 7, then it's a
Sunday (premise).
\quad 3. It's the 14th of the month (premise).
\quad 4. If it's the 14th of the month, the date is divisible by
7 (premise).
\quad 5. Therefore the date is divisible by 7 (Modus Ponens from (4) and (3)).
\quad 6. Therefore (it's February 1999)$\wedge$(the date is divisible by 7) (conjunctive addition of (1) and (5)).
\quad 7. Therefore it's a Sunday (Modus Ponens from (2) and (6)).
\noindent (Note how ignorant logicians pretend to be. The statement
`if it's the 14th of the month, the date is divisible by 7' has to be
brought in explicitly as a premise. Once again: we are devising
arguments which a machine can be taught to check. If we want to use
arithmetic facts, then we shall have to make sure the machine knows them
too.)
\medskip
\noindent{\bf Division into Cases}
Consider the form
\smallskip
\hskip100pt\vbox{$p\vee q$
$p\rightarrow r$
$q\rightarrow r$
therefore $r$.}
\smallskip
\noindent The truth table for this is
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$q$&$r$&$p\vee q$&$p\rightarrow r$&$q\rightarrow r$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt T}&{\tt T}&{\tt F}&{\tt T}&{\tt F}&{\tt F}\cr
{\tt T}&{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt T}&{\tt F}&{\tt F}&{\tt T}&{\tt F}&{\tt T}\cr
{\tt F}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt F}&{\tt T}&{\tt F}&{\tt T}&{\tt T}&{\tt F}\cr
{\tt F}&{\tt F}&{\tt T}&{\tt F}&{\tt T}&{\tt T}\cr
{\tt F}&{\tt F}&{\tt F}&{\tt F}&{\tt T}&{\tt T}\cr
}}$$
\noindent Only the first, third and fifth lines are relevant, and on all
of these $r$ is {\tt T}.
We use this form of argument when we have two different lines we can
follow. In a chess game we might say
\inset{I have to move either my king or my pawn
if I move my king I shall lose
if I move my pawn I shall lose
therefore I am going to lose.}
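The truth-table verification of division into cases can be mechanised in the same way as before. This Java sketch (illustrative names of my own) scans all eight rows for a counterexample, i.e. a row with $p\vee q$, $p\rightarrow r$ and $q\rightarrow r$ all {\tt T} but $r$ {\tt F}:

```java
// Exhaustive check that division into cases is a valid argument form.
public class CasesCheck {
    static boolean implies(boolean a, boolean b) { return !a || b; }

    // Valid iff no row makes p-or-q, p -> r and q -> r all T while r is F.
    static boolean valid() {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf)
                for (boolean r : tf)
                    if ((p || q) && implies(p, r) && implies(q, r) && !r)
                        return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(valid()); // prints "true"
    }
}
```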
\medskip
\noindent{\bf Contradiction} or {\bf Reductio ad Absurdum} We shall have to return to this later, but in its simplest form it looks like this:
\smallskip
\hskip100pt\vbox{$(\neg p)\rightarrow\bot$
therefore $p$.}
\smallskip
\noindent (Remember that $\bot$ is the `false' proposition whose only truth value is {\tt F}.) The truth table looks like
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$\bot$&$\neg p$&$(\neg p)\rightarrow\bot$\cr
\noalign{\hrule}
{\tt T}&{\tt F}&{\tt F}&{\tt T}&$<$\cr
{\tt F}&{\tt F}&{\tt T}&{\tt F}\cr}}$$
\bigskip
The nine argument forms above (Modus Ponens, Modus Tollens, disjunctive addition, conjunctive simplification, conjunctive addition, disjunctive syllogism, hypothetical syllogism, division into cases, contradiction) can be thought of as `elementary', not in the sense that they are `easy' but in the sense that, like the chemical elements, they are components from which we can build more elaborate structures. There are plenty of other forms which are equally useful and have an equal right to be listed here.
The next thing worth doing is to list some further techniques which often come in useful.
\medskip
\noindent{\bf Tautology} We may introduce any tautology into an argument without regard to what has gone before. (Because a tautology is an expression which has truth value {\tt T} on {\it every} line; in particular, it has truth value {\tt T} on every line allowed by the premises; so it can be regarded as a consequence of {\it any} set of premises!) The most useful tautologies are actually the simplest: expressions of the form $\phi\vee(\neg\phi)$. This has to be {\tt T}, because if $\phi$ is {\tt T} then of course $\phi\vee(\neg\phi)$ is {\tt T}, while if $\phi$ is {\tt F} then $\neg\phi$ is {\tt T} so again $\phi\vee(\neg\phi)$ is {\tt T}. This is the principle of the {\bf excluded middle}.
A particularly important type of argument using this principle is the following:
\halign{\qquad\hfil#&\quad\hfil#&\quad#\hfil\cr
1.&$p\rightarrow r$&premise\cr
2.&$(\neg p)\rightarrow r$&premise\cr
3.&$p\vee(\neg p)$&excluded middle\cr
4.&$r$&division into cases, from (3), (1) and (2).\cr}
\noindent In the
chess-playing example above, we might say
\inset{If I move my king I shall lose
if I don't move my king I shall lose
therefore I am going to lose.}
\lectureend{00/8}
\medskip
\noindent{\bf Logical equivalences} If $\phi$ appears in an argument, and $\psi\equiv\phi$, then we may introduce $\psi$ at any later stage in the argument. (For $\psi$ and $\phi$ have exactly the same truth tables, so if $\phi$ has a {\tt T} in any line allowed by the premises, then $\psi$ also will.)
\lectureend{99/8}
\medskip
\noindent{\bf Substitution} If we have a valid argument with atomic propositions $p_1,\ldots,p_n$, and $\phi_1,\ldots,\phi_n$ are any expressions of the propositional calculus, then we can obtain another valid argument by replacing each $p_i$ by $\phi_i$ wherever it appears in the original argument. Note that we have to change the {\it premises} as well as the {\it conclusions} here, so that in order to use this method, we shall have to check that the new premises are still valid. But, for instance, we can be sure that
\smallskip
\hskip100pt\vbox{$(p\rightarrow q)\rightarrow(r\rightarrow s)$
$p\rightarrow q$
therefore $r\rightarrow s$}
\smallskip
\noindent is a valid argument form, because it's got by substituting the expressions $p\rightarrow q$ and $r\rightarrow s$ for the variables in Modus Ponens.
\medskip
\noindent{\bf Conditional arguments} If we have a valid argument of the form
\smallskip
\hskip100pt\vbox{$\phi_1$
$\phi_2$
$\ldots$
$\phi_n$
$\therefore\,\phi$,}
\smallskip
\noindent then for any expression $\psi$ the argument
\smallskip
\hskip100pt\vbox{$\psi\rightarrow\phi_1$
$\psi\rightarrow\phi_2$
$\ldots$
$\psi\rightarrow\phi_n$
$\therefore\,\psi\rightarrow\phi$,}
\smallskip
\noindent will also be valid. (Once again, we shall be changing the premises as well as the conclusions when we make this transformation.)
\medskip
{\bf Validation by extended argument} I give a couple of examples of arguments which can be validated by combining the methods described above.
\medskip
\noindent {\bf example} Consider the argument form
\smallskip
\hskip100pt\vbox{$p\rightarrow(\neg p)$
therefore $\neg p$.}
\smallskip
\noindent Of course this is very easy to do directly from a truth table:
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$\neg p$&$p\rightarrow(\neg p)$ \cr
\noalign{\hrule}
{\tt T}&{\tt F}&{\tt F}\cr
{\tt F}&{\tt T}&{\tt T}&$<$\cr}}$$
\noindent But it can also be done indirectly, as follows:
\halign{\qquad\hfil#.&\quad\hfil#\hfil&\quad#\hfil\cr
1&$p\rightarrow(\neg p)$&premise\cr
2&$(\neg p)\rightarrow(\neg p)$&tautology\cr
3&$p\vee(\neg p)$&tautology\cr
4&$\neg p$&division into cases, from (3), (1) and (2).\cr}
\medskip
\noindent{\bf example} For a more interesting example, consider the following:
\smallskip
\hskip100pt\vbox{$(p\rightarrow r)\vee(p\rightarrow s)$
$(p\wedge r)\rightarrow q$
$s\rightarrow q$
therefore $p\rightarrow q$.}
\smallskip
\noindent Here a truth table would have to have sixteen lines and up to eleven columns (for $p$, $q$, $r$, $s$, $p\rightarrow r$, $p\rightarrow s$, $(p\rightarrow r)\vee(p\rightarrow s)$, $p\wedge r$, $(p\wedge r)\rightarrow q$, $s\rightarrow q$ and $p\rightarrow q$), so it's worth looking for alternative strategies if we want to do it by hand.
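Daunting by hand, that is; a machine scans sixteen rows instantly. Here is a Java sketch (again with names of my own choosing) which searches all assignments to $p$, $q$, $r$, $s$ for a row making every premise {\tt T} while $p\rightarrow q$ is {\tt F}:

```java
// Brute-force validation of the four-variable argument form.
public class BruteForceCheck {
    static boolean implies(boolean a, boolean b) { return !a || b; }

    // Valid iff none of the sixteen assignments makes all three premises T
    // while the conclusion p -> q is F.
    static boolean valid() {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf)
                for (boolean r : tf)
                    for (boolean s : tf) {
                        boolean prem1 = implies(p, r) || implies(p, s);
                        boolean prem2 = implies(p && r, q);
                        boolean prem3 = implies(s, q);
                        if (prem1 && prem2 && prem3 && !implies(p, q))
                            return false;
                    }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(valid()); // prints "true"
    }
}
```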
\lectureend{99/9}
One possible route uses the following steps. We need to know a couple of standard logical equivalences:
\Centerline{$(p\rightarrow r)\vee(p\rightarrow s)\equiv p\rightarrow(r\vee s)$}
\noindent (this is part of question 4 on Problem Sheet 3), and
\Centerline{$(p\wedge r)\rightarrow q\equiv p\rightarrow(r\rightarrow q)$}
\noindent (I gave this in the paragraph on `material implication'). We also need to know that the following argument form is valid:
\smallskip
\hskip100pt\vbox{$q$
therefore $p\rightarrow q$.}
\smallskip
\noindent There are lots of routes to this, but with only two variables the truth table is surely as efficient as any:
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$q$&$p\rightarrow q$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&$<$\cr
{\tt T}&{\tt F}&{\tt F}&\cr
{\tt F}&{\tt T}&{\tt T}&$<$\cr
{\tt F}&{\tt F}&{\tt T}&\cr}}$$
\noindent Or we can say
\halign{\qquad\hfil#.&\quad\hfil#\hfil&\quad#\hfil\cr
1&$q$&premise\cr
2&$(\neg p)\vee q$&disjunctive addition from (1)\cr
3&$p\rightarrow q$&logically equivalent to (2).\cr }
\noindent Let's call this argument form `weakening by a condition'. Now, equipped with these, we have
\halign{\qquad\hfil#.&\quad\hfil#\hfil&\quad#\hfil\cr
1&$(p\rightarrow r)\vee(p\rightarrow s)$&premise\cr
2&$(p\wedge r)\rightarrow q$&premise\cr
3&$s\rightarrow q$&premise\cr
4&$p\rightarrow(r\vee s)$&logically equivalent to (1)\cr
5&$p\rightarrow(r\rightarrow q)$&logically equivalent to (2)\cr
6&$p\rightarrow(s\rightarrow q)$&weakening by a condition, applied to (3)\cr
7&$p\rightarrow q$&division into cases, conditional on $p$, from (4), (5) and (6).\cr}
The trouble with this approach is that you have to know which way to go! The
idea of an argument as written out above is that you ought to be able to follow
it line-by-line. If you don't happen to know the logical equivalences used
in lines 4 and 5, you do know how to verify them (and the truth tables involved
are reasonably small, in particular, much smaller than the $16\times 11$ table
you would need for brute-force validation of the main argument); while the
trick in line 6 is in fact a standard `elementary' form which didn't happen to make it into the list of nine forms I gave. So, using the comment lines, you
should be able to see that the whole argument here is valid. But it gives no
clue how you might be able to do it on your own.
\lectureend{00/9}
\medskip
{\bf Validation by intelligent consideration of cases} I have so far described two types of validation: (i) exhaustive analysis by truth table, guaranteed to work in theory, but often far too big to be done even with the assistance of a machine; (ii) checking through an extended argument, which (if the argument is well designed) is manageable enough, but only if someone has shown you the way. For many arguments of intermediate size, however, we can find our own way through if we use our brains. As an example, I return to the same question dealt with above. The argument form is
\smallskip
\hskip100pt\vbox{$(p\rightarrow r)\vee(p\rightarrow s)$
$(p\wedge r)\rightarrow q$
$s\rightarrow q$
therefore $p\rightarrow q$.}
\smallskip
\noindent As I've already said, the full truth table for this has 16 lines and 11 columns, which is getting daunting. But of course we don't need the whole truth table. What we want to know is
\Centerline{is there any row in which every premise is {\tt T} but the conclusion is {\tt F}?}
\noindent because if there is such a row then the argument is invalid, while if there is no such row then the argument is valid. So let us try to understand what a row would have to look like if the premises
\Centerline{$(p\rightarrow r)\vee(p\rightarrow s)$,
\quad$(p\wedge r)\rightarrow q$,\quad
$s\rightarrow q$}
\noindent are all {\tt T} but the conclusion $p\rightarrow q$ is {\tt F}. Now in any such row we already know that $p$ is {\tt T} and $q$ is {\tt F}, because that's the only way in which $p\rightarrow q$ can be {\tt F}. Next, because $s\rightarrow q$ is {\tt T} but $q$ is {\tt F}, $s$ must be {\tt F}. Because $p$ is {\tt T} and $s$ is {\tt F}, $p\rightarrow s$ is {\tt F}. Because $(p\rightarrow r)\vee(p\rightarrow s)$ is {\tt T}, while $p\rightarrow s$ is {\tt F}, $p\rightarrow r$ is {\tt T}. Because $p$ and $p\rightarrow r$ are {\tt T}, $r$ is {\tt T}. Because $p$ and $r$ are both {\tt T}, $p\wedge r$ is {\tt T}. But now we have $p\wedge r$ {\tt T} and $q$ {\tt F}, so the premise $(p\wedge r)\rightarrow q$ is {\tt F}; oops!
Thus there {\it isn't} any such row, and the argument is valid.
I hope that you can follow the proof I've just set out. Of course this is another `extended argument'; I'm taking you along a path in which each step is supposed to be easy once you've been shown, but there's no promise that the path as a whole is easy to find. But I think in practice many people can find paths of this kind. The point is that we know just what we're looking for -- a row in a truth table with certain properties -- and each sentence in the argument amounts to saying a little more about what such a row would have to look like; starting from the truth value of $p\rightarrow q$, we deduce, in turn, the truth values of $p$, $q$, $s$, $p\rightarrow s$, $p\rightarrow r$, $r$, $p\wedge r$ and $(p\wedge r)\rightarrow q$. The point about these steps is that each question is an obvious one to ask; the only non-obvious thing is the order in which we should ask them. But the idea is that we don't spend very much time on any question; if we can't see the answer almost at once, we try something else. And this technique works in a great many cases.
\medskip
{\bf The paradox of logic} If you look at the last validation I gave, you'll see the word `because' appearing no fewer than six times. This is a sign that I am {\it using} one logical argument to {\it validate} another logical argument. This is what happens as soon as we come to any non-trivial bit of logic: we have to use logic to talk about logic. We have to lift ourselves by pulling on our own bootlaces. The best we can claim for the process is that the bits of logic we are using are somehow more certainly true than the bits of logic we're validating. This is a real problem, and not just a matter of the way I've arranged these lectures.
\lectureend{99/10}
\bigskip
\noindent{\bf Contradiction again}
Many logical arguments involve arguments-within-arguments. One of the
standard ways of doing this is the following. Suppose that we can't
decide whether a proposition $p$ is true. In this case, we can hunt
for consequences of $p$ {\it and} for consequences of $\neg p$, and get
a collection of facts
\Centerline{$p\rightarrow q_1,\ldots,p\rightarrow q_m$,}
\Centerline{$(\neg p)\rightarrow r_1,\ldots,(\neg p)\rightarrow r_n$.}
\noindent Now, if we are lucky, one of these $q_i$ or $r_j$ we turn up
in our search will turn out to be something we know is false. Suppose,
for instance, that we find that $(\neg p)\rightarrow r$, where we know
that $r$ is false, that is, that $\neg r$ is true. Then we can argue
as follows.
\halign{\qquad\hfil#&\quad\hfil#\hfil&\quad#\hfil\cr
1.&$(\neg p)\rightarrow r$&by previous argument\cr
2.&$\neg r$&by previous argument\cr
3.&$p$&Modus Tollens from (1) and (2).\cr}
\lectureend{00/10}
\noindent Similarly, if we can prove that $p\rightarrow q$, where $q$ is
false, then we have
\halign{\qquad\hfil#&\quad\hfil#\hfil&\quad#\hfil\cr
1.&$p\rightarrow q$&by previous argument\cr
2.&$\neg q$&by previous argument\cr
3.&$\neg p$&Modus Tollens from (1) and (2).\cr}
\noindent The thing you {\it must} remember is that
\inset{-- you can sometimes prove $p$ by looking at consequences of
$\neg p$}
\inset{-- you can sometimes prove $\neg p$ by looking at consequences of
$p$.}
\noindent But you can {\it never} prove $p$ by looking at its
consequences. If you want to know whether an animal is a cow, you
might be able to show that
\Centerline{if it's a cow, it has four legs,}
\Centerline{if it's a cow, it has horns.}
\noindent So you count the legs and look for horns. I don't say
exactly that this is silly; but it can't prove that you've got a cow
rather than a goat. It can only prove that you {\it haven't} got a
cow. To prove that you {\it have} got a cow, you need to look at what
the consequences of its {\it not} being a cow are.
Quite often the consequence of $p$ (or of $\neg p$) which we know to be
false is false because it's logically impossible, that is, is logically
equivalent to $\bot$; that's the form of `contradiction' which I listed as an elementary argument form. Another common pattern is that we find that we
can prove
\Centerline{$p\rightarrow(\neg p)$.}
\noindent In this case we deduce $\neg p$, as in one of the examples above.
\medskip
\noindent{\bf Validation again} `Proofs by contradiction' can be used for any type of logical analysis; in particular, they can be useful in validations. Suppose we have an argument form
\smallskip
\hskip100pt\vbox{$\phi_1$
$\phi_2$
$\ldots$
$\phi_n$
$\therefore\,\psi$,}
\smallskip
\noindent and wish to validate it. One method is to look at the string
\Centerline{$\phi_1$, $\phi_2,\ldots,\phi_n$, $\neg\psi$}
\noindent consisting of the premises of the argument and the {\it negation} of the conclusion, and see what we can deduce from this list. If we can find a valid argument
\smallskip
\hskip100pt\vbox{$\phi_1$
$\ldots$
$\phi_n$
$\neg\psi$
$\therefore\,\theta_1$
$\therefore\,\theta_2$
$\ldots$
$\therefore\,\theta_m$
$\therefore\,\bot$,}
\smallskip
\noindent then we can argue as follows. On any line of the truth table on which $\phi_1,\ldots,\phi_n$ are all {\tt T} but $\psi$ is {\tt F}, all the premises $\phi_1,\ldots,\phi_n$, $\neg\psi$ of {\it this} argument will be {\tt T}, so all the conclusions $\theta_1,\ldots,\theta_m$, $\bot$ of this argument will be {\tt T}; but $\bot$ is never {\tt T}, so there's no such line. Accordingly, on any line on which $\phi_1,\ldots,\phi_n$ are all {\tt T}, $\psi$ is also {\tt T}, and the original argument was valid.
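On the smallest possible example this reduction is easy to see mechanically. The Java sketch below (with names of my own invention) checks that the premises of Modus Ponens together with the negated conclusion, that is $p\rightarrow q$, $p$ and $\neg q$, cannot all be {\tt T} on any row, which is exactly what makes Modus Ponens valid:

```java
// Validation by contradiction: premises plus negated conclusion are
// jointly unsatisfiable exactly when the original argument is valid.
public class UnsatCheck {
    static boolean implies(boolean a, boolean b) { return !a || b; }

    // true iff some row makes p -> q, p and not-q simultaneously T
    static boolean premisesPlusNegatedConclusionSatisfiable() {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf)
                if (implies(p, q) && p && !q) return true;
        return false;
    }

    public static void main(String[] args) {
        // No satisfying row, so Modus Ponens itself is valid.
        System.out.println(premisesPlusNegatedConclusionSatisfiable()); // prints "false"
    }
}
```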
\medskip
\noindent{\bf The example again} This is actually very close to the `intelligent' validation given above. If we take our argument
\smallskip
\hskip100pt\vbox{$(p\rightarrow r)\vee(p\rightarrow s)$
$(p\wedge r)\rightarrow q$
$s\rightarrow q$
therefore $p\rightarrow q$,}
\smallskip
\noindent negate the conclusion, and argue from the consequent list, we can get
\halign{\qquad\hfil#.&\quad\hfil#\hfil&\quad\hfil#\hfil&\quad#\hfil\cr
1&&$(p\rightarrow r)\vee(p\rightarrow s)$\cr
2&&$(p\wedge r)\rightarrow q$\cr
3&&$s\rightarrow q$\cr
4&&$\neg(p\rightarrow q)$\cr
5&$\therefore$&$p\wedge(\neg q)$&logically equivalent to (4)\cr
6&$\therefore$&$\neg q$&conjunctive simplification from (5)\cr
7&$\therefore$&$\neg s$&Modus Tollens from (3) and (6)\cr
8&$\therefore$&$p$&conjunctive simplification from (5)\cr
9&$\therefore$&$p\wedge(\neg s)$&conjunctive addition from (8) and (7)\cr
10&$\therefore$&$\neg(p\rightarrow s)$&logically equivalent to (9)\cr
11&$\therefore$&$p\rightarrow r$&disjunctive syllogism from (1) and (10)\cr
12&$\therefore$&$r$&Modus Ponens from (11) and (8)\cr
13&$\therefore$&$p\wedge r$&conjunctive addition from (8) and (12)\cr
14&$\therefore$&$q$&Modus Ponens from (2) and (13)\cr
15&$\therefore$&$q\wedge(\neg q)$&conjunctive addition from (14) and (6)\cr
&$\therefore$&$\bot$&logically equivalent to (15).\cr}
\noindent So once again the original argument was valid.
Of course this is rather long, and in particular is longer than the seven-line validation given a couple of lectures ago. But I think it's easier to find, because it deals only with very simple propositions; after the original premises, no line has more than two atomic propositions nor more than one pair of brackets. So you have some chance of finding it by following the rule that you won't look at anything complicated, but you will check all simple possibilities.
\lectureend{99/11}
\bigskip
\noindent{\bf Example: Knights and Knaves}
A traditional type of logical puzzle uses the Island of Knights and Knaves.
On this island, everyone is either a Knight, and always tells the truth,
or a Knave, and always lies. In most of the stories, they know which
is which, but you don't. They are nearly always peaceful and
talkative, but you sometimes have to listen very carefully to be sure
you know which question they are answering. Now:
\inset{On the Island of Knights and Knaves, you meet two persons, who
speak as follows:
\inset{{\it Amos} Barnabas is a knight.
{\it Barnabas} Amos and I are of different types.}}
\noindent What can we deduce from this?
We know that
\Centerline{if Amos is a knight, then Barnabas is a knight,}
\Centerline{if Amos is a knave, then Barnabas is a knave.}
\noindent From this we can already deduce that Amos and Barnabas are of the same type; so Barnabas's statement is false, Barnabas is lying, and therefore both are knaves.
\lectureend{00/11}
Formally, the argument goes as follows. If $p$ stands for `Amos is a knight' and $q$ for `Barnabas is a knight', then we have premises
\Centerline{$p\leftrightarrow q$,
\quad$q\leftrightarrow(p\leftrightarrow(\neg q))$,}
\noindent and our conclusion is
\Centerline{$(\neg p)\wedge(\neg q)$.}
\noindent But this is easy, because we have the truth table
$$\vbox{\offinterlineskip
\halign{\hfil#\hfil\strut
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\ttablerule\hfil#\hfil
&\hfil#\hfil\cr
$p$&$q$&$p\leftrightarrow q$&$p\leftrightarrow(\neg q)$
&$q\leftrightarrow(p\leftrightarrow(\neg q))$&$(\neg p)\wedge(\neg q)$\cr
\noalign{\hrule}
{\tt T}&{\tt T}&{\tt T}&{\tt F}&{\tt F}&{\tt F}\cr
{\tt T}&{\tt F}&{\tt F}&{\tt T}&{\tt F}&{\tt F}\cr
{\tt F}&{\tt T}&{\tt F}&{\tt T}&{\tt T}&{\tt F}\cr
{\tt F}&{\tt F}&{\tt T}&{\tt F}&{\tt T}&{\tt T}&$<$\cr
}}$$
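Puzzles of this kind amount to searching a truth table for the rows on which every islander's statement comes out {\tt T}. The Java sketch below (names of my own choosing) encodes the two statements and prints every consistent assignment; only the all-knaves row survives:

```java
// Search the four assignments for those consistent with both statements.
public class KnightsAndKnaves {
    static boolean iff(boolean a, boolean b) { return a == b; }

    // p means "Amos is a knight", q means "Barnabas is a knight".
    // Amos's statement gives p <-> q; Barnabas's gives q <-> (p <-> not q).
    static boolean consistent(boolean p, boolean q) {
        return iff(p, q) && iff(q, iff(p, !q));
    }

    public static void main(String[] args) {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf)
                if (consistent(p, q))
                    System.out.println("Amos knight: " + p
                                       + ", Barnabas knight: " + q);
        // prints only "Amos knight: false, Barnabas knight: false"
    }
}
```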
\bigskip
\noindent{\bf Valid arguments and tautologies}
There is a general result which will be useful later for theoretical reasons. An argument form
\smallskip
\hskip100pt\vbox{$\phi_1$
$\ldots$
$\phi_n$
therefore $\psi$}
\smallskip
\noindent is valid iff the expression
\Centerline{$(\phi_1\wedge\phi_2\wedge\ldots\wedge\phi_n)\rightarrow\psi$}
\noindent is a tautology. To see this, consider the truth table. The argument form is valid iff $\psi$ is {\tt T} on every line on which $\phi_1,\ldots,\phi_n$ are all {\tt T}.
But these are just the lines on which the conjunction $\phi_1\wedge\ldots\wedge\phi_n$ is {\tt T}. So the argument is valid iff on every line we {\it either} have $\psi$ {\tt T} {\it or} we have $\phi_1\wedge\ldots\wedge\phi_n$ {\tt F}; but this just means that $(\phi_1\wedge\ldots\wedge\phi_n)\rightarrow\psi$ must be {\tt T} on every line, that is, is a tautology.
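As a concrete instance of this general result, the Java sketch below (illustrative names of my own) checks that for Modus Ponens the single expression $((p\rightarrow q)\wedge p)\rightarrow q$ is {\tt T} on every row, i.e. is a tautology:

```java
// A valid argument corresponds to a tautological conditional:
// check ((p -> q) and p) -> q on all four rows.
public class TautologyCheck {
    static boolean implies(boolean a, boolean b) { return !a || b; }

    // true iff the conjoined-premises conditional holds on every row
    static boolean tautology() {
        boolean[] tf = {true, false};
        for (boolean p : tf)
            for (boolean q : tf)
                if (!implies(implies(p, q) && p, q)) return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(tautology()); // prints "true"
    }
}
```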
\vfill
\filename{cc1125.tex}
\versiondate{26.1.01}
\medskip
\noindent{\bf Predicate Calculus}
We come now to the third and final part of this course. If you look at
the sentences considered in the first two parts, they are all
straightforward assertions like `$2\le 3$', `if Amos is a knight,
Barnabas is not a knave' and so on. But many of the things we were
looking at last term had phrases like `for every $n$', `there is some
$x$'. Sentences of this form are the subject of the {\bf predicate
calculus}.
The essence of these new sentences is that
\inset{(i) they contain {\it variables}, generally labelled $x$ or $n$
or $A$ to give a hint of what kind of object they refer to, but not
normally specified to the point that you know exactly which number or
set is being considered,}
\inset{(ii) they contain {\it quantifiers}, either informal (`for every
set $A$', `whenever $n$ is an integer', `we can find a real number $x$')
or formal (using the symbols $\forall$ or $\exists$) introducing the
variables.}
\noindent So instead of the argument
\smallskip
\hskip100pt\vbox{if Socrates is a man, Socrates is mortal,
Socrates is a man,
therefore Socrates is mortal,}
\smallskip
\noindent we can say
\smallskip
\hskip100pt\vbox{for every $x$, if $x$ is a man, $x$ is mortal,
Socrates is a man,
therefore Socrates is mortal,}
\smallskip
\noindent or, in less formal language,
\smallskip
\hskip100pt\vbox{all men are mortal,
Socrates is a man,
therefore Socrates is mortal.}
\smallskip
\noindent I am going to try to explain the rules of these new sentences,
and to incorporate them into the structure of the propositional calculus
and the rules of natural deduction we have already seen.
\medskip
\noindent{\bf Universal statements} are of the form
\Centerline{`$\forall\,x,\,Q(x)$',
\quad`for every $x$, $Q(x)$'.}
\noindent The idea is that the proposition $\forall\,x,\,Q(x)$ will be {\tt T}
if the proposition $Q(a)$ is {\tt T} for every $a$. If you like, the
sentence $\forall\,x,\,Q(x)$ is logically equivalent to a conjunction
\Centerline{$Q(a)\wedge Q(b)\wedge Q(c)\wedge\ldots$}
\noindent But this way of looking at it, though perhaps useful to begin
with, is very dangerous if you don't have an extremely clear notion of
what sort of $a$, $b$, $c$ to look at. If there are infinitely many
possibilities for $x$, then we seem to have an infinitely long sentence;
and if there are uncountably many possibilities, even an infinitely long
sentence will miss most of them out. So part of what I want to do here
is to set out rules which don't depend on any particular listing of the
relevant $x$'s.
Actually there is a question about whether we need to try to think about
truly universal sentences $\forall\,x,\,Q(x)$. Every universal sentence
relevant to computer science, and nearly all the sentences of
interest to mathematicians, have some restrictions on $x$ built into them.
Ordinarily we have sentences like
\Centerline{for every natural number $n$, $n^2\ge n$,}
\noindent or
\Centerline{$\forall\,n\in\Bbb N,\,n^2\ge n$,}
\noindent in which we are told at the start that the variable is a
particular type of object, or belongs to a particular set. So from now
on I'll concentrate on these, since it's generally easier to form a
mental picture of what's happening, even though the formulae come out a
few symbols longer.
\lectureend{00/12}
\medskip
\noindent{\bf Existential statements} are of the form
\Centerline{`$\exists\,x,\,Q(x)$',
\quad`there is some $x$ such that $Q(x)$'.}
\noindent Here, the proposition $\exists\,x,\,Q(x)$ will be {\tt T} if the
proposition $Q(a)$ is {\tt T} for some $a$. If you like, the sentence
$\exists\,x,\,Q(x)$ is logically equivalent to the disjunction
\Centerline{$Q(a)\vee Q(b)\vee Q(c)\vee\ldots$}
\noindent where $a$, $b,\ldots$ run through the possible things which
$x$ could be. Once again, this is dangerous if you don't know exactly
what's going on. And once again, we do usually have a definite notion
of what kind of $x$ we're looking for, so our sentences are normally of
the form
\Centerline{$\exists\,x\in\Bbb R,\,x^2=2$,}
\noindent with some set declared as the domain of the
quantifier $\exists$.
Note that {\it every} `for all' or `there is a' must be attached to some
variable; and in a complete sentence, every variable should be
introduced by a quantifier. If I just write
\Centerline{$x\ne 2$ or $x=3$}
\noindent you are left wondering whether I mean
\Centerline{for every $x$, either $x\ne 2$ or $x=3$}
\noindent (starting with a universal quantifier) or
\Centerline{there is some $x$ such that $x\ne 2$ or $x=3$}
\noindent (starting with an existential quantifier).
\lectureend{99/12}
\medskip
\noindent{\bf Negation} The first rule to establish concerning these new
sentences is the form which {\it negations} take. Recall that
\Centerline{$\neg(p\wedge q)\equiv(\neg p)\vee(\neg q)$.}
\noindent If we interpret $\forall\,x,\,Q(x)$ as a conjunction
\Centerline{$Q(a)\wedge Q(b)\wedge\ldots$,}
\noindent then it becomes natural to say that $\neg(\forall\,x,\,Q(x))$
ought to be logically equivalent to the disjunction
\Centerline{$(\neg Q(a))\vee(\neg Q(b))\vee\ldots$,}
\noindent which corresponds to the existential statement
\Centerline{$\exists\,x,\,\neg Q(x)$.}
\noindent In English, sort of: the negation of the sentence `for every
$x$, $Q(x)$' is logically equivalent to the sentence `there is some $x$
such that $\neg Q(x)$'.
The negation of the sentence
\Centerline{all cows are brown}
\noindent is equivalent to
\Centerline{there is a cow which isn't brown.}
\noindent Similarly,
\Centerline{$\neg(\exists\,x,\,Q(x))$}
\noindent is logically equivalent to
\Centerline{$\forall\,x,\,\neg Q(x)$,}
\noindent just as
\Centerline{$\neg(p\vee q)$}
\noindent is logically equivalent to
\Centerline{$(\neg p)\wedge(\neg q)$.}
\noindent The rule is:
\inset{to negate any sentence starting with a quantifier $\forall$ or
$\exists$, change the quantifier to the other one and negate the rest of
the sentence.}
This works equally well for restricted quantifiers. The negation of
the sentence
\Centerline{$\forall\,x\in\Bbb R,\,x^2\ge 0$}
\noindent is the sentence
\Centerline{$\exists\,x\in\Bbb R,\,x^2<0$.}
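These negation rules can be checked mechanically over any finite domain, since there $\forall$ and $\exists$ really are a finite conjunction and disjunction. The following sketch, in the same Java dialect as the {\tt QuickSort} example, does this for an illustrative predicate of my own ($Q(x)$: `$x$ is even') on the domain $\{0,\ldots,9\}$:

```java
import java.util.stream.IntStream;

public class NegationRule {
    // Illustrative predicate Q(x): "x is even" on the domain 0..9 (my choice)
    static boolean Q(int x) { return x % 2 == 0; }

    public static void main(String[] args) {
        boolean forallQ = IntStream.range(0, 10).allMatch(NegationRule::Q);
        boolean existsNotQ = IntStream.range(0, 10).anyMatch(x -> !Q(x));
        // not(forall x, Q(x))  <=>  exists x, not Q(x)
        if ((!forallQ) != existsNotQ) throw new AssertionError();

        boolean existsQ = IntStream.range(0, 10).anyMatch(NegationRule::Q);
        boolean forallNotQ = IntStream.range(0, 10).allMatch(x -> !Q(x));
        // not(exists x, Q(x))  <=>  forall x, not Q(x)
        if ((!existsQ) != forallNotQ) throw new AssertionError();

        System.out.println("negation rules confirmed on {0,...,9}");
    }
}
```

Here {\tt allMatch} and {\tt anyMatch} play exactly the roles of the conjunction $Q(a)\wedge Q(b)\wedge\ldots$ and the disjunction $Q(a)\vee Q(b)\vee\ldots$ described above.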
You may have noticed that I haven't been talking about truth tables
here. One of the very important philosophical differences between the
predicate calculus and the propositional calculus is that all the
logical implications of the propositional calculus, and all the
corresponding rules of natural deduction, can be supported by truth
tables, which may be uncomfortably large but are finite. The predicate
calculus doesn't deal with finite truth tables, and the rules we use are
based on analogies like the one above (treating a universal statement as
a conjunction and an existential statement as a disjunction). I am not
going to discuss such philosophical issues in this course. But it
means that there is at least a possibility that there might be
alternative logical rules for the predicate calculus; `intuitionism' is
one such alternative approach which has attracted computer
scientists.
\medskip
\noindent{\bf Universal conditional statements} A very common kind of
universal statement is of the form
\Centerline{for every $x$, if $P(x)$ then $Q(x)$.}
\noindent In fact any sentence with a restricted quantifier can be
thought of in this way: `$\forall\,n\in\Bbb N,\,n^2\ge n$' can be
interpreted as `$\forall\,n$, if $n\in\Bbb N$ then $n^2\ge n$'.
The negation of such a sentence is, or is equivalent to,
\Centerline{there is some $x$ such that $\neg(P(x)\rightarrow Q(x))$.}
\noindent Now applying the ordinary rule that
\Centerline{$\neg(p\rightarrow q)\equiv\neg(q\vee(\neg p))\equiv(\neg
q)\wedge p$,}
\noindent we can transform this into
\Centerline{there is some $x$ such that $P(x)$ and $\neg Q(x)$.}
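Again this can be tested over a finite domain. In this sketch the predicates $P$ and $Q$ are illustrative choices of mine, not anything from the text:

```java
import java.util.stream.IntStream;

public class ConditionalNegation {
    // Illustrative predicates on 0..9 (my choices): P(x): x > 4, Q(x): 3 divides x
    static boolean P(int x) { return x > 4; }
    static boolean Q(int x) { return x % 3 == 0; }

    public static void main(String[] args) {
        // forall x, P(x) -> Q(x), using p -> q  ==  (not p) or q
        boolean universal = IntStream.range(0, 10).allMatch(x -> !P(x) || Q(x));
        // exists x, P(x) and not Q(x)
        boolean counterexample = IntStream.range(0, 10).anyMatch(x -> P(x) && !Q(x));
        // each is the negation of the other, so exactly one of them holds
        if (universal == counterexample) throw new AssertionError();
        System.out.println("universal holds: " + universal
                + ", counterexample exists: " + counterexample);
    }
}
```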
\medskip
\noindent{\bf Vacuous truth} Consider the sentences
\Centerline{All Prince Charles' daughters are brunettes,}
\Centerline{All Prince Charles' daughters are blondes.}
\noindent Are these true or false? If we look at their negations, we
get
\Centerline{Prince Charles has a daughter who is not a brunette,}
\Centerline{Prince Charles has a daughter who is not a blonde.}
\noindent Now both of those (so far as I know) are false. On the
principle that for {\it any} sentence $p$, exactly one of $p$ and $\neg
p$ must be true, we are forced to conclude that both the original
sentences are true. So, putting them together, we get
\Centerline{All Prince Charles' daughters are both brunette and blonde.}
\noindent This is yet another peculiarity of formal logic. If we have
a universal statement
\Centerline{$\forall\,x,\, P(x)\rightarrow Q(x)$}
\noindent where $P(x)$ is never true, then the statement is true;
because
\Centerline{$\forall\,x,\, P(x)\rightarrow Q(x)$}
\noindent means just that
\Centerline{$\forall\,x,\, Q(x)\vee(\neg P(x))$,}
\noindent and since
\Centerline{$\forall\,x,\,\neg P(x)$,}
\noindent this is true. Similarly
\Centerline{$\forall\,x\in\emptyset,\,Q(x)$}
\noindent is necessarily true for any grammatically correct predicate
$Q(x)$.
Since this tells us rather less about $Q(x)$ than we might have hoped to
learn, it's helpful to remark that the sentence is `vacuously' true;
it's true because you can't catch any $x$'s to check. Of course the
sentence
\Centerline{All Prince Charles' daughters are both brunette and blonde}
\noindent (if true) is telling us something, not about brunettes and
blondes, but about Prince Charles.
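Vacuous truth is built into the conventions of most programming languages' standard libraries. In Java, for instance, {\tt allMatch} on an empty stream returns {\tt true} whatever the predicate is, while {\tt anyMatch} returns {\tt false}; the empty collection of `daughters' below is illustrative:

```java
import java.util.stream.Stream;

public class VacuousTruth {
    public static void main(String[] args) {
        // allMatch on an empty stream is true, whatever the predicate:
        boolean allBrunette = Stream.<String>empty().allMatch(d -> d.equals("brunette"));
        boolean allBlonde = Stream.<String>empty().allMatch(d -> d.equals("blonde"));
        if (!allBrunette || !allBlonde) throw new AssertionError();

        // while anyMatch on an empty stream is false, again whatever the predicate:
        boolean someDaughter = Stream.<String>empty().anyMatch(d -> true);
        if (someDaughter) throw new AssertionError();

        System.out.println("both universal statements are vacuously true");
    }
}
```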
\lectureend{99/13, 00/13}
\bigskip
\noindent{\bf Multiple quantifiers} As soon as we let ourselves have
quantifiers at all, we are likely to want more than one. (If we want
to talk about {\tt quicksort}, we can expect to have to deal with
sentences like `for all $m$, $n\in\Bbb N$, if {\tt left} $=m$ and {\tt
right} $=n$, then for all $k$, $l\in\Bbb N$, if {\tt lo} $=k$ and {\tt
hi} $=l$, $\ldots$'.) Typically we get sentences like
\Centerline{for every real number $x$ there is a natural number $n$ such
that $x\le n$,}
\noindent or
\Centerline{$\forall\,x\in\Bbb R,\,\exists\,n\in\Bbb N,\,x\le
n$.}
\noindent These give rise to some new problems. Many of these can be
dealt with by rigorously writing in the brackets I've left out:
\Centerline{$\forall\,x\in\Bbb R,\,\bigl(\exists\,n\in\Bbb N,\,x\le n\bigr)$.}
\noindent You have to remember which variable was introduced first, and
be very careful not to shuffle them around without careful thought.
For instance, it is true that for every real number $x$ there is a
natural number $n$ such that $x\le n$. But if we swap the quantifiers,
we get
\Centerline{$\exists\,n\in\Bbb N,\, \bigl(\forall\,x\in\Bbb
R,\,x\le n\bigr)$,}
\noindent which is false. {\it Changing the order of quantifiers is
dangerous.} Think of it as a game. In the sentence
\Centerline{$\forall\,x\in\Bbb R,\,\exists\,n\in\Bbb N,\,x\le n$,}
\noindent the $x$-player moves first and the $n$-player, having seen $x$, moves second; of course he can win. But in the sentence
\Centerline{$\exists\,n\in\Bbb N,\,\forall\,x\in\Bbb R,\,x\le n$,}
\noindent the $n$-player has to make a move which can defeat any possible reply by the $x$-player, and there's no such move. Changing the order of quantifiers is like changing the order of the moves in a game; it's liable to change the result.
Yet another way of thinking of the same thing: if you think of
\Centerline{$\Exists y,\,P(x,y)$}
\noindent as a shorthand for
\Centerline{$P(x,\alpha)\vee P(x,\beta)\vee\ldots$,}
\noindent then
\Centerline{$\Forall x,\Exists y,\,P(x,y)$}
\noindent becomes
\Centerline{$(P(a,\alpha)\vee P(a,\beta)\vee\ldots)\wedge(P(b,\alpha)\vee P(b,\beta)\vee\ldots)\wedge\ldots$,}
\noindent while
\Centerline{$\Exists y,\Forall x,\,P(x,y)$}
\noindent becomes
\Centerline{$(P(a,\alpha)\wedge P(b,\alpha)\wedge\ldots)\vee(P(a,\beta)\wedge P(b,\beta)\wedge\ldots)\vee\ldots$.}
\noindent Now even if $x$ can only be either $a$ or $b$, and $y$ can only be either $\alpha$ or $\beta$, then we have expressions like
\Centerline{$(p\vee q)\wedge(r\vee s)$,
\quad$(p\wedge r)\vee(q\wedge s)$,}
\noindent and these are not logically equivalent; the second implies the first, but not the other way about.
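The game-playing picture can be acted out in code. In the sketch below the predicate $P(x,y)$, meaning `$x=y$' on the domain $\{0,1,2\}$, is my own illustrative choice; the $y$-player who moves second can always copy $x$, but no single $y$ works against every $x$:

```java
import java.util.stream.IntStream;

public class QuantifierOrder {
    static final int N = 3; // domain {0, 1, 2}, an illustrative choice

    // Illustrative predicate P(x, y): x = y
    static boolean P(int x, int y) { return x == y; }

    public static void main(String[] args) {
        // forall x, exists y, P(x,y): the y-player sees x and can copy it
        boolean forallExists = IntStream.range(0, N)
                .allMatch(x -> IntStream.range(0, N).anyMatch(y -> P(x, y)));
        // exists y, forall x, P(x,y): one y must defeat every possible x
        boolean existsForall = IntStream.range(0, N)
                .anyMatch(y -> IntStream.range(0, N).allMatch(x -> P(x, y)));

        if (!forallExists || existsForall) throw new AssertionError();
        System.out.println("swapping the quantifiers changed true into false");
    }
}
```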
Apart from this, a lot of the rules we've already had still work. If
we want to negate a sentence with a string of quantifiers, for instance,
we can do it as follows
\Centerline{$\neg\bigl(\exists\,n\in\Bbb N,\,(\forall\,x\in\Bbb R,\,x\le n)\bigr)$}
\noindent is logically equivalent to
\Centerline{$\forall\,n\in\Bbb N,\,\neg\bigl(\forall\,x\in\Bbb R,\,x\le
n\bigr)$}
\noindent which is logically equivalent to
\Centerline{$\forall\,n\in\Bbb N,\,\bigl(\exists\,x\in\Bbb R,\,\neg(x\le n)\bigr)$}
\noindent which is equivalent to
\Centerline{$\forall\,n\in\Bbb N,\,\bigl(\exists\,x\in\Bbb R,\,n<x\bigr)$.}
\medskip
{\bf Case study 4} Consider the argument form
\smallskip
\hskip100pt\vbox{$\forall\,x\in A,\,P(x)\rightarrow Q(x)$,
$\therefore\,\exists\,x\in A,\,Q(x)$.}
\smallskip
\noindent One way of reading this is as the argument `for every real number $x$, if $x^4>0$ then $x^5>0$, therefore there is a real number $x$ such that $x^5>0$', in which $A$ stands for $\Bbb R$, $P(x)$ stands for `$x^4>0$', and $Q(x)$ stands for `$x^5>0$'. And put like that it won't do at all. For a start, $A$ could be the empty set, so that any sentence starting `$\forall\,x\in A,\,\ldots$' would be true, and any sentence starting `$\exists\,x\in A,\,\ldots$' would be false. Or, closer to the original expression (since we know very well that $\Bbb R$ is non-empty) we might have
\smallskip
\hskip100pt\vbox{if $x$ is real and $x^4<0$, then $x^6<0$,
therefore there is a real number $x$ such that $x^6<0$.}
\smallskip
\noindent Here the premise is true but the conclusion is false; so the argument is untrustworthy.
\lectureend{00/17}
\medskip
{\bf Case study 5} Consider the argument form
\smallskip
\hskip100pt\vbox{$\forall\,x\in A,\,\exists\,y\in A,\,P(x,y)$,
$\forall\,x\in A,\,\forall\,y\in A,\,\forall\,z\in A,\,(P(x,z)\wedge P(y,z))\rightarrow (x=y)$,
$\therefore\,\forall\,y\in A,\,\exists\,x\in A,\,P(x,y)$.}
\smallskip
\noindent Is this valid?
If you really can't guess the answer to a question like this, it can be very hard indeed. To {\it validate} the argument you will either have to find some more or less philosophical approach to make it `obvious', or reduce it to one of the forms you have already seen. To {\it invalidate} it you will have to find some example in which the premises are true and the conclusion false, and you have very little idea where to look. What makes things much worse is not even knowing which of these you have to do!
Usually, faced with such a dilemma, the thing to do is to try the two approaches alternately. Just occasionally one finds that one's mind can engage a problem with parallel processing, tackling it from both sides at once, and seeing which side wins. But this rarely works with anything which requires a new idea. Normally, when you can see two different ways of tackling a problem, and don't know which to use, you should make a serious effort with one of them, and turn to the other when the first seems to be getting nowhere.
As it happens, the argument here is invalid, so any attempt to validate it is bound to fail, and I won't try to explore what a validation would look like if there were one. Instead, I will give just the approach which starts from the assumption that the argument is invalid and hunts for an example to demonstrate this.
The first step is to use a technique which we have already seen in `natural deduction'. The argument is invalid if there is an example in which the premises are true but the conclusion is false, that is, the negation of the conclusion is true. So write down what the negation of the conclusion is:
\Centerline{$\neg(\forall\,y\in A,\,\exists\,x\in A,\,P(x,y))$.}
\noindent Now we can move the $\neg$ past the $\forall$ and the $\exists$, as usual, to get
\Centerline{$\exists\,y\in A,\,\forall\,x\in A,\,\neg P(x,y)$,}
\noindent and we are hunting for an example in which this is true, together with the original premises
\Centerline{$\forall\,x\in A,\,\exists\,y\in A,\,P(x,y)$,}
\Centerline{$\forall\,x\in A,\,\forall\,y\in A,\,\forall\,z\in A,\,(P(x,z)\wedge P(y,z))\rightarrow (x=y)$.}
\noindent At this point it is worth noting the following. For any statement $P(x,y)$ about two things, there is a corresponding relation $R$, where we write $x\,R\,y$ if $P(x,y)$ is true. If we re-write the hypotheses in this language, we have
\Centerline{there is some $y\in A$ such that $x\,\notR\,y$ for every $x\in A$,}
\Centerline{for every $x\in A$ there is some $y\in A$ such that $x\,R\,y$,}
\Centerline{for all $x$, $y$ and $z$ in $A$, if $x\,R\,z$ and $y\,R\,z$ then $x=y$.}
\noindent This is worth while because it gives us a chance to use ideas from CC111. For instance, the hypothesis
\Centerline{for every $x\in A$ there is some $y\in A$ such that $x\,R\,y$}
\noindent was actually given a name there; it says that $A$ is included in the `domain' of the relation $R$. Spotting this kind of correspondence may not solve a problem, but it will help a great deal in telling you what sort of ideas may be useful.
Now if we think of $R$ as a relation, we can try to look at its incidence matrix $M_R$. What does $M_R$ have to look like? Consider the negated conclusion:
\Centerline{there is some $y\in A$ such that $x\,\notR\,y$ for every $x\in A$.}
\noindent What this says is that there is some column of $M_R$ in which every entry is $0$. (Remember that the $(x,y)$-entry of $M_R$ lies in the $x$-row and the $y$-column, and is $1$ if $x\,R\,y$, $0$ if $x\,\notR\,y$.) Since there is at least one column of zeroes, we might as well put it at the beginning:
$$\Matrix{0&.&.&.&.\cr
0&.&.&.&.\cr
0&.&.&.&.\cr
0&.&.&.&.\cr
0&.&.&.&.\cr}$$
\noindent Next, the premise
\Centerline{for every $x\in A$ there is some $y\in A$ such that $x\,R\,y$}
\noindent declares that in every {\it row} there is at least one $1$. In particular, there's a $1$ in the top row; let's put it in the first available spot:
$$\Matrix{0&1&.&.&.\cr
0&.&.&.&.\cr
0&.&.&.&.\cr
0&.&.&.&.\cr
0&.&.&.&.\cr}$$
\noindent If we now look at the other premise
\Centerline{for all $x$, $y$ and $z$ in $A$, if $x\,R\,z$ and $y\,R\,z$ then $x=y$,}
\noindent this says that in any column (corresponding to a fixed $z$) there can be at most one $1$. So we can fill in some more of our matrix: the rest of the second column must be zeroes:
$$\Matrix{0&1&.&.&.\cr
0&0&.&.&.\cr
0&0&.&.&.\cr
0&0&.&.&.\cr
0&0&.&.&.\cr}$$
\noindent Returning to the premise
\Centerline{for every $x\in A$ there is some $y\in A$ such that $x\,R\,y$,}
\noindent we see that there has to be a $1$ in the second row; put it in the first blank slot:
$$\Matrix{0&1&.&.&.\cr
0&0&1&.&.\cr
0&0&.&.&.\cr
0&0&.&.&.\cr
0&0&.&.&.\cr}.$$
\noindent Remember that no column can have more than one $1$, so that the rest of the third column's entries are $0$,
$$\Matrix{0&1&0&.&.\cr
0&0&1&.&.\cr
0&0&0&.&.\cr
0&0&0&.&.\cr
0&0&0&.&.\cr},$$
\noindent that there must be a $1$ in the third row,
$$\Matrix{0&1&0&.&.\cr
0&0&1&.&.\cr
0&0&0&1&.\cr
0&0&0&.&.\cr
0&0&0&.&.\cr},$$
\noindent the rest of the fourth column must be zeroes,
$$\Matrix{0&1&0&0&.\cr
0&0&1&0&.\cr
0&0&0&1&.\cr
0&0&0&0&.\cr
0&0&0&0&.\cr},$$
\noindent there must be a $1$ in the fourth row,
$$\Matrix{0&1&0&0&.\cr
0&0&1&0&.\cr
0&0&0&1&.\cr
0&0&0&0&1\cr
0&0&0&0&.\cr},$$
\noindent and the rest of the fifth column must be zeroes, and look at what we've got:
$$\Matrix{0&1&0&0&0\cr
0&0&1&0&0\cr
0&0&0&1&0\cr
0&0&0&0&1\cr
0&0&0&0&0\cr}.$$
\noindent Is this a relation of the type we're looking for? It's got a blank column, and no column has more than one $1$ in it; that is, the two requirements
\Centerline{there is some $y\in A$ such that $x\,\notR\,y$ for every $x\in A$,}
\Centerline{for all $x$, $y$ and $z$ in $A$, if $x\,R\,z$ and $y\,R\,z$ then $x=y$}
\noindent are satisfied. But there's also a blank row, so the requirement
\Centerline{for every $x\in A$ there is some $y\in A$ such that $x\,R\,y$}
\noindent hasn't been achieved. But remember at this point that we're not limited to a $5\times 5$ matrix. If we give ourselves an extra row and column we can get a $1$ into the fifth row, as follows:
$$\Matrix{0&1&0&0&0&0\cr
0&0&1&0&0&0\cr
0&0&0&1&0&0\cr
0&0&0&0&1&0\cr
0&0&0&0&0&1\cr
0&0&0&0&0&0\cr}.$$
\noindent Of course this just shifts the problem to the sixth row, and we have to do it again:
$$\Matrix{
0&1&0&0&0&0&0\cr
0&0&1&0&0&0&0\cr
0&0&0&1&0&0&0\cr
0&0&0&0&1&0&0\cr
0&0&0&0&0&1&0\cr
0&0&0&0&0&0&1\cr
0&0&0&0&0&0&0\cr
}.$$
\noindent At this point we need a flash of light. {\it What happens if we go on for ever?} Why not have an {\it infinite} set $A$? I know that in CC111 I never tried to draw an incidence matrix for a relation on an infinite set, but that's no reason to refuse to do it now. Imagine that we simply carried on:
$$\Matrix{
0&1&0&0&0&0&0&\ldots \cr
0&0&1&0&0&0&0&\ldots \cr
0&0&0&1&0&0&0&\ldots \cr
0&0&0&0&1&0&0&\ldots \cr
0&0&0&0&0&1&0&\ldots \cr
0&0&0&0&0&0&1&\ldots \cr
0&0&0&0&0&0&0&\ldots \cr
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\cr
}.$$
\noindent The pattern is clear enough: a diagonal line of $1$'s and the rest $0$'s.
With the pattern established, we can try to construct an actual formula to match it. Since the set $A$ is infinite, let's make it the simplest infinite set, $\Bbb N$. Then we can annotate our matrix:
$$\bordermatrix{
&\scriptstyle{0}&\scriptstyle{1}&\scriptstyle{2}&\scriptstyle{3}
&\scriptstyle{4}&\scriptstyle{5}&\scriptstyle{6}&\ldots\cr
\scriptstyle{0}&0&1&0&0&0&0&0&\ldots \cr
\scriptstyle{1}&0&0&1&0&0&0&0&\ldots \cr
\scriptstyle{2}&0&0&0&1&0&0&0&\ldots \cr
\scriptstyle{3}&0&0&0&0&1&0&0&\ldots \cr
\scriptstyle{4}&0&0&0&0&0&1&0&\ldots \cr
\scriptstyle{5}&0&0&0&0&0&0&1&\ldots \cr
\scriptstyle{6}&0&0&0&0&0&0&0&\ldots \cr
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\cr
}.$$
\noindent What relation does this correspond to? We have $0\,R\,1$, $1\,R\,2$, $2\,R\,3$, $\ldots$; that is, $m\,R\,(m+1)$ for every $m\in\Bbb N$, while $m\,\notR\,n$ for all other $m$ and $n$. Going back to the predicate $P(x,y)$, what this seems to mean is that
$P(x,y)$ if, and only if, $y=x+1$. In this case we have
\Centerline{there is some $y\in A$ such that $\neg P(x,y)$ for every $x\in A$,}
\noindent because $0\ne x+1$ for every $x\in\Bbb N$,
\Centerline{for all $x$, $y$ and $z$ in $A$, if $P(x,z)$ and $P(y,z)$ then $x=y$}
\noindent because if $z=x+1=y+1$ then $x=y$, and
\Centerline{for every $x\in A$ there is some $y\in A$ such that $P(x,y)$}
\noindent because we can take $y=x+1$ every time. Thus
\Centerline{$\forall\,x\in A,\,\exists\,y\in A,\,P(x,y)$}
\noindent and
\Centerline{$\forall\,x\in A,\,\forall\,y\in A,\,\forall\,z\in A,\,(P(x,z)\wedge P(y,z))\rightarrow (x=y)$}
\noindent are true, but (because of the empty first column)
\Centerline{$\forall\,y\in A,\,\exists\,x\in A,\,P(x,y)$}
\noindent is false, and the argument is invalid.
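The counter-example can also be checked by machine, at least on an initial segment of $\Bbb N$ (a computer can no more inspect all of $\Bbb N$ than we can draw the whole matrix; the bound below is an arbitrary choice of mine):

```java
import java.util.stream.IntStream;

public class SuccessorExample {
    // P(x, y) on N: y = x + 1
    static boolean P(int x, int y) { return y == x + 1; }

    public static void main(String[] args) {
        int bound = 100; // arbitrary finite segment of N to inspect

        // premise: forall x, exists y, P(x,y) -- take y = x + 1
        boolean total = IntStream.range(0, bound).allMatch(x -> P(x, x + 1));

        // premise: P(x,z) and P(y,z) together imply x = y
        boolean injective = IntStream.range(0, bound).allMatch(x ->
                IntStream.range(0, bound).allMatch(y ->
                        IntStream.rangeClosed(0, bound).allMatch(z ->
                                !(P(x, z) && P(y, z)) || x == y)));

        // negated conclusion: y = 0 has no predecessor in N
        boolean zeroReached = IntStream.range(0, bound).anyMatch(x -> P(x, 0));

        if (!total || !injective || zeroReached) throw new AssertionError();
        System.out.println("premises hold, but no x has P(x, 0)");
    }
}
```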
I have written this out at such length to try to show you that there is a route to the answer which depends on determination rather than inspiration. You have to be absolutely clear about what you are looking for, but building examples of this kind is rather like writing programs: once you have the specification clear and you have mastered the language, you can hope to get there in the end. The other feature of the argument is the fact that there is no {\it finite} set $A$ which will do. Any example will necessarily involve an infinite set. However the set $\Bbb N$, the simplest infinite set, is enough; and as it happens this is true for {\it any} example in the predicate calculus\footnote{This is a form of the {\bf L\"owenheim-Skolem theorem}, which is the first basic theorem in model theory.}.
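The claim that no {\it finite} set $A$ will do can be confirmed by brute force for small sets. The sketch below (the size bound of four is my own choice) enumerates every relation on a set of $n$ elements and checks that none satisfies the two premises together with the negated conclusion; the underlying counting argument is that the premises force at least $|A|$ entries equal to $1$ in the incidence matrix, while an all-zero column with every other column carrying at most one $1$ allows at most $|A|-1$.

```java
public class NoFiniteExample {
    // Premise 1: every row of the incidence matrix has at least one 1
    static boolean everyRowHasOne(boolean[][] r, int n) {
        for (int x = 0; x < n; x++) {
            boolean hit = false;
            for (int y = 0; y < n; y++) hit |= r[x][y];
            if (!hit) return false;
        }
        return true;
    }

    // Premise 2: no column has more than one 1
    static boolean columnsAtMostOne(boolean[][] r, int n) {
        for (int z = 0; z < n; z++) {
            int count = 0;
            for (int x = 0; x < n; x++) if (r[x][z]) count++;
            if (count > 1) return false;
        }
        return true;
    }

    // Negated conclusion: some column is entirely 0
    static boolean someColumnEmpty(boolean[][] r, int n) {
        for (int y = 0; y < n; y++) {
            boolean any = false;
            for (int x = 0; x < n; x++) any |= r[x][y];
            if (!any) return true;
        }
        return false;
    }

    // Try all 2^(n*n) incidence matrices on an n-element set
    static boolean exampleExists(int n) {
        int cells = n * n;
        for (long bits = 0; bits < (1L << cells); bits++) {
            boolean[][] r = new boolean[n][n];
            for (int c = 0; c < cells; c++) r[c / n][c % n] = ((bits >> c) & 1) != 0;
            if (everyRowHasOne(r, n) && columnsAtMostOne(r, n) && someColumnEmpty(r, n))
                return true;
        }
        return false;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 4; n++)
            if (exampleExists(n)) throw new AssertionError("example found for n = " + n);
        System.out.println("no finite example with 1 to 4 elements");
    }
}
```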
\inset{I ought to remark, for the sake of anyone who might wish to go farther with these ideas, that the formula `$x=y$' which appears in the formal statements above gives a good deal of trouble if you look at it too hard. However there are no real traps in it, so it will do no harm if you take it as `obvious' what it means.}
\lectureend{00/18}
\medskip
{\bf Case study 6} Here is another theorem from elementary set theory:
\inset{`For any relations $R$ and $S$, $(R\circ S)^{-1}=S^{-1}\circ R^{-1}$'.}
\noindent What would a proof of this in the language of the predicate calculus look like?
First, as usual, let's try to set out a mathematical argument without forcing ourselves into rigid formulae.
We have to remember what $\circ$ and $^{-1}$ mean in this context:
\Centerline{$x(R\circ S)y$ iff there is a $z$ such that $x\,R\,z$ and $z\,S\,y$,}
\Centerline{$x\,R^{-1}\,y$ iff $y\,R\,x$.}
\noindent But there's also the question of what we mean when we say that two relations are equal: we say that
\Centerline{$R=S$ if $x\,R\,y$ just when $x\,S\,y$.}
\noindent So a mathematical proof might go something like this:
$$\eqalign{x(R\circ S)^{-1}y
&\iff y(R\circ S)x\cr
&\iff\text{ there is some }z\text{ such that }y\,R\,z\text{ and }z\,S\,x\cr
&\iff\text{ there is some }z\text{ such that }z\,R^{-1}\,y
\text{ and }x\,S^{-1}\,z\cr
&\iff\text{ there is some }z\text{ such that }x\,S^{-1}\,z
\text{ and }z\,R^{-1}\,y\cr
&\iff x(S^{-1}\circ R^{-1})y.\cr}$$
Now the translations of the basic terms are all expressible in the predicate calculus, so we have some hope of sorting things out according to the rules of this course.
After translation, the formula is something like
\Centerline{$\forall\,R,\,\forall\,S,\,(\Cal R(R)\wedge\Cal R(S))
\rightarrow((R\circ S)^{-1}=S^{-1}\circ R^{-1})$,}
\noindent where I write `$\Cal R(R)$' to mean `$R$ is a relation'. Now this expands to
\Centerline{$\forall\,R,\,\forall\,S,\,(\Cal R(R)\wedge\Cal R(S))
\rightarrow(\forall\,x,\,\forall\,y,\,x(R\circ S)^{-1}y\leftrightarrow
x(S^{-1}\circ R^{-1})y)$,}
\noindent which is itself shorthand for
$$\eqalign{\forall\,R,\,\forall\,S,\,&(\Cal R(R)\wedge\Cal R(S))
\rightarrow(\forall\,x,\,\forall\,y,\,\cr
&y(R\circ S)x\leftrightarrow
(\exists\,z,\,x\,S^{-1}\,z\,\wedge\,z\,R^{-1}\,y)),\cr}$$
\noindent and expands again to
$$\eqalign{\forall\,R,\,\forall\,S,\,& (\Cal R(R)\wedge\Cal R(S))
\rightarrow(\forall\,x,\,\forall\,y,\,\cr
&(\exists\,z,\, y\,R\,z\,\wedge\,z\,S\,x)\leftrightarrow
(\exists\,z,\,z\,S\,x\,\wedge\,y\,R\,z)).\cr}$$
\noindent Put like this it seems obvious. Surely the two things on either side of the $\leftrightarrow$, $\exists\,z,\,y\,R\,z\,\wedge\,z\,S\,x$ and
$\exists\,z,\,z\,S\,x\,\wedge\,y\,R\,z$, are exactly the same? Well, not {\it exactly}; we are relying on the commutativity of $\wedge$, that is, the logical equivalence of $p\wedge q$ and $q\wedge p$. Now we have known for ages that
\Centerline{$(p\wedge q)\leftrightarrow(q\wedge p)$}
\noindent is a tautology, that is, is necessarily true whatever $p$ and $q$ happen to be. But the expression we have in the middle of the last formula is
\Centerline{$(\exists\,z,\,P(z)\wedge Q(z))\leftrightarrow(\exists\,z,\,Q(z)\wedge P(z))$,}
\noindent and we ought to check that this is equally universally true.
What in fact is true is that for any tautology
\Centerline{$\phi(z)\rightarrow\psi(z)$}
\noindent of the predicate calculus, the expression
\Centerline{$(\exists\,z,\,\phi(z))\rightarrow(\exists\,z,\,\psi(z))$}
\noindent is also a tautology. For if $\psi(z)$ is necessarily true, for logical reasons, whenever $\phi(z)$ is true, and if there is some $z$ such that $\phi(z)$ is true, then $\psi(z)$ will be true for that $z$, possibly among others. Similarly, if
\Centerline{$\phi(z)\leftrightarrow\psi(z)$}
\noindent is a tautology, that is, if $\phi(z)$ is true exactly when $\psi(z)$ is true, then there will be some $z$ making one true iff there is a $z$ making the other true, and
\Centerline{$(\exists\,z,\,\phi(z))\leftrightarrow(\exists\,z,\psi(z))$}
\noindent will be a tautology. Applying this with $\phi(z)$ standing for
$y\,R\,z\,\wedge\,z\,S\,x$ and $\psi(z)$ standing for
$z\,S\,x\,\wedge\,y\,R\,z$, we see that the central formula
\Centerline{$(\exists\,z,\,y\,R\,z\,\wedge\,z\,S\,x)\leftrightarrow
(\exists\,z,\,z\,S\,x\,\wedge\,y\,R\,z)$}
\noindent is a tautology. Since $p\rightarrow q$ is true whenever $q$ is true, we can be sure that
\Centerline{$(\Cal R(R)\wedge\Cal R(S))
\rightarrow(\forall\,x,\,\forall\,y,\,
(\exists\,z,\, y\,R\,z\,\wedge\,z\,S\,x)\leftrightarrow
(\exists\,z,\,z\,S\,x\,\wedge\,y\,R\,z))$}
\noindent is also universally true, and therefore that
$$\eqalign{\forall\,R,\,\forall\,S,\,& (\Cal R(R)\wedge\Cal R(S))
\rightarrow(\forall\,x,\,\forall\,y,\,\cr
&(\exists\,z,\, y\,R\,z\,\wedge\,z\,S\,x)\leftrightarrow
(\exists\,z,\,z\,S\,x\,\wedge\,y\,R\,z))\cr}$$
\noindent is universally true, that is, is a theorem.
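The theorem of this case study can also be spot-checked computationally, representing a relation as a finite set of ordered pairs; the particular relations $R$ and $S$ below are illustrative choices of mine:

```java
import java.util.*;

public class RelationIdentity {
    // A relation here is a set of ordered pairs, encoded as two-element lists
    static Set<List<Integer>> relation(int[][] pairs) {
        Set<List<Integer>> r = new HashSet<>();
        for (int[] p : pairs) r.add(List.of(p[0], p[1]));
        return r;
    }

    // x (R o S) y  iff  there is some z such that x R z and z S y
    static Set<List<Integer>> compose(Set<List<Integer>> R, Set<List<Integer>> S) {
        Set<List<Integer>> out = new HashSet<>();
        for (List<Integer> xz : R)
            for (List<Integer> zy : S)
                if (xz.get(1).equals(zy.get(0)))
                    out.add(List.of(xz.get(0), zy.get(1)));
        return out;
    }

    // x R^{-1} y  iff  y R x
    static Set<List<Integer>> inverse(Set<List<Integer>> R) {
        Set<List<Integer>> out = new HashSet<>();
        for (List<Integer> p : R) out.add(List.of(p.get(1), p.get(0)));
        return out;
    }

    public static void main(String[] args) {
        // illustrative relations, not from the notes
        Set<List<Integer>> R = relation(new int[][]{{1, 2}, {1, 3}, {2, 3}});
        Set<List<Integer>> S = relation(new int[][]{{2, 5}, {3, 5}, {3, 6}});
        if (!inverse(compose(R, S)).equals(compose(inverse(S), inverse(R))))
            throw new AssertionError();
        System.out.println("(R o S)^{-1} = S^{-1} o R^{-1} on this example");
    }
}
```

Of course a check on one pair of relations is no substitute for the proof above; it only confirms that nothing has gone wrong in the translation.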
\bigskip
\filename{cc1126.tex}
\versiondate{26.1.01}
\noindent{\bf Induction again} At last we have a language capable of
expressing the principles of induction. The First Principle is the
argument form:
\smallskip
\hskip100pt\vbox{$P(0)$
$\forall\,n\in\Bbb N$, $P(n)\rightarrow P(n+1)$
$\therefore\,\forall\,n\in\Bbb N$, $P(n)$.}
\smallskip
\noindent While the Second Principle has two quantifiers in it:
\inset{$P(0)$
$\forall\,n\in\Bbb N$,
$\bigl(\forall\,i\in\Bbb N$,
$((0\le i)\wedge(i<n))\rightarrow P(i)\bigr)\rightarrow P(n)$
$\therefore\,\forall\,n\in\Bbb N$, $P(n)$.}
\medskip
\noindent{\bf {\tt partition} again} Let us go back to the method {\tt partition}. The first thing to notice is that {\tt partition} never touches any entry outside its working range: {\tt qslist[$i$]} is left exactly as it was whenever $i<$ {\tt left} or $i>$ {\tt right}.
In between {\tt left} and {\tt right}, of course, things are moved around. It's perhaps easier to see what the line
\Centerline{\tt qslist[lo]=qslist[hi]; ++lo; qslist[hi]=qslist[lo];}
\noindent does if we think of it as if it were the exactly equivalent code
\Centerline{\tt qslist[lo]=qslist[hi]; qslist[hi]=qslist[lo+1]; ++lo;}
\noindent This shifts the value {\tt qslist[hi]} down to the location {\tt qslist[lo]} and the value {\tt qslist[lo+1]} up to the location {\tt qslist[hi]}, before incrementing {\tt lo}. Now the clever thing about this is that the {\it old} {\tt qslist[lo]} is overwritten and the {\it new} {\tt qslist[lo]} has been copied into {\tt qslist[hi]}. The point is that {\tt qslist[lo]} is treated as a vacant slot. It's not carrying anything we ever look at after the original definition of {\tt pivot} as {\tt qslist[left]}. One way to think of this is to regard {\tt pivot} as the `real' value of {\tt qslist[lo]}, and the actual number stored in the location {\tt qslist[lo]} as accidental junk. If we look at it this way, we can formulate an assertion
\inset{after $n$ executions of the {\tt while} loop, if we replace {\tt qslist[lo]} by {\tt pivot}, then the numbers in the string {\tt qslist[left]}, $\ldots$, {\tt qslist[right]} are exactly what they were when we entered {\tt partition}, but possibly in a different order.}
\noindent Now this is something we can set out to prove by induction on $n$. It's obviously true when $n=0$, because up to the first entry to the {\tt while} loop we don't touch {\tt qslist[]} at all. For the inductive step, we look at what happens as we pass through the {\tt while} loop. It may be that we don't change {\tt qslist[]}, so there's nothing more to check. If we do change {\tt qslist[]}, it's by moving {\tt qslist[hi]} to the old blank space {\tt qslist[lo]}, moving {\tt qslist[lo+1]} to {\tt qslist[hi]}, and incrementing {\tt lo}, so that the loose number {\tt pivot} is correctly pointed at the new blank space. Since on exiting {\tt partition} we put {\tt pivot} back into place in {\tt qslist[lo]}, we can say that
\inset{on exiting {\tt partition} the
numbers in the string {\tt qslist[left]}, $\ldots$, {\tt qslist[right]} are exactly what they were when we entered, but possibly in a different order.}
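The assertion just proved can be tested directly: run the {\tt partition} method from the start of these notes on a sample list and check that the result is a rearrangement of the input (the sample data are my own):

```java
import java.util.Arrays;

public class PartitionPermutes {
    static boolean goodorder(int i, int j, double[] num) { return num[i] <= num[j]; }

    // the partition method of the notes, transcribed into standalone Java
    static int partition(int left, int right, int[] qslist, double[] num) {
        int pivot = qslist[left]; int lo = left; int hi = right;
        while (lo < hi) {
            if (goodorder(pivot, qslist[hi], num)) --hi;
            else { qslist[lo] = qslist[hi]; ++lo; qslist[hi] = qslist[lo]; }
        }
        qslist[lo] = pivot;
        return lo;
    }

    public static void main(String[] args) {
        double[] num = {3.0, 1.0, 4.0, 1.5, 9.0, 2.6}; // illustrative keys
        int[] qslist = {0, 1, 2, 3, 4, 5};
        int[] before = qslist.clone();

        partition(0, qslist.length - 1, qslist, num);

        // the same indices as before, possibly in a different order
        int[] a = before.clone(); int[] b = qslist.clone();
        Arrays.sort(a); Arrays.sort(b);
        if (!Arrays.equals(a, b)) throw new AssertionError();
        System.out.println(Arrays.toString(qslist));
    }
}
```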
Oof. Next: in what way is the `new' order an improvement on the `old' order? The target was to ensure that, for {\tt left} $\le i<$ {\tt dp}, {\tt qslist[$i$]} and {\tt qslist[dp]} should be in the right order, that is, that {\tt goodorder(qslist[$i$],qslist[dp],num)} is \ttT; it will make the rest of this argument much simpler if I write $i\preccurlyeq j$ to mean {\tt goodorder($i$,$j$,num)} is \ttT (in the execution I gave of {\tt quicksort}, this just means {\tt num[$i$]} $\le$ {\tt num[$j$]}). We also want to leave {\tt partition} with {\tt qslist[dp]} $\preccurlyeq$ {\tt qslist[$i$]} whenever {\tt dp} $