Logic: tautologies

I never planned on writing this post, and although it’s not complicated (says I, who wrote it!), it was time-consuming. It grew out of my confirming that Copi & Cohen and Gensler (both books are titled “Introduction to Logic”) had exactly the same two lists of rules and tautologies. That suggested the lists were fairly standard.

For what it’s worth, Copi (“Symbolic Logic”) has one different rule, for the same total of 9; and Hummel (“Introductory Concepts for Abstract Mathematics”) has several more which are particularly suited to mathematics. Gensler also presents the material in another way, as one set of rules for simplifying expressions and another for making inferences.

(All those books are in the bibliography, and were reviewed in the two most recent “Books Added”, here and here.)

The following can be found in Copi & Cohen, or in Gensler (although I have written them differently from both texts, and in a different order). On my computer, this fits on one screen.

Here are 9 rules of inference.

conjunction. I have to write this differently from all the following — because I’m going to use it implicitly for the compact presentations (and we use it implicitly in proofs).
A
B
\therefore A \land B

modus (ponendo) ponens: (A\Rightarrow B)\land A\Rightarrow B

modus (tollendo) tollens: (A\Rightarrow B)\land \neg B\Rightarrow \neg A

disjunctive syllogism (modus tollendo ponens): (A\lor B)\land \neg A\Rightarrow B

hypothetical syllogism (transitivity): (A\Rightarrow B)\land (B\Rightarrow C)\Rightarrow (A\Rightarrow C)

absorption: (A\Rightarrow B)\Rightarrow (A\Rightarrow A\land B)

simplification: A\land B\Rightarrow A

addition: A\Rightarrow A\lor B

constructive dilemma: (A\Rightarrow C)\land (B\Rightarrow D)\land (A\lor B)\Rightarrow C\lor D
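
Since I’ve written each rule as a single material implication, every one of them can be checked mechanically with a truth table. Here is a quick Python sketch of my own (the helper names `implies` and `tautology` are mine, not from any of the texts); conjunction is left out because, in one-line form, it would just be A ∧ B ⇒ A ∧ B.

```python
from itertools import product

# Material implication: false only when p is true and q is false.
def implies(p, q):
    return (not p) or q

def tautology(f, n):
    """True if f comes out true for every assignment of n truth values."""
    return all(f(*row) for row in product([True, False], repeat=n))

# Each rule, written as a single material implication, should be a tautology.
rules = {
    "modus ponens":           (2, lambda a, b: implies(implies(a, b) and a, b)),
    "modus tollens":          (2, lambda a, b: implies(implies(a, b) and not b, not a)),
    "disjunctive syllogism":  (2, lambda a, b: implies((a or b) and not a, b)),
    "hypothetical syllogism": (3, lambda a, b, c: implies(implies(a, b) and implies(b, c), implies(a, c))),
    "absorption":             (2, lambda a, b: implies(implies(a, b), implies(a, a and b))),
    "simplification":         (2, lambda a, b: implies(a and b, a)),
    "addition":               (2, lambda a, b: implies(a, a or b)),
    "constructive dilemma":   (4, lambda a, b, c, d: implies(implies(a, c) and implies(b, d) and (a or b), c or d)),
}

for name, (n, f) in rules.items():
    assert tautology(f, n), name
print("all eight one-line rules are tautologies")
```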

Useful tautologies.

In addition, there are 10-16 very useful tautologies, depending on how you count them; again, this list can be found in either Copi & Cohen, or in Gensler. Again, I wrote it somewhat differently from both texts; I’ll explain shortly.

On my computer, the following list fits on one screen.

De Morgan:

\neg (A\land B)\equiv \neg A\lor \neg B

\neg (A\lor B)\equiv \neg A\land \neg B

commutativity of both “and” and “or”.

associativity of both “and” and “or”.

distributivity of “and” over “or” and distributivity of “or” over “and”:

A\land (B\lor C)\equiv (A\land B)\lor (A\land C)

A\lor (B\land C)\equiv (A\lor B)\land (A\lor C)

double negation: \neg \neg A \equiv A

transposition (contrapositive): (A\Rightarrow B)\equiv (\neg B\Rightarrow \neg A)

material implication (note that we have already been using another form of this):

(A\Rightarrow B)\equiv\neg A\lor B

material equivalence:

(A\Rightarrow B)\land (B\Rightarrow A)\equiv (A\equiv B)

(A\land B)\lor (\neg A\land \neg B)\equiv (A\equiv B)

exportation: (A\Rightarrow (B\Rightarrow C))\equiv (A\land B\Rightarrow C)

tautology (yes, that’s a specific name for these two):

A\equiv A\lor A

A\equiv A\land A
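
The equivalences above can be checked the same way, row by row. Here is another Python sketch of my own (again, the helper names are mine) that verifies each biconditional is a tautology.

```python
from itertools import product

implies = lambda p, q: (not p) or q
iff     = lambda p, q: p == q      # two truth values agree

equivalences = {
    "De Morgan (and)":      (2, lambda a, b: iff(not (a and b), (not a) or (not b))),
    "De Morgan (or)":       (2, lambda a, b: iff(not (a or b), (not a) and (not b))),
    "distribution of and":  (3, lambda a, b, c: iff(a and (b or c), (a and b) or (a and c))),
    "distribution of or":   (3, lambda a, b, c: iff(a or (b and c), (a or b) and (a or c))),
    "double negation":      (1, lambda a: iff(not (not a), a)),
    "transposition":        (2, lambda a, b: iff(implies(a, b), implies(not b, not a))),
    "material implication": (2, lambda a, b: iff(implies(a, b), (not a) or b)),
    "material equivalence": (2, lambda a, b: iff(implies(a, b) and implies(b, a), iff(a, b))),
    "exportation":          (3, lambda a, b, c: iff(implies(a, implies(b, c)), implies(a and b, c))),
    "tautology (or)":       (1, lambda a: iff(a, a or a)),
    "tautology (and)":      (1, lambda a: iff(a, a and a)),
}

for name, (n, f) in equivalences.items():
    assert all(f(*row) for row in product([True, False], repeat=n)), name
print("all listed equivalences check out")
```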

Remarks.

Having deliberately displayed these selections compactly, let me expand on them.

conjunction:
A
B
\therefore A \land B

Another way to write that might be with commas to separate the two premises:

A,\ B \Rightarrow A \land B.

But if I were to write conjunction as I have written everything else, it would be

A \land B \Rightarrow A \land B,

which doesn’t convey any meaning to me. It is “conjunction” which let me rewrite A, B as A \land B in the two summaries. And I have chosen to adopt Copi’s (and Copi & Cohen’s) notation for the expanded presentation.

modus (ponendo) ponens: (A\Rightarrow B)\land A\Rightarrow B

The relevant Latin is: pono: to put, place, set; in speech, to set out as true. A translation is “the method that affirms by affirming”. To be specific, it affirms B by affirming A. It is often referred to by the shortened form “modus ponens”.

Copi & Cohen (and Copi) write it this way.

A
A\supset B
\therefore B

The point is that they are explicitly distinguishing material implication (\supset\ ) from logical implication (\therefore\ ).

Many people do – i.e. they use different symbols for material implication and for logical implication. But not everyone uses different symbols, and not everyone who does is consistent about it. In particular, I have not used different symbols.

Suppes (“Introduction to Logic”, in the bibliography) makes no distinction. He states modus ponens as A \land (A \rightarrow B) \rightarrow B\ . That is, like me, he uses the same arrow in one of the premises and in place of \therefore \ . (Yes, he uses a different arrow from me, but that’s irrelevant: he used the same arrow in both places. Oh, and he implicitly uses “conjunction”, having discussed it very early on.)

Gensler, on the other hand, uses two different symbols for material implication (\supset \ ) and for logical implication (\rightarrow\ ).

Hummel, on the third hand, is not consistent. He uses two different symbols when he tabulates his useful tautologies. In particular, he writes modus ponens as A \land (A \rightarrow B) \Rightarrow B\ in his table on p. 19.

But when he proves it using truth tables on p. 16, he uses his symbol for material implication. That is, he proves, as I did, that A \land (A \rightarrow B) \rightarrow B\ . (Okay, once again, I used a different arrow from him, but the point is, whatever arrow we chose, we both used it in both places, for material implication and for logical implication.)

So: Hummel agrees with me in practice but not in theory; Suppes agrees with me in both; Copi (and Copi & Cohen) and Gensler disagree with me. I tell you this so you know I am not alone in choosing the same symbol for material implication and logical implication.

Finally, I have found no explanation of why we should use different symbols.

The closest I have come to an explanation is the discussion of sample proofs in Copi & Cohen (or Copi).

Recall my compact proof of modus ponens:

\left(\begin{array}{ccccccc}
(P & \Rightarrow & Q) & \land & P & \Rightarrow & Q \\
\text{True} & \text{True} & \text{True} & \text{True} & \text{True} & \text{True} & \text{True} \\
\text{True} & \text{False} & \text{False} & \text{False} & \text{True} & \text{True} & \text{False} \\
\text{False} & \text{True} & \text{True} & \text{False} & \text{False} & \text{True} & \text{True} \\
\text{False} & \text{True} & \text{False} & \text{False} & \text{False} & \text{True} & \text{False} \\
1 & 2 & 3 & 4 & 5 & 6 & 7 \\
1 & 3 & 2 & 4 & 1 & 5 & 1
\end{array}\right)

The key was to get column 6 from columns 4 and 7. Hummel and I filled in column 6 by asking, “Does each row of column 4 imply the corresponding row of column 7?” We got all True in column 6, and concluded that material implication held.
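
For anyone who would rather see that construction spelled out in code, here is a small Python sketch of my own that rebuilds the table column by column (the names implies, col3, col4, col6, col7 are mine, chosen to match the column numbers above).

```python
from itertools import product

# Rebuild the compact truth table for (P => Q) and P => Q, column by column.
implies = lambda p, q: (not p) or q

print("P      Q      col3:P=>Q  col4:(P=>Q)&P  col7:Q  col6:col4=>col7")
for p, q in product([True, False], repeat=2):
    col3 = implies(p, q)
    col4 = col3 and p
    col7 = q
    col6 = implies(col4, col7)   # True in every row, so the rule holds
    print(f"{p!s:<6} {q!s:<6} {col3!s:<10} {col4!s:<14} {col7!s:<7} {col6}")
```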

What Copi says amounts to this: the truth of our combined premises is in column 4, and only in the first row of column 4 are our premises true. Our conclusion is located in column 7 — and in that first row it has the value “True”. If the conclusion were false when the combined premises were true, then we would have an invalid argument. Instead, we have a valid argument, and he asserts that modus ponens is a valid rule of inference.

But that is exactly the definition of material implication: we never have the conclusion false when the antecedent is true.

The difference between us is that he checks only one row, while I checked all the rows. (If there were more than one row in column 4 with the value “True”, then he would have to check those rows, too.)

At first blush, we might think I have a stronger conclusion than he does, because I checked all 4 rows. But I don’t have a stronger conclusion — I didn’t need to check any row for which column 4 is False, because the implication in column 6 is always true when column 4 is false, regardless of whether column 7 is true or false. Only one case matters: when a row of column 4 is true, the material implication holds if and only if the corresponding row of column 7 is also true.

I’m saying three things. One, Copi appears to be using material implication as logical implication — but, two, he doesn’t check all the rows; and, three, he (and I) didn’t need to check all the rows.

Let me say the major point again: Copi (and Copi & Cohen) appear to prove modus ponens by verifying that material implication holds.

And the minor point again: we shouldn’t have bothered to check all the rows because material implication is vacuously true in any row for which the combined premises are false.

This still does not tell me why most people use a different symbol for logical implication — because logical implication looks just like material implication to me — but it does tell me that they think of it a little differently.

So, I suggest that you make a note that there may be a good reason for not using the same symbol — I don’t know of such a reason, but then, I’m new to the logic game. And, of course, feel free to use whatever symbols make you happy.

Let me remark that I find the 1-line presentation with arrows handy for proofs, and for summaries as above, but I like Copi’s (and Copi & Cohen’s) use of multiple lines and “therefore” (\therefore\ ) when I’m discussing inference, because it so clearly distinguishes the premises and conclusion. It almost eliminates the need for meta-mathematical commentary.

What the heck. Let me say that another way. I can always translate (A\Rightarrow B)\land A\Rightarrow B as “from the premise A implies B, and the premise A, I can conclude B”. But I don’t read it that way by default. On the other hand, I don’t need to translate

A
A\supset B
\therefore B

at all.

(Is that the only reason why Copi (and Copi & Cohen) wrote them out that way: ease of understanding, not a crucial logical distinction? I hesitate over that.)

By the way, there is another way to see that we require modus ponens (and material implication) to hold even when P is false (“vacuously”).

Let me remind you of an ancient example. (Well, I saw it in 8th grade.)

Is the square root of 2 rational?

Let P be: the square root of 2 is rational. Let Q be: it can be written as a fraction in lowest terms (i.e. with no common factors in the numerator and denominator).

Then modus ponens tells us that if we assume P, then we have Q. For my present purpose I could stop right there: P is false, we know P is false, but we need Q, we need modus ponens to give us Q — from which we will get a contradiction. That is, again, I need modus ponens to be valid, to give me Q, even when P is false.

We wouldn’t try to prove directly that the square root of 2 is irrational — we would prove that if it can be written in lowest terms, then it can’t be written in lowest terms: from Q we would get \neg Q\ .

It seems a shame to leave you hanging, so let me finish the proof in case you don’t remember how to do it. Write

\sqrt{2} = \frac{m}{n}\ ,

where m and n are integers with no common factors.

Square both sides, getting…

2 = \frac{m^2}{n^2}

then

2 n^2 = m^2\ ,

which says that m^2 is even, which implies m is even (if m were odd, its square would be odd).

Well, if m is even, we can write m = 2 p, which gives us

\sqrt{2} = \frac{2\ p}{n}\ .

Square that…

2 = 4\ \frac{p^2}{n^2}\ ,

so this time we get

n^2 = 2 p^2\ ,

which, by the same argument as for m, says that n is even. But Q said that m and n had no common factors, and we have just shown that they have 2 as a common factor. Since P implies both Q and not-Q, we conclude not-P. (See the end of the post.)
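
Not a proof, of course, but just for fun, here is a brute-force Python check of my own that no fraction in lowest terms with small numerator and denominator squares to exactly 2 (the bound of 1000 is an arbitrary choice of mine).

```python
from fractions import Fraction
from math import gcd

# Brute-force illustration (not a proof): no m/n in lowest terms with
# n up to 1000 satisfies (m/n)^2 == 2 exactly.
hits = [(m, n) for n in range(1, 1001) for m in range(1, 2 * n + 1)
        if gcd(m, n) == 1 and Fraction(m, n) ** 2 == 2]
print(hits)   # prints []
```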

modus (tollendo) tollens:
A\Rightarrow B
\neg B
\therefore \neg A

The relevant Latin is: tollo: to lift up, raise, elevate; but in speech, to take away. (I think of pono and tollo as connoting, in this case, to lay something on a table and to take something off a table, respectively.) This, then, is “the method that denies by denying” — that is, it denies A by denying B.

disjunctive syllogism (modus tollendo ponens):
A\lor B
\neg A
\therefore B

Okay, this is the one whose name we see very rarely, and it uses both Latin verbs. The Latin translates as “the method that asserts by denying”; and, indeed, it asserts B by denying A.

hypothetical syllogism (transitivity):
A\Rightarrow B
B\Rightarrow C
\therefore A\Rightarrow C

Logicians may call it the hypothetical syllogism, but mathematicians would almost certainly call it transitivity, and this mathematician wonders why it would be called anything else.

simplification:
A\land B
\therefore A

This is as fundamental as modus ponens, and just as rarely do we say it explicitly in a usual write-up of a proof.

addition:
A
\therefore A\lor B

I think I want to find a proof that uses this. Yes, it’s obvious — aren’t they all?

absorption:
A\Rightarrow B
\therefore A\Rightarrow A\land B

This is an interesting complement to “addition”, and it looks related to “exportation” below.

constructive dilemma:
(A\Rightarrow C)\land (B\Rightarrow D)
A\lor B
\therefore C\lor D

Perhaps mistakenly, I take this as an illustrative rule rather than an essential one. I suspect one might prove similar rules as needed.

De Morgan:

\neg (A\land B)\equiv \neg A\lor \neg B

\neg (A\lor B)\equiv \neg A\land \neg B

These tell us how to remove parentheses after a negation: apply the negation to the individual symbols, and replace either “and” or “or” by the other.

commutativity of both “and” and “or”.

Just to be explicit, let me write them out:

(A \land B) \equiv (B \land A)

(A \lor B) \equiv (B \lor A)

associativity of both “and” and “or”.

Again, just to be explicit, let me write them out:

(A \land (B \land C)) \equiv ((A \land B) \land C)

(A \lor (B \lor C)) \equiv ((A \lor B) \lor C)

distributivity of “and” over “or” and distributivity of “or” over “and”:

A\land (B\lor C)\equiv (A\land B)\lor (A\land C)

A\lor (B\land C)\equiv (A\lor B)\land (A\lor C)

I would say that the crux is that there are two distributive laws. (I know of a logic book that only mentions one of them!)

double negation: \neg \neg A \equiv A

“No comment”, but I didn’t want you counting things up and wondering which rule I left out of the remarks.

transposition (contrapositive): (A\Rightarrow B)\equiv (\neg B\Rightarrow \neg A)

This should not be confused with what we call a proof by contradiction. A proof by contraposition is the kind of proof that usually begins, “Suppose the conclusion is false….”, and ends by denying the hypothesis.

My proof that the square root of two is irrational actually involved elements of both. If the theorem is: “it’s irrational”, then the statement “suppose it is rational” is the beginning of a proof by contraposition — but by concluding that it cannot be rational, I have completed a proof by contradiction. That is, I have used proof by contradiction to prove the contrapositive of the theorem. (No wonder people mix up these terms.)

material implication

(A\Rightarrow B)\equiv\neg A\lor B

That is nothing more than one of De Morgan’s laws applied to the effective definition of material implication. That is, I think of material implication as “we never have B false and A true”, which is

\neg (A \land \neg B)\ .

Expanding that by De Morgan’s Law gives us precisely

\neg A \lor B\ .
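
If you want to see that identity confirmed row by row, here is a quick Python check of my own (IMPLIES is just the truth table of material implication, written out by hand).

```python
from itertools import product

# Material implication by its truth table: false only when A is true, B false.
IMPLIES = {(True, True): True,  (True, False): False,
           (False, True): True, (False, False): True}

# "Never B false and A true" is not(A and not B); De Morgan turns that into
# (not A) or B, and both reproduce the table above in every row.
for a, b in product([True, False], repeat=2):
    assert (not (a and (not b))) == ((not a) or b) == IMPLIES[(a, b)]
```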

material equivalence:

(A\Rightarrow B)\land (B\Rightarrow A)\equiv (A\equiv B)

We didn’t have to see very many proofs to realize that this is how we prove that two things are equivalent. The first time we see a “circular” proof that, for example, four things are equivalent, it may be a bit of a shock. That is, a theorem saying that the following four things are equivalent may be proved by showing (1) implies (2), (2) implies (3), (3) implies (4), and finally, (4) implies (1). We would have been using this form of material equivalence, and transitivity.
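
Just to convince myself that the circular pattern really is justified, here is a quick Python check of my own: over all sixteen truth assignments for four statements, a cycle of implications forces all four to agree.

```python
from itertools import product

implies = lambda p, q: (not p) or q

# A cycle of implications among four statements forces all four to agree,
# checked as a propositional tautology over all 16 rows.
for p1, p2, p3, p4 in product([True, False], repeat=4):
    cycle = (implies(p1, p2) and implies(p2, p3) and
             implies(p3, p4) and implies(p4, p1))
    assert implies(cycle, (p1 == p2) and (p2 == p3) and (p3 == p4))
```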

Then there is another form:

(A\land B)\lor (\neg A\land \neg B)\equiv (A\equiv B)

That is a form that I think I don’t see very often. I’m glad to have seen it explicitly.

exportation: (A\Rightarrow (B\Rightarrow C))\equiv (A\land B\Rightarrow C)\ .

You may find this written as two implications rather than as an equivalence, and in that case, one of the implications is called “importation”:

importation: (A\Rightarrow (B\Rightarrow C))\Rightarrow (A\land B\Rightarrow C)\ .

It intrigues me that this is true. The LHS has nested implications, but the RHS puts A and B on an equal footing. No, it doesn’t really seem to have anything to do with “absorption”, beyond involving the same two symbols….
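
Just to reassure myself that the full equivalence holds, here is a quick Python check of my own over all eight truth assignments.

```python
from itertools import product

implies = lambda p, q: (not p) or q

# Exportation as a full equivalence: A => (B => C) agrees with (A and B) => C
# in all eight rows.
for a, b, c in product([True, False], repeat=3):
    assert implies(a, implies(b, c)) == implies(a and b, c)
```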

tautology (yes, that’s a specific name for these two):

A\equiv A\lor A

A\equiv A\land A

Again, I’ll be keeping my eyes open for proofs that use either of these.

Additions

About the only ones that jump out of other lists are:

Related to Aristotle…

Non-contradiction “not both A and not-A”: \neg(A \land \neg A)

Excluded middle “A or not-A”: A \lor \neg A

and those are equivalent by De Morgan, double negation, and commutativity.

Related to proofs by contradiction… an explicit statement:

Law of Absurdity:
A \Rightarrow (B \land \neg B)
\therefore \neg A\ ,

which is what we did to show that the square root of two is not rational: if it is, then it can be written in lowest terms and it cannot be written in lowest terms.
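
And, for completeness, here is a quick Python check of my own that non-contradiction, the excluded middle, and the law of absurdity all come out true in every row.

```python
from itertools import product

implies = lambda p, q: (not p) or q

for a in (True, False):
    assert not (a and (not a))    # non-contradiction
    assert a or (not a)           # excluded middle

# Law of absurdity: (A => (B and not B)) => not A holds in all four rows.
for a, b in product([True, False], repeat=2):
    assert implies(implies(a, b and (not b)), not a)
```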


9 Responses to “Logic: tautologies”

  1. rip Says:

    Gracias. I expect to put out more logic points, but not immediately.

  2. Randall Holmes Says:

    There are no universal lists of rules of logic of this kind. I believe this particular popular list goes back to IM Copi; it is popular and that is all there is to it.

    There is a presentation of propositional logic due to Nicod with one axiom and one rule of inference. No one in their right mind would use it.

    There are other presentations; I am a professional mathematical logician and I had never seen this particular list; however I was alerted to it because two different students in my discrete math class had been told in quite different classes that this was *the* list of 18 rules of deduction (your version might have a different number of items, I have not checked).

    Typical presentations in mathematical logic texts would look *totally* different, and might look *totally* different from one text to another.

    • rip Says:

      Thank you, Randall. I appreciate it when experts show up and clarify things.

      I began this post by saying that I was presenting a list common to two texts, and different from two others.

      And I’m glad to hear you say this list is perfectly adequate. I am not a professional mathematical logician; I’m just hoping that my understanding isn’t wrong.

  3. Randall Holmes Says:

    I should add that this list is perfectly adequate: it is a complete set of rules for propositional logic. But it is not in any sense at all *the* set of rules for propositional logic.

  4. Randall Holmes Says:

    There is a profound difference between material and logical implication, and I’m not sure I can do it justice here.

    Here is a hint:

    A -> B (-> being material implication) is a statement, which is false if A is true and B is false and true otherwise.

    A |- B is just flat invalid. You cannot deduce B from A.

    There is a close relationship between -> and |-

    If Gamma and Delta are complicated expressions built up from propositional letters
    using propositional logic operations,

then Gamma -> Delta is another, somewhat more complicated, expression of propositional logic

    whereas Gamma |- Delta says that
Gamma -> Delta is true for any assignment of truth values to the letters appearing in Gamma and Delta whatsoever

    If you are trying to verify that Gamma -> Delta is a tautology using truth tables, you are doing exactly the same thing as verifying Gamma |- Delta; this is one reason the two concepts are confused

    But if Gamma -> Delta is not a tautology, then
    Gamma -> Delta is a statement which may or may not be true, while Gamma |- Delta is simply invalid (we do not say “false” because it is not really a statement of the same language as Gamma-> Delta).

    • rip Says:

      Thanks for the hint. It will inform my future study, and may help my readers. It appears that I did not appreciate the importance of the distinction between tautologies and non-tautologies. I will need to look at this some more, but if I understand you correctly…

      Gamma |- Delta is either valid or invalid, and

      Gamma -> Delta is either a tautology or not.

      Then the validity of Gamma |- Delta means that Gamma -> Delta is a tautology. But if Gamma -> Delta is not a tautology, then Gamma -> Delta may be true or false (or indeterminate?)

  5. lindencrusade Says:

    Great article, you have a lot of great writing and illustrative skills. You should write textbooks.
Don’t worry about the formal symbolic language. After reading enough of Leibniz and his philosophy, there is no formal mathematics that is a consistent language. In the sense that it is some religious truth in a symbolic form. Symbols are the wind for a wind mill. Said by some guy.
    I think your demonstrative style is the key to all eucalculi. Or good math.
    Many books called nonstandard or non-formal analysis are much more enjoyable and useful than the contemporary analysis. The latter is like a babylonian religion.

    Try to develop more demonstrations. I think they are very good and enjoyable math literature.
    The sum of triangular numbers is always a square number, So half a square number is always a triangular number.
    So the pentagonal numbers are?

