# Thread: Rules of Inference Proof Help

1. Prove the following argument is valid.

If Ralph doesn't do his homework or he doesn't feel sick, then he will go to the party and he will stay up late. If he goes to the party, he will eat too much. He didn't eat too much. So Ralph did his homework.

So far I came up with the following:
Let's set:
H(x): Ralph does his homework
S(x): Ralph feels sick
P(x): Ralph goes to the party
L(x): Ralph stays up late
E(x): Ralph eats too much

Argument:
(¬H(x) v ¬S(x)) → (P(x) ∧ L(x))
P(x) → E(x)
¬E(x)
H(x)

I'm not positive whether the above is right. But either way, I don't understand how to do the proofs. We can use the following Rules of Inference:
Modus Ponens
Modus Tollens
Hypothetical Syllogism
Simplification
Conjunction
Disjunctive Syllogism

Help would be much appreciated.

2. First, you don't have to work in predicate calculus, i.e., there is no need to have the argument x. There is only one person in this problem, and different statements about him can be considered propositions.

Your translation of English into formulas seems correct.

In fact, I am not sure H is derivable from the rules you listed. The following explanation probably goes far beyond the scope of the course you are taking, but here it is anyway. The reason I am saying that H is not derivable is that there is a so-called constructive logic where you can prove strictly less than in regular logic. In particular, ¬¬P → P is not derivable in constructive logic. All the rules that you listed are valid in constructive logic. But the only way I see to derive H is by showing the negation of the premise of the first formula ¬(¬H v ¬S), which would imply H ∧ S in regular logic and therefore H. However, I believe that in constructive logic one can only derive ¬¬H ∧ ¬¬S and so ¬¬H. So I don't see a way to prove H.

However, this is probably a minor point and we can assume that we have the rule that derives P from ¬¬P.

The first step in constructing a derivation is to convince yourself that the conclusion indeed follows from the premises in a common-sense way, without thinking about particular inference rules. This is also probably the most important part because you may forget the inference rules used in this course, but the ability to think logically will often help you in life in general and especially if your work is related to exact sciences. By common-sense way I don't mean that the reasoning should depend on the particular plot of this problem. On the contrary, it is better to forget what propositional letters stand for and think just in terms of letters themselves.

I suggest you do this and post some informal reasoning of why H is true.

To convert the informal inference into a formal one may sometimes be tricky because the inference rules were specifically chosen to be few in number, so you have to reduce a large amount of logic to those few rules. I have not worked with these particular rules and I don't know if there is any general method, but probably you just need to look at what rule is applicable or will get you closer to the goal. For example, the only rule in the list that derives a negation is Modus Tollens, so this is a good rule to try in order to derive a negation.

3. Originally Posted by kturf

Prove the following argument is valid. [...]

The proof is a proof in propositional calculus and is as follows:

Proof:

1) $(\neg H\vee\neg S)\Longrightarrow(P\wedge L)$ .......... given

2) $P\Longrightarrow E$ .......... given

3) $\neg E$ .......... given

4) $\neg P$ .......... from (2) and (3), using Modus Tollens

5) $\neg P\vee\neg L$ .......... from (4), using disjunction introduction: $p\Longrightarrow p\vee q$

6) $\neg(P\wedge L)$ .......... from (5), using De Morgan

7) $\neg(\neg H\vee\neg S)$ .......... from (1) and (6), using Modus Tollens

8) $\neg\neg H\wedge\neg\neg S$ .......... from (7), using De Morgan

9) $\neg\neg H$ .......... from (8), using conjunction elimination (Simplification)

10) $H$ .......... from (9), using the law of double negation
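For anyone who wants to double-check the derivation semantically, the argument is small enough to verify by brute force. Here is a minimal Python sketch (the function and variable names are my own, not from any course material): it enumerates all 32 truth assignments to H, S, P, L, E and asserts that every assignment satisfying the three premises also satisfies the conclusion H.

```python
from itertools import product

def implies(a, b):
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

# Enumerate all 2**5 = 32 truth assignments to the five propositions.
for H, S, P, L, E in product([False, True], repeat=5):
    p1 = implies((not H) or (not S), P and L)  # (~H v ~S) -> (P ^ L)
    p2 = implies(P, E)                         # P -> E
    p3 = not E                                 # ~E
    if p1 and p2 and p3:
        # Every assignment satisfying all premises must satisfy the conclusion.
        assert H, "counterexample found"

print("valid: the premises entail H")
```

Running it prints the final message without tripping the assertion, which is the semantic counterpart of the ten-line derivation above.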

4. Perhaps I forgot to add the following very important fact.

When asked to prove a formula or statement in propositional calculus, after a few trials we may wonder whether the problem is provable at all.

Well, there is a central theorem in propositional calculus that answers that question.

It says that:

A formula of the propositional calculus is provable iff it is a tautology.

Hence if you form a truth table and see that the formula is a tautology, then your formula is provable.

In our case the problem has 5 variables, and the formation of the truth table would be quite a task.

Thus, to find out whether the above forms a tautology, we can reason as follows:

Our formula is a tautology if true hypotheses imply a true conclusion.

So suppose all our hypotheses are true, i.e.:

$(\neg H\vee\neg S)\Longrightarrow (P\wedge L)$ is true

$P\Longrightarrow E$ is true

$\neg E$ is true.

Since $P\Longrightarrow E$ is equivalent to its contrapositive $\neg E\Longrightarrow\neg P$,

if $\neg E$ is true, then for $\neg E\Longrightarrow\neg P$ to be true we must have (by the definition of $\Longrightarrow$) $\neg P$ true, and hence P false.

True can only imply true.

If P is false, then by the definition of "$\wedge$", $P\wedge L$ must be false.

BUT for $(\neg H\vee\neg S)\Longrightarrow(P\wedge L)$ to be true when $P\wedge L$ is false, again by the definition of the conditional, $\neg H\vee\neg S$ must be false.

By De Morgan, $\neg H\vee\neg S$ is equivalent to $\neg(H\wedge S)$, so $\neg(H\wedge S)$ is false, thus $H\wedge S$ is true, and consequently H is TRUE.

Hence the formula is a tautology and therefore provable.
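The shortcut above avoids filling in the 32-row truth table by hand, but a machine can check the full table instantly. A small Python sketch (again with names of my own choosing): it writes the whole argument as a single conditional, the three premises conjoined on the left and H on the right, and confirms it is true on every row, i.e. a tautology.

```python
from itertools import product

def implies(a, b):
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

def argument(H, S, P, L, E):
    # The argument as one formula: (premise1 ^ premise2 ^ premise3) -> H
    premise1 = implies((not H) or (not S), P and L)
    premise2 = implies(P, E)
    premise3 = not E
    return implies(premise1 and premise2 and premise3, H)

# A tautology is true on every row of its truth table: all 2**5 = 32 rows here.
assert all(argument(*row) for row in product([False, True], repeat=5))
print("tautology")
```

By the metatheorem quoted above (provable iff tautology), this mechanical check is enough to conclude that a formal derivation exists.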

5. You are absolutely right. If you need to prove that an argument is valid, you can either reason semantically, as you did in the previous post, or construct a derivation using inference rules. Even if you are asked to prove that some formula is derivable, you can still reason semantically (using the truth table) and then invoke the completeness theorem: if it's valid, then it's derivable. It's only when you are explicitly asked to come up with a formal derivation that truth tables are not of much help. (Even this is not completely true, but let's not go there.)

I am sorry that I went into hairsplitting by arguing that the rule of double negation was necessary -- this was not the point of the exercise.

However, I am curious about this central theorem of propositional calculus. To talk about provability one has to formulate very precisely what this means, in particular, what a derivation is and what inference rules are involved. There have been many questions here about provability, and I could not really be helpful because I don't know what inference rules people use. For example, Wikipedia has a List of rules of inference with a reference to Rosen, 5th edition. However, it is very redundant, while a short summary on the same page is incomplete. Could anyone post or give an online reference to a precise, complete and preferably non-redundant list of inference rules?

6. The central theorem of propositional calculus, which is:

A formula is a theorem of the statement calculus if and only if it is a tautology. (A metatheorem)

can be found on page 69 of the book FIRST ORDER MATHEMATICAL LOGIC, by Angelo Margaris.

As you mention, this theorem implies completeness,

and at the same time decidability and consistency.

I changed the above theorem by using the word "provable" instead of "is a theorem".

Or I should say "derivable" instead of "provable".

Anyway, apart from the three basic axioms in statement calculus, as the theory develops and gets away from the basic axioms, we cannot avoid redundancy among some of the theorems; it happens with every axiomatic theory.

For example, in the above proof I used De Morgan, a law that can be proved by using other laws.

Finally, it would be an interesting exercise to prove the validity of an argument without using symbolic logic.

After all, how did the ancient Greeks evaluate arguments, since symbolic logic was discovered only much later?

Thanks for taking an interest in my posts.