1. ## Propositional Logic

No shoes, no shirt, no service.

p = service
q = shoes
r = shirt

$(\lnot{q}\wedge\lnot{r})\to\lnot{p}$ is wrong and $(\lnot{q}\vee\lnot{r})\to\lnot{p}\equiv \lnot{(q\wedge r)}\to \lnot{p}$ is correct. But what's wrong with the first one?

Doesn't it say NO shoes AND NO shirt then NO service?
The second one says NO (shoes AND shirt) then NO service. What is the difference?

Also, what's wrong with this?

2. For your first problem, I would say that what the store means by "No shoes, no shirt, no service" is that if you are lacking either shoes or a shirt, you will get no service.
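This can be checked mechanically. A quick truth-table sketch (my own code, not from the thread; `implies` is just a helper for material implication) shows exactly where $(\lnot q\wedge\lnot r)\to\lnot p$ and $(\lnot q\vee\lnot r)\to\lnot p$ disagree:

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b is false only when a is true and b false
    return (not a) or b

# p = service, q = shoes, r = shirt (as in the original post)
differ = []
for q, r, p in product([False, True], repeat=3):
    and_version = implies((not q) and (not r), not p)  # (~q /\ ~r) -> ~p
    or_version  = implies((not q) or  (not r), not p)  # (~q \/ ~r) -> ~p
    if and_version != or_version:
        differ.append((q, r, p))

print(differ)
```

The two formulas disagree precisely on the rows where the customer lacks exactly one of the two items yet still gets service: the AND version is vacuously true there (its antecedent fails), while the OR version correctly forbids it.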

For the second problem, I don't think you're using OR elimination correctly in line 8. (Nice formatting, by the way!) If you want to use OR elimination, I would suggest setting up $p \vee \lnot p$ via the Law of the Excluded Middle (or, if your deduction system is robust enough, deriving it with a proof by contradiction). The line 1 that you reference for the OR elimination isn't a disjunction!

3. Does this work?

4. When you close the assumptions you get two statements

$p \rightarrow \neg p \vee q$
$\neg q \rightarrow \neg p \vee q$

I forget the rules involving conditional expressions, but I'm pretty sure you can use both of those to achieve the conclusion you want. I have no idea how you're justifying your last step in the proof.

And I would agree that "no shoes, no shirt, no service" is an inclusive OR statement. If it were AND, one could go in with no shirt (or with no shoes) and still get service, since being refused would require lacking BOTH shoes and a shirt. The intended meaning is that if you come in lacking either one, you will get no service. This also coincides with the $\neg (p \wedge q)$ interpretation, because it says "if you don't have (shoes and shirt), then you get (no service)," which is the same as how people should interpret the statement: "if you have (no shoes) or (no shirt), then you get (no service)."
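That coincidence is just De Morgan's law. A small check (my own sketch, using p = shoes and q = shirt as in this post) confirms the two readings agree on every assignment:

```python
from itertools import product

# p = shoes, q = shirt
for p, q in product([False, True], repeat=2):
    lhs = not (p and q)        # ~(p /\ q): "you don't have shoes-and-shirt"
    rhs = (not p) or (not q)   # ~p \/ ~q: "no shoes, or no shirt"
    assert lhs == rhs

print("equivalent on all four assignments")
```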

5. Originally Posted by Makall
Does this work?
This works, although OR introduction only needs to reference one line above it, not two. So your line 5 need only reference line 4, and your line 7 need only reference line 6.

6. Originally Posted by Ackbeet
OR introduction only needs to reference one line above it
I believe those disjunction introductions are subscripted by the type of introduction they are: 1 for left-introduction (e.g., ¬p to q in step 7) and 2 for right-introduction (e.g., q to ¬p in step 5). They are one-line references. I just don't get how 2, [3,5], and [6,7] justify line 8. It seems like a rule that is biting off a lot in one step.

What system of logic are these rules derived from? Maybe I'm wrong, but that disjunction elimination rule doesn't seem to be part of any natural deduction system I've seen.

7. I just don't get how 2, [3,5], and [6, 7] justify line 8. It seems like a rule that is biting off a lot in one step.
The natural deduction rule for disjunction elimination has the following form.

$\frac{A\lor B\qquad\overset{\overset{\textstyle A}{\raisebox{2pt}[2.6ex][0pt]{\vdots}}}{C}\qquad \overset{\overset{\textstyle B}{\raisebox{2pt}[2.6ex][0pt]{\vdots}}}{C}}{C}$

The number 2 specifies the first premise, and [3,5] and [6,7] specify the subderivations that end in the second and third premises.

8. I checked my textbook this morning, which has a list of the rules, and I see how it works. The book's formulation, though, requires the extra step I mentioned, i.e., closing the assumptions with conditionals. Then we get the argument form

$A \lor B$
$A \rightarrow C$
$B \rightarrow C$
$\vdash C$

This formulation doesn't use ranges of justification, which threw me off. The ideas and necessary steps are pretty much the same, clearly, but I think this specification is more intuitive since it requires specifying what we conclude from the assumption blocks.
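As a side note (my own sketch, assuming Lean 4 syntax, not anything from the textbook), this implicational form is exactly how disjunction elimination appears as `Or.elim` in a modern proof assistant:

```lean
-- Or-elimination in its implicational form:
-- from A ∨ B, A → C, and B → C, conclude C.
example (A B C : Prop) (h : A ∨ B) (ha : A → C) (hb : B → C) : C :=
  Or.elim h ha hb
```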

9. Originally Posted by bryangoodrich
I checked my textbook this morning, which has a list of the rules, and I see how it works. The book's formulation, though, requires the extra step I mentioned, i.e., closing the assumptions with conditionals. Then we get the argument form

$A \lor B$
$A \rightarrow C$
$B \rightarrow C$
$\vdash C$

This formulation doesn't use ranges of justification, which threw me off. The ideas and necessary steps are pretty much the same, clearly, but I think this specification is more intuitive since it requires specifying what we conclude from the assumption blocks.
In that case, you could just insert two more lines: the conditional introductions after the (existing) lines 5 and 7.

10. The version where the last two premises of \/E are implications is equivalent to the one I gave. I agree that it may be simpler: a derivation looks more like a tree of formulas.

One technical reason that comes to mind for preferring the version with subderivations is the subformula property. In constructive natural deduction, every formula in a normal derivation is either a subformula of the conclusion or of an open assumption. Here a derivation is normal, roughly speaking, if it does not use lemmas, i.e., one does not first prove an implication A -> B and then immediately apply it to A. In a normal derivation, a proof of such a lemma is supposed to be inlined into the main proof. Every derivation can be converted into a normal one.

The subformula property is neat, and it allows one to prove that something is not derivable without invoking models. For example, if p \/ ~p were provable in constructive logic for a propositional variable p, it would have to be derived either from p or from ~p, which in turn could only be derived from p, a contradiction.
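This matches how proof assistants behave: in Lean 4, for instance (a sketch assuming Lean 4 syntax), excluded middle for an arbitrary proposition is not available from the constructive introduction rules alone and is instead obtained from the classical axioms:

```lean
-- No constructive proof exists for an arbitrary p;
-- Classical.em derives it from the classical choice axioms.
example (p : Prop) : p ∨ ¬p := Classical.em p
```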

The version of \/E with implications violates the subformula property in the following trivial example:

$\frac{A\lor A\qquad A\to A\qquad A\to A}{A}$

since the only open assumption is A \/ A.

11. Originally Posted by emakarov
The version where the last two premises of \/E are implications is equivalent to the one I gave. I agree that it may be simpler: a derivation looks more like a tree of formulas.
That's why I find it more intuitive, but that could be bias in the way I learned logic.

One technical reason that comes to mind for preferring the version with subderivations is the subformula property. In constructive natural deduction, every formula in a normal derivation is either a subformula of the conclusion or of an open assumption.
I'm unfamiliar with "normal derivation" (and most of the details of constructive logic). I found a paper that offers some details on this topic, but I'll have to read it later. I gather from my quick search that a derivation is normal if every formula in the proof is either a subformula of the conclusion or of one of the hypotheses. Is this definition restricted to constructive logic, or just important to it?

Every derivation can be converted into a normal one.
That makes sense.

Subformula property is neat, and it allows proving that something is not derivable without invoking models.
Do you mean this restricted to the constructive context? I would think that classically, too, we can prove something is not derivable without invoking models, no?

For example, if p \/ ~p were provable in constructive logic for a propositional variable p, then it had to be derived either from p or ~p, which in turn could only be derived from p, a contradiction.
If I understand this correctly, it is because the only subformulas of p v ¬p are p and ¬p, and their subformulas only consist of p, each. Where exactly is the contradiction? That we cannot derive ¬p from p alone?

12. This is getting pretty far from the original question of the thread, so I rely on moderators to tell us to stop, if necessary.

Originally Posted by bryangoodrich
I gather from what my quick search indicated is that a derivation is normal if every formula in the proof is either a subformula of the conclusion or one of the hypotheses. Is this definition restricted to constructive logic or just important to constructive logic?
No, this is a theorem that holds for normal derivations, i.e., this is a necessary property. The following somewhat artificial derivation has the subformula property but is not normal.

$\dfrac{\dfrac{\raisebox{\depth}{\dfrac{\dfrac{\overset{(1)}{p}}{p\lor q}}{p\to p\lor q}{\scriptstyle({\to}I)\;(1)}}\qquad \raisebox{\depth}{\overset{(2)}{p}}}{p\lor q}{\scriptstyle({\to}E)}}{p\to p\lor q}{\scriptstyle({\to}I)\;(2)}$

This is because it has implication introduction immediately followed by implication elimination. This derivation can be converted to

$\dfrac{\dfrac{\overset{(2)}{p}}{p\lor q}}{p\to p\lor q}{\scriptstyle({\to}I)\;(2)}$

A derivation is normal when no conversions are applicable. So, the definition of normality depends on the conversions that can be applied to derivations. The simplest one is removing ->I followed by ->E, as above, by replacing the open assumption of ->I with the second premise of ->E. This corresponds to inlining a lemma.

In constructive logic, all inference rules are divided into matching pairs of introduction and elimination. Therefore, a conversion when an introduction rule is followed by elimination one is easy to define. Classical logic adds the rule of double negation elimination (DNE), and its associated conversion is not obvious. One option is the following.

$\frac{\overset{\overset{\displaystyle \neg A}{\raisebox{3pt}[3ex][0pt]{\displaystyle\mathcal{D}_1}}}{\bot}}{\overset{\overset{\displaystyle A}{\raisebox{3pt}[3ex][0pt]{\displaystyle\mathcal{D}_2}}}{\bot}}(\mathit{DNE})\qquad\leadsto\qquad\frac{\overset{\overset{\displaystyle A}{\raisebox{3pt}[3ex][0pt]{\displaystyle\mathcal{D}_2}}}{\bot}}{\overset{\overset{\displaystyle \neg A}{\raisebox{3pt}[3ex][0pt]{\displaystyle\mathcal{D}_1}}}{\bot}}({\to}I)$

(Here $\mathcal{D}_1$ and $\mathcal{D}_2$ denote subderivations and $\neg A$ is considered an abbreviation for $A\to\bot$, so it can be derived by ->I.) But this only works when the conclusion of the derivation is $\bot$ (which, in fact, is less useless than it seems at first).

Subformula property is neat, and it allows proving that something is not derivable without invoking models.

Do you say this restricted to the constructive context? I would think classically we can prove something is not derivable without invoking models, no?
There is an analog of subformula property for classical logic as well, but I don't remember the details. See "Basic proof theory" by Troelstra and Schwichtenberg.

There are various methods for proving that something is not derivable, but, I think, constructing countermodels is by far the most popular, probably followed by subformula property of normal derivations and its analogs for other proof styles.

For example, if p \/ ~p were provable in constructive logic for a propositional variable p, then it had to be derived either from p or ~p, which in turn could only be derived from p, a contradiction.

If I understand this correctly, it is because the only subformulas of p v ¬p are p and ¬p, and their subformulas only consist of p, each. Where exactly is the contradiction? That we cannot derive ¬p from p alone?
We can't derive p with no open assumptions because p is a propositional variable (and thus not a tautology). Actually, Jeremy Avigad has a more thorough proof in the article you found.

A remarkable thing is that under the proofs-as-programs correspondence, proofs are viewed as programs in a functional language similar to ML or Haskell. The conversion above, viewed as a program transformation, is a function call, in which the formal argument is replaced by the actual argument.
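To make the "function call" reading concrete, here is a tiny sketch of my own (the term representation and names are mine, not from any post): the ->I-followed-by->E detour is exactly a beta redex, an application of a lambda, and contracting it substitutes the actual argument for the formal one, i.e., it inlines the lemma.

```python
# Terms: ("var", name), ("lam", name, body), ("app", fun, arg).

def subst(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`.
    (No capture-avoidance; fine for this closed example.)"""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        _, x, body = term
        return term if x == name else ("lam", x, subst(body, name, value))
    _, f, a = term
    return ("app", subst(f, name, value), subst(a, name, value))

def beta_step(term):
    """Contract a top-level redex (lam x. body) arg -> body[x := arg]."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, x, body), arg = term
        return subst(body, x, arg)
    return term

# The derivation that proves the lemma and immediately applies it
# corresponds to the redex (lam x. inl x) p; one beta step inlines it.
redex = ("app", ("lam", "x", ("app", ("var", "inl"), ("var", "x"))), ("var", "p"))
print(beta_step(redex))  # the direct proof: inl applied to p
```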

This is where the magic is just starting because the rule for DNE above corresponds to a jump similar to goto!

The fact that every derivation can be normalized has a profound significance. One distinguishes weak normalization (there exists a terminating sequence of conversions) from strong normalization (every sequence of conversions terminates). The fact that every derivation is strongly normalizing corresponds to the fact that one cannot write an infinite loop in Haskell without explicitly using the recursion facility. It is even more important for the theorem prover Coq, because it ensures that the typechecker can reduce any two terms to normal form and compare them to see if they are, in fact, the same term.
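A standard untyped example separating the two notions (my addition, not from the thread) is

$(\lambda x.\,y)\,\bigl((\lambda x.\,x\,x)\,(\lambda x.\,x\,x)\bigr)$

Contracting the outer redex first yields $y$ in one step, so the term is weakly normalizing; but repeatedly reducing the argument $(\lambda x.\,x\,x)(\lambda x.\,x\,x)$ reproduces it forever, giving an infinite reduction sequence, so the term is not strongly normalizing. (In the typed systems discussed here such a term is not even well-typed.)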

The proof of strong normalization is quite beautiful and quickly becomes nontrivial as the expressiveness of the logic grows (propositional, first-order, second-order, etc.). What is even more interesting is that strong normalization of a logic implies its consistency (the impossibility of deriving falsehood) by the subformula property. Since consistency is not provable in the logic itself, by Gödel's theorem, it follows that strong normalization of second-order arithmetic (SOA) is not provable in SOA. This is probably the only theorem most people will ever see (if they see one at all) that is not formalizable in SOA. For comparison, almost all everyday mathematics, such as calculus and differential equations, can be rendered in SOA.

Some natural statements about normalization are still unproved. For example, it is not known whether every logic (rather, type system) that has weak normalization also has strong normalization (there are some type systems that have neither).

13. This is beating a dead horse.