I am currently self-studying mathematical logic, in particular the rules of inference for natural deduction, and am having trouble understanding the restrictions on universal generalization (i.e. $\displaystyle \forall$-introduction). In the textbook I am reading (Mathematical Logic by Chiswell and Hodges), the rule is stated as follows:

$\displaystyle \frac {\phi[t/x]} {\forall x \phi}$

The restrictions on using this rule are:

  1. The term $\displaystyle t$ must not occur in any undischarged assumption of the derivation of $\displaystyle \phi[t/x]$
  2. The term $\displaystyle t$ must not occur in $\displaystyle \phi$

The first of these restrictions makes perfect sense, but I don't understand the second. On my current understanding it seems vacuous, because the situation it forbids can never arise. When moving from the premise to the conclusion in this rule, every occurrence of $\displaystyle t$ in $\displaystyle \phi[t/x]$ is removed and replaced with $\displaystyle x$ to create $\displaystyle \phi$. So if $\displaystyle \phi$ can never contain $\displaystyle t$, why do we need restriction 2? For example:

$\displaystyle \frac {P(t) \wedge Q(s)} {\forall x (P(x) \wedge Q(s))}$

The formula $\displaystyle \phi = P(x) \wedge Q(s)$ can never contain $\displaystyle t$, because every instance of $\displaystyle t$ is removed in the very act of applying the rule!
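To make my reading of the substitution explicit, here is the example above spelled out in the book's notation (the comments record my own interpretation, which may be where I am going wrong):

```latex
% \phi is the general formula; \phi[t/x] substitutes t for every free x.
\begin{align*}
\phi      &= P(x) \wedge Q(s) \\
\phi[t/x] &= P(t) \wedge Q(s)
\end{align*}
% Reading the rule bottom-up, every t in the premise seems to come from
% an x in \phi -- which is exactly why restriction 2 looks vacuous to me.
```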

I have asked this question on other forums, and no one has provided an example of a natural deduction argument that derives a false conclusion by applying universal generalization in breach of restriction 2. It seems clear that something in my understanding is faulty. Any help would be greatly appreciated!