I am currently self-studying mathematical logic, in particular the rules of inference for natural deduction, and I am having trouble understanding the restrictions on universal generalization (i.e. \forall-introduction). In the textbook I am reading (Mathematical Logic by Chiswell and Hodges), the rule is stated as follows:

\frac {\phi[t/x]} {\forall x \phi}

The restrictions on using this rule are:

  1. The term t must not occur in any undischarged assumption of the derivation of \phi[t/x]
  2. The term t must not occur in \phi
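
To check my understanding of how the rule is meant to be used, here is an application that, as far as I can tell, respects both restrictions (my own example, not one from the book), taking \phi = P(x) \rightarrow P(x) and t to be a constant c:

\frac {\dfrac{[P(c)]^1}{P(c) \rightarrow P(c)}\ (\rightarrow\text{I}, 1)} {\forall x \, (P(x) \rightarrow P(x))}

Restriction 1 holds because the only assumption, P(c), is discharged at the \rightarrow-introduction step, so c occurs in no undischarged assumption; restriction 2 holds because c does not occur in \phi = P(x) \rightarrow P(x).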


The first of these restrictions makes perfect sense, but I don't understand the second. On my current understanding, it doesn't seem necessary, because the situation it forbids can never arise. When moving from the premise to the conclusion of this rule, every occurrence of t is removed from \phi[t/x] and replaced with x to create \phi. So, since \phi cannot contain t, why do we need restriction 2? For example:

\frac {P(t) \wedge Q(s)} {\forall x (P(x) \wedge Q(s))}

The formula \phi = P(x) \wedge Q(s) can never contain t, because every occurrence of t is replaced in the very process of applying the rule!
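
To spell out my reading on this example: taking \phi = P(x) \wedge Q(s), the premise of the rule is

\phi[t/x] = (P(x) \wedge Q(s))[t/x] = P(t) \wedge Q(s),

so every occurrence of t in the premise is produced by the substitution, and \phi itself is t-free by construction.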

I have asked this question on other forums, and no one has provided an example of a natural deduction argument that derives a false conclusion by using universal generalization in breach of restriction 2. It seems clear that something in my understanding is faulty. If someone could help, that would be great!