You cannot claim that, because a square root x^(1/2) is defined to be the positive number y such that y^2 = x -- so, in fact, x^(1/2) is not defined when x is negative.
If (a^b)^c = a^(bc) held for all real numbers a, b, and c, then you could prove that -5 = 5:
-5 = (-5)^1
   = (-5)^(2 * (1/2))
   = ((-5)^2)^(1/2)
   = 25^(1/2)
   = 5,

so that -5 = |-5|. But something obviously went wrong: the move from (-5)^(2 * (1/2)) to ((-5)^2)^(1/2) did not preserve the equality. My question is, why do many textbooks state the above rule when it allows for this type of deduction? For example, see rule 3 at http://math.uww.edu/~mcfarlat/141/exponent.htm, where all numbers "must be reals". You can see in that list that rule 13 fails if you let a be -5. I'd also like to know whether anyone has an improved (more rigorous) version or explanation of the rule.
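As an aside, the failure of the rule is easy to see numerically; the following Python snippet (purely illustrative, not part of the original question) evaluates both sides of (a^b)^c = a^(bc) with a = -5, b = 2, c = 1/2:

```python
# Compare ((-5)^2)^(1/2) with (-5)^(2 * 1/2).
# If the rule (a^b)^c = a^(bc) held unconditionally, these would be equal.
a, b, c = -5, 2, 0.5

lhs = (a**b)**c   # ((-5)^2)^(1/2) = 25^(1/2) = 5.0
rhs = a**(b * c)  # (-5)^1.0 = -5.0

print(lhs)  # 5.0
print(rhs)  # -5.0
```

Note that Python happens to give -5.0 for the right-hand side only because 2 * 0.5 is exactly the integer-valued float 1.0; for a genuinely fractional exponent on a negative base (e.g. (-5)**0.5), Python 3 leaves the reals entirely and returns a complex number.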
I see that, but that was my point. The popular rule that I put at the top of the post doesn't rule out any step of my deduction. My mistake (or counterexample to the rule) was choosing a negative value for a. I guess I wanted to know why that's not explicitly precluded in the canonical exponent rules. Is the idea that the reals are not closed under square roots (or any even roots), so that the rule only holds for nonnegative choices of a?
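That restriction can also be checked empirically. This sketch (my own illustration, with arbitrarily chosen ranges) spot-checks that (a^b)^c = a^(bc) holds for positive bases, where every intermediate power stays real and well-defined:

```python
import random

# Spot-check (a**b)**c == a**(b*c) for positive real bases.
# With a > 0 every power involved is a well-defined real number,
# so the identity holds (up to floating-point rounding).
random.seed(0)
for _ in range(1000):
    a = random.uniform(0.1, 10.0)   # positive base only
    b = random.uniform(-3.0, 3.0)
    c = random.uniform(-3.0, 3.0)
    lhs = (a**b)**c
    rhs = a**(b * c)
    assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(rhs))

print("identity held for all sampled positive bases")
```

No counterexample turns up with a > 0, which is consistent with the idea that the rule is safe exactly when even roots of the base (and of its powers) exist in the reals.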