# Thread: "forbidden notations", are they forbidden, and if so, why?

1. ## "forbidden notations", are they forbidden, and if so, why?

Hello,

I just registered to this forum to ask a question I couldn't get answered elsewhere.
Let me sketch the situation: I'm a dad and an experienced physicist with quite some affinity for mathematics, but I was recently baffled by something that happened to my son. I wanted to find out whether there is a fundamental piece of elementary knowledge about maths that I have managed to miss for several decades, whether it is something cultural, or whether the situation is simply weird.

I went to a teachers' forum in my country, but I was essentially insulted away from it because I dared to question a practice, and I don't accept an argument from authority as an explanation - but I'm totally open to any argument that holds water.

The question is this: "is there a good reason to sanction writing down, symbolically, expressions that turn out not to correspond to mathematically existing objects, during the phase of exploring precisely their existence? Or is this a generalized cultural thing in mathematics? Or is this just a local quirk?"

Level: towards the end of high school.

Let me explain. My son had to find out whether a given function had a derivative at a given point a.

He started out writing $f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h} = \ldots$ to find the value of the limit, concluding that since the limit exists and equals some number, f is differentiable at a and its derivative equals the number he found.

This was barred. He first had to work out (f(a+h) - f(a))/h, show that this expression existed, and only then was he allowed to write lim before it, and in the end conclude that, given that the limit existed, f was differentiable, and write down f'(a).

Now, I have no memory of ever having done things "backwards" that way. It seems to be forbidden to write f'(a) if one isn't yet sure that it exists. But that is, to me, a strange thing to do if the question to be answered is precisely whether it exists.

I don't see the problem in writing f'(a) = lim ..., to eventually conclude that, if the limit doesn't exist, f'(a) doesn't exist, or, as was the case, if the limit exists, that this is the f'(a) we were looking for. I always did it that way.

Of course, as long as an object isn't proved to exist, one cannot USE it, but I didn't know it was FORBIDDEN to write it down. Because if it is forbidden even to write it down, how do you write down the question symbolically at all?

So, is it generalized practice in maths to forbid writing down the symbolic expression of objects that might not exist (NOT calculating with them!) during the phase of exploring their existence?

2. ## Re: "forbidden notations", are they forbidden, and if so, why?

Hey Zoroaster.

I'd recommend just using the standard language mathematicians use to write down definitions.

Any good set of introductory books on mathematics should elucidate this.

3. ## Re: "forbidden notations", are they forbidden, and if so, why?

It doesn't sound like it has been forbidden so much as that the teacher is trying to ensure that he understands the process. Also, if you get into the habit of using bad notation, others find it difficult to understand what you mean and, even worse, so might you when you look at it several weeks or months later.
$$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h} \quad \text{if the limit exists}$$
is OK. But you still then have to demonstrate that the limit exists and the final line will still be $f'(a)=c$ (if the limit exists).

4. ## Re: "forbidden notations", are they forbidden, and if so, why?

Originally Posted by Archie
It doesn't sound like it has been forbidden so much as that the teacher is trying to ensure that he understands the process. Also, if you get into the habit of using bad notation, others find it difficult to understand what you mean and, even worse, so might you when you look at it several weeks or months later.
$$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h} \quad \text{if the limit exists}$$
is OK. But you still then have to demonstrate that the limit exists and the final line will still be $f'(a)=c$ (if the limit exists).
Ah. So just to get the gist of it, is the following "allowed" or not (because these are things I used to write down without problems...)?

Imagine that the problem is the following:

"Let the function f be specified by f(x) = (x^2 + 5)/(x-6). Is 6 part of the domain of f?"

The way I'm used to tackling this problem is as follows:

"Let us suppose that 6 is part of the domain of f. In that case, f(6) = (6^2 + 5)/(6-6) = 41/0. As 41/0 is not a (real) number (because 0 doesn't belong to the domain of the multiplicative inverse in the field (R, +, *)), f(6) doesn't exist, and hence 6 cannot be part of the domain of f".

I know that the "clean" way to do it is: "the denominator of f(x) is x-6. As this denominator becomes 0 when x is 6, one cannot evaluate the expression for f(x) at x = 6, so 6 cannot be a member of the domain of f". But when you think about it, the argument is exactly the same, and in fact it is because one "would find but doesn't dare" to write 41/0 that one goes on to consider the denominator. So my question is: is it generalized practice not to allow these "ugly" expressions, even if they are in fact the core of the argument?
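As a quick informal aside (a Python sketch of my argument, not part of the original exercise): naive substitution at x = 6 fails in exactly the way the 41/0 step predicts, while substitution at any other point succeeds.

```python
def f(x):
    # the formula from the problem statement: f(x) = (x^2 + 5)/(x - 6)
    return (x**2 + 5) / (x - 6)

print(f(7))  # 54.0: 7 is in the domain

# substituting x = 6 attempts 41/0, which has no real-number value
try:
    f(6)
    in_domain = True
except ZeroDivisionError:
    in_domain = False
print(in_domain)  # False: 6 is not in the domain
```

The `try`/`except` here plays the role of "41/0 is not a real number": the formal substitution is written down first, and its failure to evaluate is what settles the domain question.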

5. ## Re: "forbidden notations", are they forbidden, and if so, why?

I think it depends on who is writing, their intended audience and purpose, and also who is reading. I would ignore minor notational errors or shortcuts from colleagues at the university in a conversation, because I know that they understand the concept. But I wouldn't if they wrote it in teaching materials or an article for publication, because the first needs to be accurate to avoid confusion while the second must be written formally. I wouldn't accept it from a student because of the need to ensure that there is no misunderstanding on their part.

In your example, the error is not in evaluating $\displaystyle f(6)$ because you stated an assumption that it is within the domain. Rather, the error is in failing to state that the assumption leads to a contradiction and is thus false.

In essence, mathematical notation has rules just as natural language has rules. And the rules serve the same function: to express meaning clearly. The student (or, indeed, anyone) should always be looking for ways to express themselves clearly and accurately. Deliberate failure to adhere to the rules of expression in mathematics is akin to refusing to use tenses properly or to spell correctly in English. One appears ill-educated and will cause confusion and misunderstanding.

6. ## Re: "forbidden notations", are they forbidden, and if so, why?

Archie may think that this is inconsistent with his views, but it is at least similar to his views.

First, it is fully permissible in formal mathematical proofs to assume that P is true, deduce logically that the assumption leads to a contradiction, and thus conclude that P is false. All that is "forbidden" is to assume the truth of P in order to prove P true. This has nothing to do with notation: it has to do with what is meant by "proof." If your notation formally implies the truth of what is to be demonstrated, then it is incorrect in a formal demonstration.

Second, during the exploration phase, there are no "rules" at all that I am aware of. In fact, one way to find a proof of P is to assume its truth, derive a previously proven proposition, and finally reverse the chain of reasoning (if possible). I think Pappus described that method about 2000 years ago. Explorations are not proofs.

Example. I may well write $f(x) = \dfrac{\sin(x)}{x}$ informally

without explicitly noting that the function does not exist if x = 0. Of course, I am not a mathematician.

But if I were trying to demonstrate formally what the limit of f(x) is at zero, I certainly would not use the notation f(0). Personally, even in my informal explorations, I probably would not use the notation f(0) because it might lead me into an error of logic, but that is prudence, not a rule.

7. ## Re: "forbidden notations", are they forbidden, and if so, why?

Originally Posted by JeffM
Archie may think that this is inconsistent with his views, but it is at least similar to his views.

First, it is fully permissible in formal mathematical proofs to assume that P is true, deduce logically that the assumption leads to a contradiction, and thus conclude that P is false. All that is "forbidden" is to assume the truth of P in order to prove P true. This has nothing to do with notation: it has to do with what is meant by "proof." If your notation formally implies the truth of what is to be demonstrated, then it is incorrect in a formal demonstration.

Second, during the exploration phase, there are no "rules" at all that I am aware of. In fact, one way to find a proof of P is to assume its truth, derive a previously proven proposition, and finally reverse the chain of reasoning (if possible). I think Pappus described that method about 2000 years ago. Explorations are not proofs.

Example. I may well write $f(x) = \dfrac{\sin(x)}{x}$ informally

without explicitly noting that the function does not exist if x = 0. Of course, I am not a mathematician.

But if I were trying to demonstrate formally what the limit of f(x) is at zero, I certainly would not use the notation f(0). Personally, even in my informal explorations, I probably would not use the notation f(0) because it might lead me into an error of logic, but that is prudence, not a rule.
We're getting pretty close to what I'm wondering about, in fact. But first of all, let us all agree on something, because in another discussion I think this wasn't clear and got me into trouble. I fully agree with all of you that one cannot *use* a non-existing mathematical object. However, what I'm trying to find out is:
- whether using a notation that implies a non-existing object is something that is avoided in formal publications (say, journal articles, textbooks, ...), but can be used without problems in, precisely, an "explanation of one's reasoning".
- whether using such a notation is a "no go", culturally, in the maths world (though people may do it informally; a bit like how insulting people is culturally not done, but in informal conversations, when we're angry, it can happen).
- whether there are good LOGICAL reasons never to do so, because it is known to lead into error in certain, even if exotic, cases.

I thought the situation was the first one, but it might be the second, or even the third.

I actually thought that using such notation is especially helpful when a student tries to explain his reasoning to his teacher - but I have the impression it is seen the other way around.

To come to your example: if f(x) = sin(x)/x, and the question is whether this function exists at 0, my idea was that this question can be formalized, from the student's point of view, exactly as:
"does f(0) exist?"
and then go on exploring what f(0) might imply:

To obtain f(0), one has to substitute 0 for x in the expression, which should give sin(0)/0 = 0/0.
Now, sin(0) is a mathematically existing object and evaluates to the number 0; 0 is a number, but 0/0 doesn't evaluate to a number. From exactly this impossibility, one can then conclude that f(0) doesn't exist.
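For what it's worth, this exploration can be mimicked mechanically (a Python sketch; the `try`/`except` stands in for "the expression has no real-number interpretation"):

```python
import math

def f(x):
    return math.sin(x) / x

# the formal substitution x = 0 yields sin(0)/0 = 0/0, which fails to evaluate
try:
    f(0)
    f0_exists = True
except ZeroDivisionError:
    f0_exists = False
print(f0_exists)  # False: f(0) does not exist

# yet the limit as x -> 0 exists and equals 1
print(f(1e-8))  # very close to 1.0
```

The point the sketch makes is exactly the one in the text: the substitution can be written down and attempted, and its failure is itself the answer to "does f(0) exist?", even though the limit at 0 exists.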

I would find, as a teacher, that a student writing this has perfectly understood what is going on. But of course nobody writes this in a formal publication, because when you write a publication you've already done this on scratch paper, and you KNOW that the division by 0 was the culprit, so you don't have to "deduce" it. You can directly write: "I cannot evaluate f at 0, because the denominator becomes 0".

But, as a student, confronted with the question and not knowing the answer already, your "exploratory phase" is exactly what is written down above, no?
You can of course, as a student, pretend to write a formal answer and HIDE this reasoning from the teacher, but then the teacher cannot see how you got there in the first place. This is why I'm somewhat surprised that it is precisely with students that one frowns upon such "forbidden" notation, because that is where it is useful, I would think.

Now, as I said, it can be a "cultural" thing: it is simply not done. It can even be a logical thing, in the sense that one can get into real errors by doing so (although I think formal logic tells us exactly the opposite).

Mind you, if we write f(x) = x^2/x, we CANNOT say that f(x) = x * x/x = x * 1 = x, and then put 0 into it. The reason is that we would have been using calculation rules that are simply not valid (0/0 is not 1). As long as one hasn't established that x is a non-zero number, one cannot simplify x/x = 1, of course. But writing f(0) = 0^2/0, which is 0/0, which is not a number, doesn't seem problematic to me.

In a similar way, the derivative of a function at a is DEFINED as the limit of (f(a+h)-f(a))/h if that limit exists, and is said not to exist if that limit doesn't exist. So to me, there is full "right of substitution" between f'(a) and lim_{h->0} (f(a+h)-f(a))/h: if one exists, the other exists and is equal to it, in both directions.

As such, to answer the question whether the derivative of f exists at a, I'd start out writing:
f'(a) = lim .... and explore whether this limit expression leads to a result. If it doesn't, f'(a) is demonstrated not to exist; if it does, f'(a) is demonstrated to exist and evaluates to exactly the result I found, no?

In other words, when writing f'(a) = lim ...., I'm just writing down my exploration on the way to finding the answer.

In a publication, of course, I KNOW the answer (I already did the work), so I don't have to explore it. I simply SHOW that it is the answer. But one doesn't see how I got there.
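The "write the difference quotient, then explore its limit" order can be imitated numerically (a hedged sketch with a made-up example, f(x) = x^2 at a = 3, not the son's actual exercise):

```python
def diff_quotient(f, a, h):
    # the expression whose limit, if it exists, defines f'(a)
    return (f(a + h) - f(a)) / h

f = lambda x: x * x  # f(x) = x^2, so f'(3) should be 6 if it exists

for h in (0.1, 0.01, 0.001):
    # algebraically (6h + h^2)/h = 6 + h, so the values approach 6
    print(diff_quotient(f, 3, h))
```

If the printed values settled on no number at all, that same exploration would instead be evidence that the derivative does not exist; the notation for the quotient is written down either way.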

8. ## Re: "forbidden notations", are they forbidden, and if so, why?

The point is that we use the mathematical language to communicate. Deviation from that language can cause confusion and misunderstanding.

Sure, there are occasions on which we can play fast and loose, but if you wish to avoid that, you must try to use the language properly. If you fail to do so and misunderstanding results, it is your fault. Try to avoid bad habits.

9. ## Re: "forbidden notations", are they forbidden, and if so, why?

Probably at some time you read The Logic of Scientific Discovery, wherein a distinction is made between the logic and the psychology of discovery. There are no formalities in how we come up with hypotheses: analogy and metaphor and idealization and even outright delusion may lead to a fruitful hypothesis. Different fields have different methods for confirming hypotheses, and in that respect you can say that there is a "culture of mathematics" or a "culture of history", although I would prefer to say a "culture of mathematicians" or a "culture of historians" because I am suspicious of reification of mental concepts.

Mathematicians have developed an artificial language that is very effective (though not perfect) in avoiding ambiguity and contradiction. As Archie says, you have no one to blame but yourself if you misuse that language and cause misunderstanding. If I say "work" but mean "torque" (they do rhyme), then people who use those words in a commonly accepted technical sense are going to be annoyed.

As Archie said in an early post, there are degrees of appropriate formality. There used to be a nuisance at some math help sites who would carp if someone answered a question with

$f(x) = \dfrac{\sin(x)}{x}$ instead of $x \ne 0 \implies f(x) = \dfrac{\sin(x)}{x}.$

Most of us dismissed him or her as a jerk. Such sites are not formal publications by and for professionals.

However, it is highly likely to be confusing to students to use technical terms and technical notation in non-technical ways. If I have defined f(x) as sin(x) / x without any limitation on x and then use the term f(0), I may have a huge hole in my logic, and, even if my logic is not affected adversely, the student is not learning how to think or communicate mathematically.

10. ## Re: "forbidden notations", are they forbidden, and if so, why?

Originally Posted by Zoroaster
I just registered to this forum to ask a question I couldn't get answered elsewhere.
Let me sketch the situation: I'm a dad and an experienced physicist with quite some affinity for mathematics, but I was recently baffled by something that happened to my son. I wanted to find out whether there is a fundamental piece of elementary knowledge about maths that I have managed to miss for several decades, whether it is something cultural, or whether the situation is simply weird.

I went to a teachers' forum in my country, but I was essentially insulted away from it because I dared to question a practice, and I don't accept an argument from authority as an explanation - but I'm totally open to any argument that holds water.

The question is this: "is there a good reason to sanction writing down, symbolically, expressions that turn out not to correspond to mathematically existing objects, during the phase of exploring precisely their existence? Or is this a generalized cultural thing in mathematics? Or is this just a local quirk?"
I have carefully read this entire thread. I think that I understand the objections.
We have a duty to communicate clearly. So I think most of this is simply about matters of style.

In graduate school I had a professor who would get really angry if someone presenting a proof said "let $x$ be ...". It should be "suppose that $x$ is ...". Style?

11. ## Re: "forbidden notations", are they forbidden, and if so, why?

If I use Spanish as a template, then "let $\displaystyle a$ be..." would be a contraction of "if I were to let $\displaystyle a$ be...". But in any case, $\displaystyle a$ does not have a permanent value (in the way $\displaystyle \pi$ does). So whatever value we give it is purely for the purposes of the argument. I think it's a style thing. I certainly don't see any confusion that might arise.

12. ## Re: "forbidden notations", are they forbidden, and if so, why?

Originally Posted by Plato
I have carefully read this entire thread. I think that I understand the objections.
We have a duty to communicate clearly. So I think most of this is simply about matters of style.

In graduate school I had a professor who would get really angry if someone presenting a proof said "let $x$ be ...". It should be "suppose that $x$ is ...". Style?
Ah, so it IS "culturally not done". I didn't even know that. During my whole time as a student, no one ever frowned upon it, and I really thought that writing something like "f(x) = x^2/x, hence f(0) = 0^2/0, which is not a number, so f(0) doesn't exist" was not even an *abuse* of mathematical writing, but a correct way of expressing that you were exploring something that turned out not to exist, in the same way that one could write: 8/2 is a natural number but 8/3 is not a natural number (but a rational number, if you want).

But it is good to know that "in the math world" this is considered "abuse". I wonder where it comes from.

I can understand, on the one hand, the pedagogical advantage of not allowing it. It is somewhat like using a nuke to kill a mosquito in my eyes, but I can understand it in the following way:
if one allows the formal expression of non-existing objects, one has to keep a clear distinction between those objects that do exist (and with which one can calculate according to the rules of their structure) and those objects that (potentially) don't exist, and which are hence, up to that point, simple symbolic expressions that do not obey any specific calculational properties. Maybe, pedagogically, keeping this distinction is difficult, or considered dangerous, and hence one simply forbids one class, so that this confusion is never possible. I think this is not very profound, in the sense that one also meets different existing mathematical objects with different structural rules, and one shouldn't confuse them either (if A and B are matrices, A.B is not equal to B.A; if A and B are numbers, then A.B = B.A). So saving one from confusion is an argument, but not a very strong one in my opinion.

But I think, on the other hand, that forbidding such notation also takes away the opportunity for the student to express his actual thinking symbolically, which is, pedagogically, rather problematic.

13. ## Re: "forbidden notations", are they forbidden, and if so, why?

Originally Posted by Zoroaster
But I think, on the other hand, that forbidding such notation also takes away the opportunity for the student to express his actual thinking symbolically, which is, pedagogically, rather problematic.
This is the problem. If the student is thinking $f(0)=\frac00$, then the student doesn't clearly understand what is really happening. He needs to modify his thinking. The idea is that one should learn how to express oneself accurately. Once one knows and understands the syntax and the structures it represents, one is in a position to judge whether there is value in breaking the rules in each individual case.

14. ## Re: "forbidden notations", are they forbidden, and if so, why?

Originally Posted by Archie
This is the problem. If the student is thinking $f(0)=\frac00$, then the student doesn't clearly understand what is really happening.
This is strange. I would think that, if $f(x) = x^2/x$, then of course $f(0) = \frac00$. Not in the field of real numbers, but on the level of formal expressions. But I seem to have a different view of what is actually meant by a formal expression than several mathematicians, and I have to say that I'm puzzled by this, as this is how I remember always having understood things.

Let me explain how I see this:

When writing down $f(x) = x^2/x$, I see this as a formal expression which we will try to interpret in the field of real numbers, as far as we can. If we can completely interpret it, that means there is an object (an element) of this field that is represented by f(x). If we cannot interpret it, that means the formal expression does not represent such an element.

But the "rule of substitution" is outside this field, and is already present at the level of the formal language. If you write a formal expression f(x) = .... x .... x ...., then I see this as a formal expression in which one can substitute for the symbol x whatever we put between the parentheses: f(a) = .... a .... a ....,
f(banana) = .... banana .... banana ....

So of course f(0) then represents the formal expression .... 0 .... 0 ....

As such, when f(x) = x^2/x and we write f(0), we mean 0^2/0, just as a formal substitution.

But now, the thing is that we wanted to see whether this formal expression obtained can be interpreted in the field of real numbers. The written formal expression has to correspond to a meaningful SYNTAX in this field, and it does: the expression x^2/x represents a tree structure, with, on top, a division (which is shorthand for a multiplication and an "inverse element" function call), in one branch simply the leaf "x", and in the other branch the square of x (which is itself a subtree consisting of a multiplication and two identical leaves "x").

If we have the expression 0^2/0, we have the same, but with the leaves "x" replaced by the leaves "0".

Now, the sub-tree of the square CAN be interpreted in the field of real numbers: the two leaves 0 exist in that field, and the multiplication exists, so this sub-tree can be replaced by the real number it represents: 0^2 = 0.
However, we end up with a tree with a division on top and two leaves "0". THIS tree cannot be interpreted any more within the field of real numbers. So the formal expression 0/0 has no interpretation in the field of real numbers, and hence it stops there. 0/0 cannot be evaluated to a real number.
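This two-stage picture (formal substitution first, interpretation in the reals second) can be made concrete with a toy sketch in Python; the tuple encoding and the names `substitute`/`interpret` are mine, not any standard notation:

```python
def substitute(tree, value):
    # stage 1: purely formal substitution for the symbol 'x'; always allowed
    if tree == 'x':
        return value
    if isinstance(tree, tuple):
        op, left, right = tree
        return (op, substitute(left, value), substitute(right, value))
    return tree

def interpret(tree):
    # stage 2: try to interpret the tree in the field of real numbers
    if isinstance(tree, tuple):
        op, left, right = tree
        a, b = interpret(left), interpret(right)
        if op == 'mul':
            return a * b
        if op == 'div':
            if b == 0:
                # the tree is well-formed, but names no real number
                raise ValueError('division by 0: no interpretation in the reals')
            return a / b
    return tree

expr = ('div', ('mul', 'x', 'x'), 'x')  # the tree for x^2/x

print(interpret(substitute(expr, 3)))  # 3.0: f(3) exists
# substitute(expr, 0) is a perfectly good formal tree,
# but interpret() fails on it: f(0) has no real-number value
```

Note how the sketch separates exactly the two steps argued for above: building the tree for 0^2/0 never fails, and only the attempt to interpret it in the reals does.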

But I don't see why f(0) does not formally represent the tree that is written 0/0.

Consider f(3) = 3^2/3 = 9/3 = 3. The fact that f(3) does evaluate to a real number, 3, means that f(3) is a real number. If f is to be a real function, the fact that f(3) evaluates to the real number 3 means that the couple (3,3) is an element of f.
The fact that f(0) doesn't evaluate to a real number means that there is no couple (0,?) that can be an element of f.

Of course, you can see this differently: f(0) is the image of 0 under the function f, and the expression f(x) = x^2/x only holds insofar as x^2/x is a real number; if x^2/x is not a real number, f(x) cannot be written as x^2/x. But if it is given that, whatever f(x) is supposed to be, it should be the formal expression x^2/x interpreted in the field of real numbers, we can just as well claim that f(x) is always x^2/x, and if x^2/x is not the formal expression of a real number, then that means that f(x) is "a formal expression that is not a real number", no?

In any case, I don't see what's wrong, if it is GIVEN that f(x) = x^2/x, with considering the formal substitution f(0) = 0^2/0 and finding out that this is not a real number. What else can you do with f(0)? You have no other given about it than the formal expression, and nobody told you that you shouldn't use that expression when x is replaced by 0 (on the contrary).

If one wanted you to avoid writing f(0), one should have specified beforehand that the expression f(x) = x^2/x is not to be used when x = 0. But that was exactly the question to be answered: for what values of x does this identification make sense?

Whether one FIRST sees f(x) as a formal expression (and then the substitution is valid) and only AFTERWARDS considers it as the image of x through f - because that's the only thing we know about f - or whether one first sees f(x) as the image of x through f, but then the assignment to the formal expression doesn't make sense at certain points, is a matter of taste, no?

In the end, we can only consider f(x) to be the image under a function f at a given number when the expression on the right evaluates to a real number after substituting that number. Whether one does this "from right to left" or "from left to right" is just a matter of convenience, I would think.

I can understand the trouble, in that we have a double view of f(x) here: one as a formal expression (with no direct relation to a function f), and one as "the image of x under the function f", but this kind of jumping back and forth is often done in informal writing, I would think.

15. ## Re: "forbidden notations", are they forbidden, and if so, why?

tldr

But
Originally Posted by Zoroaster
This is strange. I would think that, if $f(x) = x^2/x$, then of course $f(0) = \frac00$. Not in the field of real numbers, but on the level of formal expressions.
The expression $\displaystyle f(x) = \frac{x^2}{x}$ is itself shorthand for
$\displaystyle f: \mathbb R \to \mathbb R$ such that $\displaystyle f(x) = \frac{x^2}{x}$
(but see below). And this specifies that the range of the function is the real numbers. Since $\displaystyle \frac00$ is not a real number, $\displaystyle f(0) \ne \frac00$.

Of course, the expression above is not completely accurate either, because the fact that $\displaystyle \frac00$ is not a number means that the function $\displaystyle f$ is not defined at zero (and neither is zero in the range of the function), so we really ought to write something like
$\displaystyle f: \mathbb R \setminus \{0\} \to \mathbb R \setminus \{0\}$ such that $\displaystyle f(x) = \frac{x^2}{x}$

And this perfectly illustrates the point that if you do not properly learn the correct syntax and how to use it, breaking the rules is likely to cause confusion. It is only by understanding that $\displaystyle f(x) = \frac{x^2}{x}$ implicitly includes all the details concerning the domain and range of the function that we are able to properly use the notation.

Of course, there are times when we like to play fast and loose with this precisely because the formality gets in the way of intuitive understanding or obfuscates meaning through verbosity. But equally we must be aware that this creates ambiguity that has no place in mathematics. Differential equations are excellent examples of an area in which we very often prefer not to deal with domains and ranges in order to describe the most general solution that we can. But in doing so the meaning of "solution" and especially useful concepts such as the existence and uniqueness of solutions become very imprecise.