Originally Posted by **tbyou87**

Suppose we have a continuous function f : [a,b] -> R such that f(a) + f(b) = 0. Show that f(x) = 0 for some x in [a,b].

So f(a) = -f(b). Then if f(a) is positive, f(b) is negative, and vice versa. WLOG assume f(a) is positive and f(b) is negative. Since f(b) < 0 < f(a), the intermediate value theorem applies, and it gives at least one x in [a,b] such that f(x) = 0.
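The argument above can be illustrated numerically. Here is a minimal sketch using a hypothetical example, f(x) = cos(x) on [0, pi], which satisfies f(0) + f(pi) = 1 + (-1) = 0. Bisection repeatedly halves the interval while keeping f positive at the left endpoint and negative at the right, narrowing in on the x whose existence the IVT guarantees:

```python
import math

def f(x):
    # Hypothetical example function: f(0) = 1, f(pi) = -1, so f(0) + f(pi) = 0.
    return math.cos(x)

def bisect(f, a, b, tol=1e-12):
    """Locate a root of f in [a, b], assuming f(a) > 0 > f(b)."""
    while b - a > tol:
        m = (a + b) / 2
        if f(m) > 0:
            a = m  # root still lies to the right of m
        else:
            b = m  # root lies at or to the left of m
    return (a + b) / 2

root = bisect(f, 0.0, math.pi)
print(root)  # close to pi/2, where cos vanishes
```

This is only a numerical illustration, not the proof itself; the IVT is what guarantees the root exists for every continuous f with f(b) < 0 < f(a).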

I have just one question, assuming my logic above is correct: what happens if f(a) and f(b) are both 0? The intermediate value theorem in my book reads:

If f is a continuous real-valued function on an interval I, then f has the intermediate value property on I: whenever a, b in I with a < b, and y lies between f(a) and f(b) [i.e. f(a) < y < f(b) or f(b) < y < f(a)], there exists at least one x in (a,b) such that f(x) = y.

According to this definition, given f(a) = f(b) = 0, the statement f(a) < y < f(b) doesn't seem to make sense.

Thanks