1. **Analysis Help**

Suppose that $\displaystyle g:[0,1] \rightarrow \mathbb{R}$ is continuous, g(0) = g(1) = 0, and for any $\displaystyle c \in (0,1)$ there is some $\displaystyle k > 0$ such that:

$\displaystyle 0 < c-k < c < c+k < 1$ and $\displaystyle g(c) = \frac{1}{2}(g(c+k) + g(c-k))$.

Prove that g(x) = 0 for any x in [0,1].

My work:

Since $\displaystyle g$ is a continuous function on a closed interval, it attains its maximum, say $M$, on $[0,1]$. So let A = { $\displaystyle x \in [0,1] : g(x) = M$}. Since $g$ attains its maximum, A is nonempty, and A is clearly bounded above by 1, so $\displaystyle x_0 = \sup A$ exists. Moreover, $A = g^{-1}(\{M\})$ is closed because $g$ is continuous, so $\displaystyle x_0 \in A$, i.e. $g(x_0) = M$.

Suppose that $\displaystyle M > 0$. Then $\displaystyle x_0 \in (0,1)$, since $g(0) = g(1) = 0 < M$.

Then there is a $k > 0$ such that $\displaystyle x_0 - k,\, x_0 + k \in (0,1)$ and $\displaystyle M = g(x_0) = \frac{1}{2} \big(g(x_0 + k) + g(x_0 - k)\big)$.

But since $M$ is the maximum of $g$, both $\displaystyle g(x_0 + k), g(x_0 - k) \leqslant M$; if either were strictly less than $M$, their average would be strictly less than $M$. Hence $\displaystyle g(x_0 + k) = g(x_0 - k) = g(x_0) = M$.

This is where I'm getting stuck. Can anyone point me in the right direction?
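As a numerical sanity check (an aside, not part of the proof), one can verify that a nonzero candidate such as $g(x) = \sin(\pi x)$ fails the hypothesis: $\sin(c+k) + \sin(c-k) = 2\sin c \cos k < 2\sin c$ for every admissible $k$, so no $k$ satisfies the midpoint identity. The helper name `midpoint_gap` below is my own, not from the problem:

```python
import math

def midpoint_gap(g, c, k):
    """g(c) - (g(c+k) + g(c-k)) / 2; the hypothesis requires this to be 0 for some k."""
    return g(c) - 0.5 * (g(c + k) + g(c - k))

# A continuous candidate with g(0) = g(1) = 0 that is NOT identically zero.
g = lambda x: math.sin(math.pi * x)

# Scan admissible k at c = 0.3 (need 0 < c - k and c + k < 1).
c = 0.3
gaps = [midpoint_gap(g, c, j / 1000) for j in range(1, 300)]

# Every gap is strictly positive, so no admissible k works: this g violates
# the hypothesis, consistent with the claim that only g = 0 satisfies it.
print(min(gaps) > 0)  # prints True
```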

2. Originally Posted by Math Major
This is where I'm getting stuck. Can anyone point me in the right direction?
You almost got it.

Let $\displaystyle \sup_{x\in[0,1]}g(x)=\alpha$ and $\displaystyle \beta=\sup g^{-1}\left(\{\alpha\}\right)$. Since $g$ is continuous, $\displaystyle g^{-1}\left(\{\alpha\}\right)$ is closed, so $\displaystyle \beta\in g^{-1}\left(\{\alpha\}\right)$, i.e. $\displaystyle g(\beta)=\alpha$.

We claim that either $\displaystyle \beta=1$ or $\displaystyle \beta=0$. To see this, suppose not; then $\displaystyle \beta\in(0,1)$. Thus, by assumption, there is some $\displaystyle k>0$ with $\displaystyle 0<\beta-k<\beta<\beta+k<1$ such that $\displaystyle g(\beta)=\frac{1}{2}\left(g(\beta-k)+g(\beta+k)\right)$. Note that both $\displaystyle g(\beta+k),g(\beta-k)\leqslant g(\beta)$, so if $\displaystyle g(\beta+k)\ne g(\beta)$ then $\displaystyle g(\beta)=\frac{1}{2}\left(g(\beta-k)+g(\beta+k)\right)<\frac{1}{2}\left(g(\beta)+g(\beta)\right)=g(\beta)$, which is evidently a contradiction. It follows that $\displaystyle g(\beta+k)=g(\beta)=\alpha$, so $\displaystyle \beta+k\in g^{-1}\left(\{\alpha\}\right)$, which contradicts $\displaystyle \beta=\sup g^{-1}\left(\{\alpha\}\right)$.

Thus $\displaystyle \beta\in\{0,1\}$, and either way $\displaystyle \alpha=g(\beta)=0$, i.e. $\displaystyle \sup_{x\in[0,1]}g(x)=0$. A similar analysis shows that $\displaystyle \inf_{x\in[0,1]}g(x)=0$. Thus $\displaystyle g\equiv 0$.
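The same maximum-principle idea has a discrete analogue (an illustration, not the proof itself): imposing the midpoint identity with a fixed step at every interior grid point, together with zero boundary values, yields a tridiagonal linear system whose only solution is the zero vector. A minimal sketch, where `solve_discrete_midpoint` is a hypothetical helper name:

```python
# Discrete version: values v[0..n+1] with v[0] = v[n+1] = 0 and
# v[i] = (v[i-1] + v[i+1]) / 2 at every interior i, rewritten as
# -v[i-1] + 2 v[i] - v[i+1] = 0. We solve the resulting n x n
# tridiagonal system with the Thomas algorithm.

def solve_discrete_midpoint(n):
    """Solve -v[i-1] + 2 v[i] - v[i+1] = 0 for i = 1..n with zero boundaries."""
    a, b, c = [-1.0] * n, [2.0] * n, [-1.0] * n  # sub-, main, super-diagonal
    d = [0.0] * n                                # zero right-hand side
    # Forward elimination
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # Back substitution
    v = [0.0] * n
    v[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        v[i] = (d[i] - c[i] * v[i + 1]) / b[i]
    return v

print(solve_discrete_midpoint(9))  # the only solution: all zeros
```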

3. Ah, thank you so much!

I was being too dumb to realize that $\displaystyle g(x_0 + k) = g(x_0)$ is itself the contradiction: since $\displaystyle x_0$ is the supremum of all points that map to the maximum of g on [0,1], no point beyond $\displaystyle x_0$ can also attain it. Sorry for the stupid question -_-;