Hi,
I am having some problems understanding the proof of the following theorem.

Theorem:
Suppose that g is a real-valued function, defined and continuous on a bounded closed interval [a,b] of the real line, and assume that g(x)\in [a,b] for all x\in [a,b].
Let \xi = g(\xi)\in [a,b] be a fixed point of g, and assume that g has a continuous derivative in some neighborhood of \xi with |g'(\xi)|<1. Then the sequence (x_k) defined by x_{k+1}=g(x_k), k\geq 0, converges to \xi as k\rightarrow\infty, provided that x_0 is sufficiently close to \xi.
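
Just to check what the theorem is claiming, here is a small numerical sketch (my own illustration in Python, not from the book), using g(x)=\cos(x), which maps [0,1] into itself and satisfies |g'(x)|=|\sin(x)|\leq \sin(1)<1 there:

# Fixed-point iteration x_{k+1} = g(x_k) for g = cos, whose fixed point is xi ~ 0.739085.
import math

def fixed_point_iteration(g, x0, tol=1e-12, max_iter=200):
    # Iterate until two successive iterates agree to within tol.
    x = x0
    for k in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next, k + 1
        x = x_next
    return x, max_iter

xi, steps = fixed_point_iteration(math.cos, x0=0.5)
print(xi, steps)  # roughly 0.7390851332151607, reached in well under 100 steps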

Proof:
By hypothesis, there exists h>0 such that g' is continuous in the interval
[\xi-h, \xi+h]. Since |g'(\xi)|<1 we can find a smaller interval
I_{\delta}=[\xi-\delta,\xi+\delta], where 0<\delta\leq h, such that |g'(x)|\leq L in this interval, with L<1.
To do so, take L=\frac{1}{2}(1+|g'(\xi)|) and then choose \delta\leq h such that
|g'(x)-g'(\xi)|\leq \frac{1}{2}(1-|g'(\xi)|)
for all x in I_{\delta}; this is possible since g' is continuous at \xi.

I will stop there, as I am already unsure what's going on.
Why should I take L=\frac{1}{2}(1+|g'(\xi)|)?
By the way, in this book L is used to denote the "contraction factor", that is, the constant such that
|g(x)-g(y)|\leq L|x-y| for all x,y\in[a,b].

Thanks.