Can't make simple iteration work

Hi,

I let f(x) = e^x - 3x - 1 for x in [1, 2] and rewrite the equation f(x) = 0 such that e^x = 3x + 1.

I then rewrite to the equivalent form x = g(x) where g(x) = ln(3x + 1).

Since g is increasing I have that g(1) = ln 4 ≈ 1.386 and g(2) = ln 7 ≈ 1.946, both in [1, 2], so g maps [1, 2] into itself.

By the mean value theorem I know that for any x, y in [1, 2] there exists a number ξ between x and y such that

|g(x) - g(y)| = |g'(ξ)| |x - y|.

Since g'(x) = 3/(3x + 1) > 0, g' is monotonic decreasing and so

|g'(ξ)| ≤ g'(1) = 3/4 for all ξ in [1, 2].

g is then a contraction with k = 3/4.
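As a quick sanity check on that constant, here is a sketch that sweeps |g'(x)| = 3/(3x + 1) over a grid on [1, 2] (assuming g(x) = ln(3x + 1), as inferred from the iterates printed below):

```python
# g'(x) = 3/(3x + 1); sample it on a grid over [1, 2] to confirm
# the maximum is attained at the left endpoint x = 1.
xs = [1 + i / 1000 for i in range(1001)]
max_slope = max(3 / (3 * x + 1) for x in xs)
print(max_slope)  # maximum |g'| on the grid: 3/4, at x = 1
```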

I then look at the sequence x_{n+1} = g(x_n) with x_0 = 1 and write a small piece of code that prints out the first 20 iterations (n, x_n).
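A minimal version of that loop might look like this (a sketch in Python, assuming g(x) = ln(3x + 1) and x_0 = 1 as inferred from the output):

```python
import math

def g(x):
    # fixed-point iteration function for e^x = 3x + 1
    return math.log(3 * x + 1)

x = 1  # starting guess x_0
print((0, x))
for n in range(1, 21):
    x = g(x)
    print((n, x))
```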

(0, 1)

(1, 1.3862943611198906)

(2, 1.6407200993500939)

(3, 1.778701297541748)

(4, 1.846264051572333)

(5, 1.8777524625894917)

(6, 1.8920959939166664)

(7, 1.8985621417401013)

(8, 1.9014635019263759)

(9, 1.9027626111520601)

(10, 1.9033437520231324)

(11, 1.9036036091052895)

(12, 1.9037197823293694)

(13, 1.9037717150438356)

(14, 1.9037949295626633)

(15, 1.9038053065444669)

(16, 1.9038099450615822)

(17, 1.9038120184745679)

(18, 1.9038129452869204)

(19, 1.9038133595703104)

(20, 1.9038135447541562)

The maximum number of iterations I need to get an accuracy of

|x_n - α| ≤ ε,

where x_n is the nth iteration and α is the fixed point of g, is given by:

n ≥ ln(ε(1 - k)/|x_1 - x_0|) / ln k.

If I calculate this with

k = 3/4,

|x_1 - x_0| = ln 4 - 1 ≈ 0.3863,

and

ε = 10^(-5)

(to get an accuracy of 4 decimal digits) I get approx. 41 iterations.
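Evaluating that bound numerically (a sketch, assuming k = 3/4, |x_1 - x_0| = ln 4 - 1 and ε = 10^(-5)):

```python
import math

k = 3 / 4             # contraction constant from the derivation above
d1 = math.log(4) - 1  # |x_1 - x_0| with x_0 = 1, x_1 = g(1) = ln 4
eps = 1e-5            # target accuracy

# rearranged from k**n / (1 - k) * d1 <= eps; ln k < 0 flips the inequality
n = math.log(eps * (1 - k) / d1) / math.log(k)
print(n)  # the smallest integer n satisfying the bound is ceil(n)
```

This lands between 41 and 42, consistent with "approx. 41 iterations".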

But from my calculation of the sequence it seems as if the number of iterations needed for an accuracy of 4 decimal digits is not more than about 15.

Funny thing is, in the book I am reading they use a smaller contraction constant, and that seems to work...
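For comparison, here is the same bound evaluated for two constants: k = 3/4 from the derivation above, and k = 1/2 as a hypothetical smaller constant (my assumption; note the local slope at the fixed point is |g'(α)| = 3/(3α + 1) ≈ 0.447, much smaller than 3/4, which is why the iteration converges faster than the a priori bound predicts):

```python
import math

d1 = math.log(4) - 1  # |x_1 - x_0| with x_0 = 1, x_1 = ln 4
eps = 1e-5            # same target accuracy as above

bounds = {}
for k in (3 / 4, 1 / 2):
    # a priori bound: n >= ln(eps * (1 - k) / |x_1 - x_0|) / ln k
    n = math.log(eps * (1 - k) / d1) / math.log(k)
    bounds[k] = math.ceil(n)
    print(k, bounds[k])
```

The smaller constant predicts roughly 17 iterations, much closer to the ~15 observed, while k = 3/4 predicts 42: the a priori bound is only as sharp as the k you feed it.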

Sorry for the long post, but I hope someone can help me out.