I'm having difficulties understanding some material in a book I'm reading (Hrbacek, K., Jech, T., *Introduction to Set Theory*, 3rd ed.). The book presents three variants of the Recursion Theorem. The first variant states:

For any set $A$, any $a \in A$, and any function $g: A \times \mathbb{N} \to A$ there exists a unique infinite sequence $\langle a_n \mid n \in \mathbb{N} \rangle$ such that

a/ $a_0 = a$

b/ $a_{n+1} = g(a_n, n)$ for all $n \in \mathbb{N}$

I understand that the theorem is meant to prove the existence of functions described recursively, like $n!$, and I think I understand the proof. It introduces the term "$n$-step computation", which is a finite sequence: the first element is the initial value, and every next element is calculated from the previous one using $g$. Each separate computation is a function on some $n + 1 = \{0, 1, \ldots, n\}$, hence it's a set of ordered pairs. Therefore each $n$-step computation is an element of the power set of $\mathbb{N} \times A$. Then the set of all computations may be defined:

$$F = \{t \in \mathcal{P}(\mathbb{N} \times A) \mid t \text{ is an } n\text{-step computation for some } n \in \mathbb{N}\}$$
The proof proceeds to show that all $n$-step computations for a given $g$ are compatible functions, hence their union is a function. The proof completes by establishing that this function has the desired properties.
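To check my understanding of the proof idea, here is my own sketch in Python (not from the book): an $n$-step computation modelled as a dict (a finite set of ordered pairs), with the union of all computations assembled while checking compatibility. The particular $g(x, n) = x \cdot (n + 1)$ is my choice, so that the resulting sequence is $a_n = n!$.

```python
# Sketch (my own, not the book's): n-step computations as dicts and
# their union, for g(x, n) = x * (n + 1), which yields a_n = n!.

def n_step_computation(a, g, n):
    """The n-step computation: a function on {0, ..., n} into A."""
    t = {0: a}                    # first element is the initial value
    for k in range(n):
        t[k + 1] = g(t[k], k)     # every next element comes from the previous
    return t

def union_of_computations(a, g, up_to):
    """Union of all n-step computations for n <= up_to.
    Any two computations agree on their common domain (compatibility),
    so the union is again a function."""
    f = {}
    for n in range(up_to + 1):
        for k, v in n_step_computation(a, g, n).items():
            assert f.get(k, v) == v   # compatibility check
            f[k] = v
    return f

g = lambda x, n: x * (n + 1)   # recurrence for the factorial
f = union_of_computations(1, g, 6)
print(f[5])  # 120
```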

So far so good, but then comes the second version: a more general construction for functions like the Fibonacci sequence, in which each member is calculated from several of the preceding ones:

For any set $S$ and any function $g: \mathrm{Seq}(S) \to S$ there exists a unique sequence $\langle a_n \mid n \in \mathbb{N} \rangle$ such that:

$a_n = g(\langle a_0, \ldots, a_{n-1} \rangle)$ for all $n \in \mathbb{N}$

($\mathrm{Seq}(S)$ means the set of all finite sequences of elements of $S$; $\langle$ and $\rangle$ are used to denote sequences, and $\langle\rangle$ is used to denote the empty sequence.)

The proof relies on the first version of the Recursion Theorem. We set $A = \mathrm{Seq}(S)$, $a = \langle\rangle$, and $G: \mathrm{Seq}(S) \times \mathbb{N} \to \mathrm{Seq}(S)$, defined as:

$G(t, n) = t \frown \langle g(t) \rangle$ if $t$ is a sequence of length $n$

$G(t, n) = \langle\rangle$ otherwise
Here I'm losing the idea behind the notation and any clue how the proof stated further on works. $g$ must stand for any function that calculates the next value from the sequence of values so far. It seems that the application of the first version of the Recursion Theorem yields a sequence $\langle t_n \mid n \in \mathbb{N} \rangle$ that can be described this way:

$t_0 = \langle\rangle, \quad t_{n+1} = G(t_n, n)$
This is actually in the book. Then the book states that by now it is easy to prove that every $t_n$ is actually a member of $S^n$, the set of all functions on $n$ into $S$. But it's not obvious to me how that is so.

For example, $t_1 = \langle g(\langle\rangle) \rangle = \langle s \rangle$ (where $s$ is some element of $S$). The resulting sets of the type $\langle s_0, s_1, \ldots, s_{n-1} \rangle$ do not look like functions at all, if one takes a function to be a set of ordered pairs. Am I reading the notation wrong here?
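To make my confusion concrete, here is a small sketch (my own, with made-up elements of $S$): as far as I can tell, reading the sequence $\langle s_0, s_1 \rangle$ as a function would mean identifying it with the set of pairs $\{(0, s_0), (1, s_1)\}$, i.e. a function on $2 = \{0, 1\}$, but the angle-bracket notation alone did not make that identification obvious to me.

```python
# My own sketch: the sequence <s0, s1> versus the same object read
# as a set of ordered pairs, i.e. a function on 2 = {0, 1} into S.

s0, s1 = "a", "b"                 # some elements of S
t2_as_sequence = (s0, s1)         # the sequence <s0, s1>
t2_as_function = {0: s0, 1: s1}   # the same data as a function on {0, 1}

print(sorted(t2_as_function.items()))  # [(0, 'a'), (1, 'b')]
```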

I would also be interested in other sources where this theorem is proven, in this or another way, preferably ones more understandable for a newbie.