I'm having difficulty understanding some material in a book I'm reading (Hrbacek, K., Jech, T., Introduction to Set Theory, 3rd ed.). The book presents three variants of the Recursion Theorem. The first variant states:

For any set $\displaystyle A$, any $\displaystyle a\in A$ and any function $\displaystyle g:A \times N \to A$ there exists a unique infinite sequence $\displaystyle f:N \to A$ such that

a/ $\displaystyle f_{0}=a$

b/ $\displaystyle f_{n+1}=g(f_{n},n)$

I understand that the theorem is meant to prove the existence of functions described recursively, like $\displaystyle n!$, and I think I understand the proof. It introduces the term "n-step computation", which is a finite sequence: the first element is the initial value, and every subsequent element is calculated from the previous one using g. Each separate computation is a function on some $\displaystyle n \in N$, hence it is a set of ordered pairs. Therefore each n-step computation is an element of the power set of $\displaystyle N \times A$. The set of all computations may then be defined:

$\displaystyle \{t \in P(N \times A) \mid t \text{ is an } n\text{-step computation for some } n \in N\}$

The proof proceeds to show that, for a given g, all n-step computations are compatible functions, hence their union is a function. The proof completes by establishing that this function has the desired properties.
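To check my understanding of that construction, here is a small Python sketch of it (my own, not from the book), representing functions as dicts of index/value pairs; the choices $\displaystyle a = 1$ and $\displaystyle g(x,n) = x \cdot (n+1)$ are just an illustration, giving $\displaystyle f(n) = n!$:

```python
def n_step_computation(a, g, n):
    """The n-step computation: a function on {0, ..., n} with
    t(0) = a and t(k+1) = g(t(k), k)."""
    t = {0: a}
    for k in range(n):
        t[k + 1] = g(t[k], k)
    return t

def union_of_computations(a, g, up_to):
    """Union of all n-step computations for n <= up_to. They are
    compatible functions, so the union is again a function."""
    f = {}
    for n in range(up_to + 1):
        f.update(n_step_computation(a, g, n))  # compatible: no conflicting pairs
    return f

g = lambda x, n: x * (n + 1)   # chosen so that f(n) = n!
f = union_of_computations(1, g, 6)
print(f)  # {0: 1, 1: 1, 2: 2, 3: 6, 4: 24, 5: 120, 6: 720}
```

Each computation agrees with the shorter ones on their common domain, which is why `dict.update` never replaces a pair with a conflicting one.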

So far so good, but then comes the second version: a more general construction for functions like the Fibonacci sequence, in which each member of the sequence is calculated from several of the preceding ones:

For any set $\displaystyle S$ and any function $\displaystyle g:Seq(S) \rightarrow S$ there exists a unique sequence $\displaystyle f:N \rightarrow S$ such that:

$\displaystyle f_{n}=g(<f_{0},f_{1},\cdots,f_{n-1}>)$ for all $\displaystyle n \in N$

($\displaystyle Seq(S)$ means all finite sequences of elements of S, < and > are used to denote sequences and <> is used to denote the empty sequence).
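If I read the statement correctly, the Fibonacci sequence arises from a choice like this (my own example, not the book's):

$\displaystyle g(<>)=1,\qquad g(<s_{0}>)=1,\qquad g(<s_{0},\cdots,s_{n-1}>)=s_{n-2}+s_{n-1}\ \text{ for } n\geq 2$

so that $\displaystyle f_{0}=g(<>)=1$, $\displaystyle f_{1}=g(<f_{0}>)=1$, and $\displaystyle f_{n}=f_{n-2}+f_{n-1}$ for $\displaystyle n\geq 2$.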

The proof relies on the first version of recursion theorem. We set $\displaystyle A=Seq(S)$, $\displaystyle a=<>$ and $\displaystyle G:Seq(S) \times N \rightarrow Seq(S) $, defined as:

$\displaystyle G(t,n) = t \cup \{<n,g(t)>\}$ if t is a sequence of length n

$\displaystyle G(t,n) = <>$ otherwise
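As a sanity check on this definition, here is a Python sketch of $\displaystyle G$ (again my own, with sequences represented as dicts of index/value pairs and a Fibonacci-style g as the illustration):

```python
def g(t):
    """My example g: Seq(S) -> S for Fibonacci; first two terms are 1."""
    n = len(t)
    return 1 if n < 2 else t[n - 2] + t[n - 1]

def G(t, n):
    """G(t, n) = t union {<n, g(t)>} if t is a sequence of length n,
    and the empty sequence <> otherwise."""
    if len(t) == n:
        return {**t, n: g(t)}
    return {}

F = {}                  # F_0 = <>, the empty sequence
for n in range(7):
    F = G(F, n)         # F_{n+1} = G(F_n, n)
print(F)  # {0: 1, 1: 1, 2: 2, 3: 3, 4: 5, 5: 8, 6: 13}
```

Each pass through the loop extends the sequence by one pair, so after n steps F is a function on $\displaystyle \{0, \ldots, n-1\}$.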

Here I'm losing the idea behind the notation, and any clue how the proof stated further on works. g must stand for any function that calculates a value from a given sequence. It seems that applying the first version of the recursion theorem yields a sequence that can be described this way:

$\displaystyle F_{0}=<>$

$\displaystyle F_{n+1}=F_{n}\cup\{<n,g(F_{n})>\}$

This is actually in the book. The book then states that by now it's easy to prove that every $\displaystyle F_{n}$ is actually a member of $\displaystyle S^n$, the set of all functions on n into S. But it's not obvious to me how that is so.

$\displaystyle F_{0}=<>$

$\displaystyle F_{1}=<>\cup\{<0,g(F_{0})>\}$ ($\displaystyle g(F_{0})$ is some element of S)

$\displaystyle F_{2}=<>\cup\{<0,g(F_{0})>\}\cup\{<1,g(F_{1})>\}$

$\displaystyle \cdots$

The resulting sets of the form $\displaystyle \{<0,g(F_{0})>,<1,g(F_{1})>,\cdots\}$ do not look like functions at all, if one takes a function to be a set of ordered pairs. Am I reading the notation wrong here?

I am also interested in other sources where this theorem is proven, in this or another way, preferably ones more accessible to a newbie.