(Hi)

Quote:

here are a few pieces of an answer...

These don't look like *pieces*; they explain just what I had been looking for...

Quote:

This is called a Galton-Watson tree. From one ancestor (the root of the tree), we have a random number $\displaystyle Z_1$ of children, each of which itself has a random number of children, etc., where the numbers of children are all independent of each other. This is a genealogical tree where the number of children is random: k children (possibly 0) with probability $\displaystyle p_k$. A basic question is: does the family eventually go extinct? i.e. is the tree finite?

In this case, $\displaystyle \Omega$ would be the set of trees (or a larger set), or an equivalent representation of trees. A convenient choice is to let $\displaystyle U=\cup_n \mathbb{N}^n$, the set of all finite sequences of integers, and $\displaystyle \Omega=\mathbb{N}^U$, the set of $\displaystyle \mathbb{N}$-valued families indexed by the elements of $\displaystyle U$. Intuitively, an individual corresponds to a sequence $\displaystyle u=a_1a_2\cdots a_n\in U$ if it is obtained from the root as the $\displaystyle a_n$-th child of the $\displaystyle a_{n-1}$-th child of ... of the $\displaystyle a_1$-th child of the root. The number of children of this individual is encoded in the coordinate $\displaystyle n_u$ of the tree $\displaystyle T=(n_u)_{u\in U}\in \mathbb{N}^U$ (with $\displaystyle n_u$ arbitrary if $\displaystyle u$ is not connected to the root, so there are more codings than trees).

Well, you got it absolutely right... I should've mentioned it; it would've spared you a lot of writing, sorry :(

It's a paper about GW trees, and the tree is defined almost the same way as you did... (a tree is defined to always contain the root element)
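
Just to check that I follow the encoding, here's a small simulation sketch I put together (the offspring probabilities, the truncation at a maximal generation, and all the names are mine, not from the paper): each individual is labelled by a tuple $\displaystyle (a_1,\dots,a_n)$ as above, the root being the empty tuple, and the tree records its number of children $\displaystyle n_u$.

```python
# A minimal sketch (my own, not from the paper): a Galton-Watson tree with the
# Ulam-Harris-style labelling u = (a_1, ..., a_n) described in the quote.
import random

def sample_offspring(p):
    """Draw a number of children k with probability p[k]."""
    u, acc = random.random(), 0.0
    for k, pk in enumerate(p):
        acc += pk
        if u < acc:
            return k
    return len(p) - 1

def galton_watson(p, max_generation=20):
    """Return the tree as a dict {label: number of children},
    where a label is a tuple (a_1, ..., a_n) and () is the root.
    Truncated at max_generation so a supercritical tree doesn't run forever."""
    tree = {}
    current = [()]                      # generation 0: the root
    for _ in range(max_generation):
        nxt = []
        for u in current:
            n_u = sample_offspring(p)   # number of children of individual u
            tree[u] = n_u
            # the a-th child of u gets the label u + (a,), a = 1, ..., n_u
            nxt.extend(u + (a,) for a in range(1, n_u + 1))
        if not nxt:                     # no individual left: extinction
            break
        current = nxt
    return tree

if __name__ == "__main__":
    p = [0.25, 0.25, 0.5]               # p_0, p_1, p_2 (made-up values, mean 1.25)
    tree = galton_watson(p)
    print("individuals drawn:", len(tree))
    print("root has", tree[()], "children")
```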

Quote:

Actually, what I just did was "prove" your statement under the assumption that the existence of an infinite-product measure is trivial, which it is not... This is probably even the reason why the author outlined this statement. NB: if the author starts with such a statement, you can expect sharp rigor in what follows!

He doesn't start with this statement, but I think it's quite a rigorous paper...

That's not nice of him to have outlined the statement; it bugged me a lot :D

Quote:

The law of T under P is just the law of T when the probability space (here, $\displaystyle \Omega$) is endowed with the probability measure P. In other words, this is indeed the image measure of P by T.

Okay, I feel better now that I know that :D

Quote:

There are always two possible viewpoints in any probabilistic setting: either a fixed, unspecified, very large probability space $\displaystyle (\Omega,P)$ on which we define several random variables (this is the usual case in introductory courses), or a space $\displaystyle \Omega$ carrying several distributions, with the identity map as the unique random variable.

The interest of the first case is simplicity and universality (it doesn't matter which base space we use, so why specify one); it assumes, however, that we rely on existence theorems that are sometimes far from obvious.

The second case is useful when one wants to study the same random variable under various distributions. For instance, if we want to vary a parameter (like the parameter of a Bernoulli), we simply introduce a family of probabilities indexed by this parameter, and we write "under $\displaystyle P_p$, ..." to mean that we consider the value p of the parameter.

Oh yeah, actually we work with that kind of thing in statistics, defining the family of probability spaces $\displaystyle (\mathbb{R}^k,\mathcal{B}_{\mathbb{R}^k},P_\theta)_{\theta\in\Theta}$
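
By the way, here's a toy sketch of that second viewpoint in the Bernoulli case, just to make it concrete for myself (everything here, names included, is my own illustration): the sample space is $\displaystyle \Omega=\{0,1\}$, the unique random variable $\displaystyle X$ is the identity map, and only the measure $\displaystyle P_p$ changes.

```python
# A toy illustration (mine, not from the paper) of "one random variable,
# several measures": Omega = {0, 1}, X is the identity map, and we vary P_p.
Omega = [0, 1]

def X(omega):
    """The single random variable: the identity map on Omega."""
    return omega

def P(p):
    """The probability measure P_p on Omega: P_p({1}) = p, P_p({0}) = 1 - p."""
    return {0: 1 - p, 1: p}

def expectation(f, measure):
    """E[f(X)] under the given measure."""
    return sum(f(X(omega)) * measure[omega] for omega in Omega)

if __name__ == "__main__":
    for p in (0.2, 0.5, 0.9):
        print(f"under P_{p}: E[X] = {expectation(lambda x: x, P(p)):.2f}")
```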

Quote:

This is a very convenient setting for Markov chains, where the parameter is the starting point. We have one random variable $\displaystyle (X_n)_{n\geq 0}$, which is the identity map on $\displaystyle E^{\mathbb{N}}$, and one measure $\displaystyle P_x$ for every site $\displaystyle x\in E$, which is the law of the Markov chain (with some given transition matrix) starting at $\displaystyle x$. This makes it possible to give meaning to expressions like $\displaystyle P_x(X_2=y)=\sum_z P_x(X_1=z)P_z(X_1=y)$ (i.e. the Markov property).

But in this example of the Markov property, there isn't a unique random variable, is there?
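
In any case, to convince myself that the displayed identity makes sense, I tried a quick numerical check (the 3-state transition matrix below is completely made up, just for illustration): simulate two steps of the chain under $\displaystyle P_x$ and compare with the sum over the intermediate state $\displaystyle z$.

```python
# A small numerical check (my own toy example, not from the paper):
# P_x(X_2 = y) should equal sum_z P_x(X_1 = z) * P_z(X_1 = y).
import random

T = [[0.1, 0.6, 0.3],       # made-up transition matrix: T[x][z] = P_x(X_1 = z)
     [0.4, 0.4, 0.2],
     [0.5, 0.0, 0.5]]

def step(x):
    """One step of the chain from state x, using transition matrix T."""
    u, acc = random.random(), 0.0
    for z, p in enumerate(T[x]):
        acc += p
        if u < acc:
            return z
    return len(T[x]) - 1

x, y, n_samples = 0, 2, 200_000

# Left-hand side: empirical frequency of {X_2 = y} under P_x.
hits = sum(step(step(x)) == y for _ in range(n_samples))
lhs = hits / n_samples

# Right-hand side: sum over the intermediate state z.
rhs = sum(T[x][z] * T[z][y] for z in range(len(T)))

print(f"simulated P_x(X_2={y}) = {lhs:.4f}, sum_z P_x(X_1=z) P_z(X_1=y) = {rhs:.4f}")
```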

Hmm, I think it's likely that there will be more questions, but not because your excellent explanations weren't sufficient; it's just that I may need to read this, and have it explained to me, several times and in different ways =)

Thanks, as always...