Expected distance between two random vectors in n-dimensional space

Hi, any help with this is appreciated.

If I have two column vectors $\displaystyle \mathbf{a}$ and $\displaystyle \mathbf{b}$, where the $\displaystyle n$ elements of each are drawn independently from a Gaussian $\displaystyle \mathcal{N}( \mu, \sigma^2)$, then the distance $\displaystyle d$ between them is given by

$\displaystyle d = \sqrt{(\mathbf{a} - \mathbf{b})^\mathsf{T}(\mathbf{a} - \mathbf{b})}$

But what is the expected value of $\displaystyle d$?

I reckon, in the case where $\displaystyle \mu = \vec{0}$, it's $\displaystyle \mathbb{E}[d] = \sqrt{2n\sigma^2}$

My feeling is that a non-zero $\displaystyle \mu $ should make no difference: it just shifts the origin, so to speak, and the distance depends only on the relative positions of the two vectors.
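That intuition is easy to test numerically. A quick Monte Carlo sketch (my own addition, assuming NumPy; the function name and parameters are mine):

```python
# Monte Carlo check (my own sketch): estimate E[d] for two n-vectors whose
# entries are i.i.d. N(mu, sigma^2), once with mu = 0 and once with mu != 0.
import numpy as np

def mean_distance(n, mu, sigma, trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.normal(mu, sigma, size=(trials, n))
    b = rng.normal(mu, sigma, size=(trials, n))
    return np.linalg.norm(a - b, axis=1).mean()

n, sigma = 10, 2.0
print(mean_distance(n, 0.0, sigma, seed=0))  # mu = 0
print(mean_distance(n, 5.0, sigma, seed=1))  # agrees up to Monte Carlo noise
```

Different seeds are used for the two calls so the agreement isn't just the deterministic mean shift cancelling in $\displaystyle \mathbf{a}-\mathbf{b}$.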

Can anyone prove the general case (for $\displaystyle \mathcal{N}( \mu, \sigma^2)$), or show it to be wrong and, if wrong, say what the expected value in fact is?

I notice also that my expression bears a resemblance to the denominator of the normalising term in the Gaussian pdf, $\displaystyle 1/\sqrt{2 \pi \sigma^2}$, except that $\displaystyle n$ takes the place of $\displaystyle \pi$. Is this a coincidence, or does it reflect something deeper?

Thanks in advance. MD

Re: Expected distance between two random vectors in n-dimensional space

Hey Mathsdog.

What did you get for the distribution for the distance d? (Hint: think about the sum of products of normal first and then the square root of that final answer).

Re: Expected distance between two random vectors in n-dimensional space

Right, sorted I reckon. Good hint, Chiro. It all comes down to the $\displaystyle \chi^2 $ distribution.

For the square of the norm of $\displaystyle (\mathbf{a}- \mathbf{b})$, i.e.

$\displaystyle (\mathbf{a}- \mathbf{b})^{\mathsf{T}}(\mathbf{a}- \mathbf{b}) = \mathbf{c}^{\mathsf{T}}\mathbf{c}= d^2$

we first note that the elements of $\displaystyle \mathbf{c}$, denoted $\displaystyle c_1, c_2,\ldots, c_n$, satisfy (the common mean $\displaystyle \mu$ cancelling in the difference)

$\displaystyle c_i \sim \mathcal{N}(0, \sigma^{2}_{c})$

where

$\displaystyle \sigma^{2}_{c}=\sigma^{2}_{a}+\sigma^{2}_{b}$

i.e. the sum of the variances of the elements of $\displaystyle \mathbf{a}$ and $\displaystyle \mathbf{b}$
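This additivity of variances is easy to confirm with a throwaway NumPy check (my own sketch, not from the thread):

```python
# Variance check (my own sketch): Var(a_i - b_i) = sigma_a^2 + sigma_b^2 for
# independent draws; a shared non-zero mean cancels in the difference.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, trials = 2.0, 1.5, 500_000
c = rng.normal(mu, sigma, trials) - rng.normal(mu, sigma, trials)
print(c.mean())  # near 0: the common mean cancels
print(c.var())   # near 2 * sigma^2 = 4.5
```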

Since each element of $\displaystyle \mathbf{c}/\sigma_c$ is standard normal,

$\displaystyle c_i/ \sigma_c \sim \mathcal{N}(0, 1)$

it then follows that

$\displaystyle 1/\sigma^{2}_{c} \cdot \mathbf{c}^{\mathsf{T}}\mathbf{c} \sim \chi^{2}(n)$

Hence $\displaystyle \mathbb{E}[1/\sigma^{2}_{c} \cdot \mathbf{c}^{\mathsf{T}}\mathbf{c}] =1/\sigma^{2}_{c} \cdot \mathbb{E}[\mathbf{c}^{\mathsf{T}}\mathbf{c}] = n$

So, $\displaystyle \mathbb{E}[\mathbf{c}^{\mathsf{T}}\mathbf{c}]=n \sigma^{2}_{c}= n(\sigma^{2}_{a}+\sigma^{2}_{b})$

Since here both vectors are drawn from the same distribution, $\displaystyle \sigma^{2}_{a}=\sigma^{2}_{b}=\sigma^{2}_{ab}$, and so

$\displaystyle \mathbb{E}[\mathbf{c}^{\mathsf{T}}\mathbf{c}]= 2n\sigma^{2}_{ab}= \mathbb{E}[d^2]$
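Again a small Monte Carlo sketch of my own (assuming NumPy) to check $\displaystyle \mathbb{E}[d^2] = n(\sigma^{2}_{a}+\sigma^{2}_{b})$:

```python
# Numerical check (my own): E[c^T c] should equal 2 * n * sigma^2, even with a
# non-zero (shared) mean, since the means cancel in c = a - b.
import numpy as np

rng = np.random.default_rng(1)
n, sigma, trials = 8, 1.5, 200_000
a = rng.normal(3.0, sigma, size=(trials, n))
b = rng.normal(3.0, sigma, size=(trials, n))
est = ((a - b) ** 2).sum(axis=1).mean()  # Monte Carlo estimate of E[d^2]
print(est)               # close to the theoretical value
print(2 * n * sigma**2)  # 36.0
```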

And since the distance between the two vectors $\displaystyle \mathbf{a} $ and $\displaystyle \mathbf{b}$ is just the positive square root of this squared norm, taking square roots gives, as previously hypothesised,

$\displaystyle \sqrt{\mathbb{E}[d^2]} = \sqrt{2n\sigma_{ab}^2}$

Does that look right to you Chiro?

On reflection, I think a non-zero (but shared) mean changes nothing here, since the elements of $\displaystyle \mathbf{c} = \mathbf{a}-\mathbf{b}$ are zero-mean either way; the noncentral chi-squared distribution would only be needed if $\displaystyle \mathbf{a}$ and $\displaystyle \mathbf{b}$ had different means, but I haven't worked those details out yet.

Thanks again. MD