# permuting vectors

• May 1st 2006, 09:44 AM
mbbx3wsb
permuting vectors
Let v=(x[1],...,x[n]) be an n-dimensional real vector. Find the dimension of the subspace of R^n spanned by v and all vectors obtained by permuting the components of v.
• May 1st 2006, 12:38 PM
rgep
The answer is "it depends". Consider, for example, the permutations of (1,1,1,...1) and the permutations of (1,0,0,...,0).
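rgep's two examples can be verified mechanically. The snippet below (a quick Python sketch, not from the thread) just enumerates the distinct permutations of each vector; for these two vectors the span is obvious once the permutations are listed:

```python
from itertools import permutations

# Permutations of (1,1,1,1): a single distinct vector,
# so the span is a line (dimension 1).
assert len(set(permutations((1, 1, 1, 1)))) == 1

# Permutations of (1,0,0,0): exactly the four standard basis vectors,
# which span all of R^4 (dimension 4).
assert set(permutations((1, 0, 0, 0))) == {
    (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1),
}
```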
• May 2nd 2006, 12:47 AM
JakeD
Rgep's examples suggest to me this conjecture: The dimension of the subspace is 0 if $\displaystyle v = (0,0, \ldots ,0)$, 1 if $\displaystyle v = (a,a, \ldots ,a)$ for some $\displaystyle a \ne 0$, and $\displaystyle n$ otherwise. Suggestions for a proof or counter-examples?
• May 2nd 2006, 10:14 AM
Rebesques
Well, that seems to be the case. The proof is really straightforward. Let $\displaystyle v$ have a nonzero entry $\displaystyle a$ in the i-th place: $\displaystyle v=(v_1,...,a,...,v_n)$. We can take $\displaystyle a=1$; otherwise consider $\displaystyle (1/a)v$ instead.

Let $\displaystyle V$ be the subspace spanned by $\displaystyle v$ and its permutations, and consider the j-th axis $\displaystyle {(0,..,x_j,...,0)}$ of $\displaystyle R^n$.

We consider a permutation $\displaystyle s$ of $\displaystyle \{1,2,...,n\}$ that puts the entry 1 in the j-th place, and denote the resulting vector by $\displaystyle v^*=(v_{s(1)},...,1,...,v_{s(n)})$.

Then

$\displaystyle v^*=(v_{s(1)},...,v_{s(j-1)},0,v_{s(j)},...,v_{s(n)})+{\bf e}_j$

where $\displaystyle {\bf e}_j$ is the jth euclidean basis vector. And
$\displaystyle (0,..,x_j,...,0)=x_j{\bf e}_j+0(v_{s(1)},...,v_{s(j-1)},0,v_{s(j)},...,v_{s(n)})$

This proves $\displaystyle V$ contains the j-th axis of $\displaystyle R^n$. So $\displaystyle V$ contains every axis, and therefore $\displaystyle V=R^n$.
• May 2nd 2006, 01:40 PM
ThePerfectHacker
Generalizing what rgep said, it seems to me (though I was unable to prove it) that there are only two cases. Given
$\displaystyle (x_1,x_2,...,x_n)$:
if $\displaystyle x_1=x_2=...=x_n\not = 0$,
then the dimension of this subspace is 1.

Otherwise its dimension is $\displaystyle n$.
• May 2nd 2006, 04:29 PM
Rebesques
-Doctor, i feel people don't pay attention to me.

-Who's next please? :D
• May 2nd 2006, 07:00 PM
ThePerfectHacker
Sorry 'bout that, I did not wish to steal anyone's answers. :o I just found the problem interesting.

Quote:

Originally Posted by JakeD
The dimension of the subspace is 0 if $\displaystyle v = (0,0, \ldots ,0)$

You sure about that? It looks like the dimension is one to me.
• May 2nd 2006, 08:22 PM
CaptainBlack
Quote:

Originally Posted by ThePerfectHacker
Sorry 'bout that, I did not wish to steal anyone's answers. :o I just found the problem interesting.

You sure about that? It looks like the dimension is one to me.

No, the space consists of a single point - a subspace of $\displaystyle \mathbb{R}^n$ of dimension 0.

RonL
• May 3rd 2006, 03:49 AM
Rebesques
Quote:

Sorry, 'bout that did not wish to steal anyone's answers
Oh no no, I made no such claim :)

I just felt invisible for a moment there :o :p
• May 3rd 2006, 01:18 PM
ThePerfectHacker
Quote:

Originally Posted by CaptainBlack
No the space consists of a single point - a subspace of $\displaystyle \mathbb{R}^n$ of dimension 0.

RonL

You are right, I see my mistake now. I was thinking of the standard unit vectors as a linearly independent set forming a basis; my error was that the subspace spanned by the zero vector contains none of them, so it is just the single point $\displaystyle \{0\}$.
• May 6th 2006, 02:08 PM
JakeD
In working up a proof of my conjecture, I found a counterexample and had to revise the conjecture. (Rebesques, take note. :( ) Here is the revised conjecture and a proof.

Let $\displaystyle v = (x_1, x_2,\ldots, x_n).$ The dimension $\displaystyle d$ of the subspace $\displaystyle V$ spanned by $\displaystyle v$ and its permutations is

$\displaystyle \begin{array}{lll} \mathsf{A.} &0 &\mathsf{if}\ v = (0,0,\ldots,0), \\ \mathsf{B.} &1 &\mathsf{if}\ v = (a,a,\ldots,a) \ \mathsf{for\ some}\ a \ne 0, \\ \mathsf{C.} &n-1 &\mathsf{if}\ v \ne (a,a,\ldots,a) \ \mathsf{for\ any}\ a \ \mathsf{and}\ \sum_{i=1}^{n} x_i = 0, \\ \mathsf{D.} &n &\mathsf{if}\ v \ne (a,a,\ldots,a) \ \mathsf{for\ any}\ a \ \mathsf{and}\ \sum_{i=1}^{n} x_i \ne 0. \\ \end{array}$

As examples in $\displaystyle \mathbb{R}^2$, for $\displaystyle v = (1,1)\ \mathsf{or}\ (1,-1),\ d = 1,$ while for $\displaystyle v = (1,0),\ d = 2.$
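The $\displaystyle \mathbb{R}^2$ examples can be checked with a determinant: the rows $\displaystyle (a,b)$ and $\displaystyle (b,a)$ are independent exactly when $\displaystyle a^2 - b^2 \ne 0.$ A small sketch (the helper name `dim_span_2d` is mine, not from the thread):

```python
def dim_span_2d(v):
    """Dimension of the span of a 2-vector and its swap.

    The rows (a, b) and (b, a) form a matrix with determinant
    a^2 - b^2, so they are independent exactly when a^2 != b^2.
    """
    a, b = v
    if a == 0 and b == 0:
        return 0  # only the zero vector: dimension 0
    return 2 if a * a != b * b else 1

# matches the examples above
assert dim_span_2d((1, 1)) == 1   # all-equal case B
assert dim_span_2d((1, -1)) == 1  # zero-sum case C (n - 1 = 1)
assert dim_span_2d((1, 0)) == 2   # case D
```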

Claims C and D are proven by showing that the given number is the maximum number of independent linear combinations of $\displaystyle v$ and its permutations. So assume $\displaystyle v = (x_1, x_2,\ldots, x_n) \ne (a,a,\ldots,a)$ for any $\displaystyle a.$ Under this assumption we may further assume wlog that
$\displaystyle x_1 \ne x_2$ since we can permute $\displaystyle v$ as we like. Let $\displaystyle \alpha = 1/(x_1 - x_2).$

We will use the following linear combinations of $\displaystyle v$ and its permutations, written out here for $\displaystyle n = 5.$

$\displaystyle \begin{array}{lll} v_1 &=& \alpha (x_1,x_3,x_4,x_5,x_2) - \alpha (x_2,x_3,x_4,x_5,x_1) \\ &=& (1,0,0,0,-1), \\ v_2 &=& \alpha (x_3,x_1,x_4,x_5,x_2) - \alpha (x_3,x_2,x_4,x_5,x_1) \\ &=& (0,1,0,0,-1), \\ v_3 &=& \alpha (x_3,x_4,x_1,x_5,x_2) - \alpha (x_3,x_4,x_2,x_5,x_1) \\ &=& (0,0,1,0,-1), \\ v_4 &=& \alpha (x_3,x_4,x_5,x_1,x_2) - \alpha (x_3,x_4,x_5,x_2,x_1) \\ &=& (0,0,0,1,-1), \\ v_5 &=& v \\ &=& (x_1,x_2,x_3,x_4,x_5). \\ \end{array}$

Putting these vectors into a matrix $\displaystyle \begin{bmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 1 & -1 \\ x_1 & x_2 & x_3 & x_4 & x_5 \\ \end{bmatrix}$
it is clear that the first $\displaystyle n -1 = 4$ vectors are linearly independent and their elements sum to zero, so the elements of any linear combination of them also sum to zero. If the elements of the last vector do not sum to zero, it will therefore be independent of the first $\displaystyle n - 1$ vectors and the number of independent vectors in $\displaystyle V$ will be $\displaystyle n,$ the maximum possible. This proves claim D.

Proving C: if the elements of the last vector sum to zero, it will be linearly dependent on the first $\displaystyle n - 1$ vectors, since those vectors span the subspace of all vectors whose elements sum to zero. The last row could have been any permutation of $\displaystyle v,$ so $\displaystyle V$ contains at most $\displaystyle n - 1$ independent vectors; together with the $\displaystyle n - 1$ independent vectors above, this gives $\displaystyle d = n - 1.$ This proves C and completes the proof.
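The four-case formula can also be sanity-checked by brute force: compute the rank of the matrix whose rows are all distinct permutations of $\displaystyle v$ (exact rational arithmetic, so no floating-point issues) and compare it with the predicted dimension. This is a verification sketch with helper names of my own, not part of the proof:

```python
from fractions import Fraction
from itertools import permutations

def rank(rows):
    """Rank of a list of row vectors, via Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in r] for r in rows]
    r = 0  # index of the next pivot row
    for c in range(len(rows[0])):
        # find a row at or below r with a nonzero entry in column c
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def predicted(v):
    """Dimension predicted by the four-case formula A-D."""
    n = len(v)
    if all(x == 0 for x in v):
        return 0                       # case A
    if all(x == v[0] for x in v):
        return 1                       # case B
    return n - 1 if sum(v) == 0 else n  # cases C and D

def actual(v):
    """Rank of the matrix of all distinct permutations of v."""
    return rank([list(p) for p in set(permutations(v))])

# covers all four cases, including the zero-sum counterexample type
for v in [(0, 0, 0), (2, 2, 2), (1, -1, 0), (3, -1, -2), (1, 0, 0), (1, 2, 3)]:
    assert predicted(v) == actual(v)
```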