General concepts regarding linear transformations and matrices

tunaaa
Hi, am I right in thinking that for a L.T. T: U → V represented by Au = v, u and v refer to the coordinates of a vector within the vector spaces U, V with respect to that space’s basis, represented as a column vector? I’m getting a little confused since T(u) = v refers to u, v as vectors and not their coordinates, but then Au = v wouldn’t make sense in a polynomial space where, for example, u = αx² + βx + γ, unless u, v refer to the coordinates in column form.
If so, does this mean that given a vector space V over a field K, we are essentially using elements of K as entries in a coordinate system with respect to a basis of V, s.t. each distinct element in V has a unique coordinate representation or, equivalently, is a unique L.C. of basis vectors?
So does this mean that all spaces over the same field with the same dimension are isomorphic? Since, for example, the row vector (α, β, γ) in K³ and the polynomial αx² + βx + γ in K[x]≤2 both have the same coordinates (namely (α, β, γ)), a L.T. to coordinates (α′, β′, γ′) would be represented by the same matrix for both spaces, since the matrix only transforms one coordinate system to another while applying a given L.T. on the way.
Thanks very much.
 

HallsofIvy

MHF Helper
Hi, am I right in thinking that for a L.T. T: U → V represented by Au = v, u and v refer to the coordinates of a vector within the vector spaces U, V with respect to that space’s basis, represented as a column vector?
No, you are not. u and v refer to vectors, not to their components. And it is not necessary for a vector space to be given a specific basis in order to talk about vectors in that space or about a linear transformation from one space to another.

If I write the linear transformation as a matrix, and write, say,
\(\displaystyle \begin{bmatrix}3 & 1 \\ 2 & 2\end{bmatrix}\begin{bmatrix}3 \\ 2\end{bmatrix}= \begin{bmatrix}11 \\ 10\end{bmatrix}\), then I am thinking of \(\displaystyle \begin{bmatrix}3 \\ 2\end{bmatrix}\) as representing the vector \(\displaystyle 3u_1+ 2u_2\), where \(\displaystyle u_1\) and \(\displaystyle u_2\) are basis vectors for the two-dimensional space U, and \(\displaystyle \begin{bmatrix}11 \\ 10\end{bmatrix}\) as representing the vector \(\displaystyle 11v_1+ 10v_2\), where \(\displaystyle v_1\) and \(\displaystyle v_2\) are basis vectors for the two-dimensional vector space V.
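Written out, the product is just row-times-column arithmetic, so the calculation behind that example is

\(\displaystyle \begin{bmatrix}3 & 1 \\ 2 & 2\end{bmatrix}\begin{bmatrix}3 \\ 2\end{bmatrix}= \begin{bmatrix}3\cdot 3+ 1\cdot 2 \\ 2\cdot 3+ 2\cdot 2\end{bmatrix}= \begin{bmatrix}11 \\ 10\end{bmatrix}\),

i.e. the transformation sends \(\displaystyle 3u_1+ 2u_2\) to \(\displaystyle 11v_1+ 10v_2\).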
It is only when we choose to represent vectors and linear transformations as matrices that the components in a specific basis become important.

I’m getting a little confused since T(u) = v refers to u, v as vectors and not their coordinates, but then Au = v wouldn’t make sense in a polynomial space where, for example, u = αx² + βx + γ, unless u, v refer to the coordinates in column form.
I don't see how you get that. I could, for example, take A to be the derivative operator on the space of polynomials of degree 2 or less: \(\displaystyle Au= A(\alpha x^2+ \beta x+ \gamma)= 2\alpha x+ \beta= v\). In that case u and v are vectors: \(\displaystyle u= \alpha x^2+ \beta x+ \gamma\) and \(\displaystyle v= 2\alpha x+ \beta\).

If I choose to write A in matrix form, taking as basis \(\displaystyle x^2, x, 1\), in that order, then I would think of the polynomial \(\displaystyle \alpha x^2+ \beta x+ \gamma\) as represented by the column vector \(\displaystyle \begin{bmatrix}\alpha \\ \beta \\ \gamma\end{bmatrix}\). Notice that the order of the vectors in the basis is also important: if I had taken the basis to be the same 3 functions but in the order \(\displaystyle 1, x, x^2\), that same polynomial would be represented by \(\displaystyle \begin{bmatrix}\gamma \\ \beta \\ \alpha \end{bmatrix}\).

Using the first ordering, this linear transformation would be represented as \(\displaystyle \begin{bmatrix}0 & 0 & 0 \\ 2 & 0 & 0 \\ 0 & 1 & 0\end{bmatrix}\).
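As a quick check that this matrix is right, multiply it against the coordinate vector of u in the basis \(\displaystyle x^2, x, 1\):

\(\displaystyle \begin{bmatrix}0 & 0 & 0 \\ 2 & 0 & 0 \\ 0 & 1 & 0\end{bmatrix}\begin{bmatrix}\alpha \\ \beta \\ \gamma\end{bmatrix}= \begin{bmatrix}0 \\ 2\alpha \\ \beta\end{bmatrix}\),

which is exactly the coordinate vector of \(\displaystyle v= 2\alpha x+ \beta\) in that same basis.

If it helps, here is a minimal Python sketch of the same check using sympy; the library choice and the helper name coords are my own illustration, not anything from this thread:

import sympy as sp

x, alpha, beta, gamma = sp.symbols('x alpha beta gamma')

# Matrix of the derivative operator in the ordered basis (x^2, x, 1).
A = sp.Matrix([[0, 0, 0],
               [2, 0, 0],
               [0, 1, 0]])

def coords(p):
    # Coordinate column of a degree <= 2 polynomial in the basis (x^2, x, 1).
    p = sp.expand(p)
    return sp.Matrix([p.coeff(x, 2), p.coeff(x, 1), p.coeff(x, 0)])

u = alpha*x**2 + beta*x + gamma
v = sp.diff(u, x)  # v = 2*alpha*x + beta

# Applying A in coordinates agrees with differentiating the polynomial.
assert A * coords(u) == coords(v)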

[FONT=&quot] [/FONT]
If so, does this mean that given a vector space V over a field K, we are essentially using elements of K as entries in a coordinate system with respect to a basis of V, s.t. each distinct element in V has a unique coordinate representation or, equivalently, is a unique L.C. of basis vectors?
Yes, essentially. We don't have to represent vectors as columns or rows of numbers, but we can: once a basis is fixed, each vector of V is a unique linear combination of the basis vectors, so its coordinates in K are uniquely determined.
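As a concrete instance (my numbers, just for illustration): with the ordered basis \(\displaystyle x^2, x, 1\) for the polynomials of degree 2 or less, the polynomial \(\displaystyle 2x^2- x+ 5\) is the unique linear combination \(\displaystyle 2\cdot x^2+ (-1)\cdot x+ 5\cdot 1\), so its coordinate representation is \(\displaystyle \begin{bmatrix}2 \\ -1 \\ 5\end{bmatrix}\), and no other column of field elements represents it in that basis.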

[FONT=&quot] [/FONT]
So does this mean that all spaces over the same field with the same dimension are isomorphic? Since, for example, the row vector (α, β, γ) in K³ and the polynomial αx² + βx + γ in K[x]≤2 both have the same coordinates (namely (α, β, γ)), a L.T. to coordinates (α′, β′, γ′) would be represented by the same matrix for both spaces, since the matrix only transforms one coordinate system to another while applying a given L.T. on the way.
Yes, this is true! The fact that vectors in any n-dimensional vector space over a field K can be represented, upon choice of a basis, as ordered n-tuples of elements of K means that all vector spaces of the same finite dimension, over the same field, are isomorphic.

You can demonstrate the isomorphism by choosing a specific basis, in a specific order, for each vector space and constructing a function that maps each basis vector to the corresponding basis vector in the other space, then extending it to all other vectors "by linearity": \(\displaystyle T(\alpha_1u_1+ \alpha_2u_2+ \cdots+ \alpha_nu_n)= \alpha_1T(u_1)+ \alpha_2T(u_2)+ \cdots+ \alpha_nT(u_n)\).
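For the two spaces in your example, a minimal concrete version of this construction (using the standard basis \(\displaystyle e_1, e_2, e_3\) of \(\displaystyle K^3\)): define \(\displaystyle T(e_1)= x^2\), \(\displaystyle T(e_2)= x\), \(\displaystyle T(e_3)= 1\), and extend by linearity, so that \(\displaystyle T(\alpha, \beta, \gamma)= \alpha x^2+ \beta x+ \gamma\). This T is linear by construction, and it is a bijection because it carries a basis to a basis, so it is an isomorphism from \(\displaystyle K^3\) onto the polynomials of degree 2 or less.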

Notice the restriction to finite-dimensional spaces. For infinite-dimensional spaces (such as the space of all polynomials or the space of all continuous functions), the bare vector-space structure is usually not what one cares about: the maps of interest must also preserve extra structure such as a norm or a topology, and in that sense two such spaces over the same field need not be isomorphic even when their bases have the same cardinality. "Linear Algebra" normally restricts itself to finite-dimensional vector spaces (Halmos' famous text was titled "Finite-Dimensional Vector Spaces"), while infinite-dimensional vector spaces, typically function spaces, are dealt with in "Functional Analysis".
 
tunaaa
Thanks for your comprehensive response - it's totally clear now.