No, you are not. u and v refer to vectors, not to their components. And it is not necessary for a vector space to be given a specific basis in order to talk about vectors in that space or about a linear transformation from one space to another.
If I write the linear transformation as a matrix A and write, say, Au = v, then I am thinking of u = (a, b)^T as representing the vector a u₁ + b u₂, where u₁ and u₂ are basis vectors for the two dimensional space U, and of v = (c, d)^T as representing the vector c v₁ + d v₂, where v₁ and v₂ are basis vectors for the two dimensional vector space V.
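This bookkeeping can be sketched in a few lines of Python. (A hypothetical example: the matrix entries and coordinates below are made up, and coordinate columns are stored as plain lists.)

```python
# A hypothetical 2x2 matrix representing a linear map A : U -> V,
# written with respect to chosen bases {u1, u2} of U and {v1, v2} of V.
A = [[1, 2],
     [3, 4]]

def apply(matrix, coords):
    """Multiply a matrix by a coordinate column (given as a list)."""
    return [sum(row[j] * coords[j] for j in range(len(coords))) for row in matrix]

# u = 1*u1 + 1*u2 has coordinate column (1, 1).
u_coords = [1, 1]
v_coords = apply(A, u_coords)   # coordinates of v = Au in the basis {v1, v2}
print(v_coords)                 # [3, 7], i.e. v = 3*v1 + 7*v2
```

The matrix never sees the basis vectors themselves; it only shuffles coordinate columns, which is exactly the point being made above.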
It is only when we choose to represent vectors and linear transformations as matrices that the components in a specific basis become important.
I'm getting a little confused since I don't see how you get that. I could, for example, take A to be the derivative operator on the space of polynomials of degree 2 or less: A = d/dx. In that case u and v are vectors: u = α x² + β x + γ and v = Au = 2α x + β. T(u) = v refers to u, v as vectors and not their coordinates, but then Au = v wouldn't make sense in a polynomial space where, for example, u = α x² + β x + γ, unless u, v refer to the coordinates in column form.
If I choose to write A in matrix form, taking as basis {x², x, 1}, in that order, then I would think of the polynomial α x² + β x + γ as represented by the column vector (α, β, γ)^T. Notice that the order of the vectors in the basis is also important: if I had taken the basis to be the same 3 functions but in the order {1, x, x²}, that same polynomial would be represented by (γ, β, α)^T.
Using the first ordering, this linear transformation would be represented as

    ( 0  0  0 )
A = ( 2  0  0 )
    ( 0  1  0 )

since A(α, β, γ)^T = (0, 2α, β)^T, the coordinate column of the derivative 2α x + β.
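That matrix is easy to check mechanically. A minimal Python sketch, assuming the ordered basis {x², x, 1} and coefficients stored as plain lists:

```python
# Derivative of a*x^2 + b*x + c, computed two ways:
# (1) directly on the coefficients, (2) via the matrix in the basis {x^2, x, 1}.

def deriv_poly(coeffs):
    """d/dx of a*x^2 + b*x + c = 2a*x + b, as coordinates in {x^2, x, 1}."""
    a, b, c = coeffs
    return [0, 2 * a, b]

# Matrix of d/dx with respect to the ordered basis {x^2, x, 1}.
D = [[0, 0, 0],
     [2, 0, 0],
     [0, 1, 0]]

def apply(matrix, coords):
    """Multiply a 3x3 matrix by a coordinate column (given as a list)."""
    return [sum(row[j] * coords[j] for j in range(3)) for row in matrix]

coeffs = [5, 7, 9]               # 5x^2 + 7x + 9
print(deriv_poly(coeffs))        # [0, 10, 7]  i.e. 10x + 7
print(apply(D, coeffs))          # [0, 10, 7]  -- same answer via the matrix
```

Both routes agree, which is what "the matrix represents the transformation in this basis" means in practice.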
Well, we don't have to represent vectors as columns or rows of numbers, but we can.

If so, does this mean that given a vector space V over a field K, we are essentially using elements of K as entries in a coordinate system with respect to a basis of V, such that each distinct element of V has a unique coordinate representation or, equivalently, is a unique linear combination of basis vectors?
Yes, this is true! The fact that vectors in any n dimensional vector space over a field K can be represented, upon choice of a basis, as ordered n-tuples of elements of K means that all vector spaces of the same finite dimension, over the same field, are isomorphic.

So does this mean that all spaces over the same field with the same dimension are isomorphic? Since, for example, the row vector (α, β, γ) in K³ and the polynomial α x² + β x + γ in K[x]≤2 both have the same coordinates (namely (α, β, γ)), a linear transformation to coordinates (α′, β′, γ′) would be represented by the same matrix for both spaces, since the matrix only transforms one coordinate tuple into another while applying the given linear transformation along the way.
Thanks very much.
You can demonstrate the isomorphism by choosing a specific basis, in a specific order, for each vector space and constructing a function that maps each basis vector to the corresponding basis vector in the other space, then extending it to all other vectors "by linearity".
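As a toy sketch of that construction in Python (an assumption for illustration: we identify K³ with polynomials of degree at most 2 by sending e₁ → x², e₂ → x, e₃ → 1 and extending by linearity, with the image polynomials represented as callable functions):

```python
# Isomorphism phi from K^3 to polynomials of degree <= 2, defined on the
# basis by e1 -> x^2, e2 -> x, e3 -> 1 and extended by linearity.

def phi(triple):
    """Return the polynomial function a*x^2 + b*x + c for coordinates (a, b, c)."""
    a, b, c = triple
    return lambda x: a * x**2 + b * x + c

u = (1, 0, 3)
v = (0, 2, 0)
s = tuple(p + q for p, q in zip(u, v))   # u + v computed in K^3

# Linearity check: phi(u + v)(x) == phi(u)(x) + phi(v)(x) for every sample x.
for x in range(-3, 4):
    assert phi(s)(x) == phi(u)(x) + phi(v)(x)

print(phi(s)(2))   # (x^2 + 2x + 3) evaluated at x = 2 -> 11
```

Adding coordinate triples in K³ and then mapping over gives the same polynomial as mapping first and adding in the function space, which is exactly what "extending by linearity" buys you.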
Notice the restriction to finite dimensional spaces. For infinite dimensional spaces (such as the space of all polynomials or the space of all continuous functions), bases of the same cardinality over the same field still give an isomorphism of bare vector spaces, but the additional structure one usually cares about there (norms, inner products, topology) need not carry over, so such spaces are not interchangeable in practice. "Linear Algebra" normally restricts itself to finite dimensional vector spaces (Halmos' famous text on Linear Algebra was titled "Finite-Dimensional Vector Spaces"), while infinite dimensional vector spaces, typically function spaces, are dealt with in "Functional Analysis".