We have that A(A(v)) = A^2(v) = 0 for every v. Since A^2 = 0, the column space of A is contained in the null space of A. Hence rank(A) ≤ dim null(A) = n − rank(A), and the result follows.
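As a quick numerical sanity check (not part of the proof), here is a sketch in Python/NumPy with a hypothetical 4×4 matrix satisfying A^2 = 0, verifying that every column of A is killed by A and that rank(A) ≤ n − rank(A):

```python
import numpy as np

# a hypothetical 4x4 nilpotent matrix with A^2 = 0
A = np.array([[0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

assert np.allclose(A @ A, 0)  # A^2 = 0

# every column of A lies in the null space of A
for col in A.T:
    assert np.allclose(A @ col, 0)

# hence rank(A) <= n - rank(A), i.e. rank(A) <= n/2
rank = np.linalg.matrix_rank(A)
assert rank <= A.shape[0] - rank
```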
it works out the same. suppose the columns of A are v1,v2,...,vn = A(e1),A(e2),....,A(en).
then w = a1v1+a2v2+...+anvn = a1(A(e1))+a2(A(e2))+...+an(A(en)) = A(a1e1+a2e2+...+anen) = A(a1,a2,...,an).
so A(w) = A^2(a1,a2,..,an) = 0.
linear combinations of the vj ARE linear combinations of images under A of the ej, so every vector spanned by the columns of A is the image of a vector spanned by the standard basis.
conversely, if v is a vector in R^n, then v = a1e1+a2e2+...+anen, so A(v) = a1A(e1)+a2A(e2)+...+anA(en), and A(ej) IS the j-th column of A (that's what ej does, it picks out the j-th
entry in every row). so any element in the range of A is in the span of the columns of A.
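Both directions are easy to check numerically. A small sketch (the matrix and the vector are arbitrary, just for illustration): A applied to the standard basis vector ej returns the j-th column, and A applied to any v is the corresponding combination of columns.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 4)).astype(float)  # an arbitrary 3x4 matrix
n = A.shape[1]

# A(e_j) is the j-th column of A
for j in range(n):
    e_j = np.zeros(n)
    e_j[j] = 1.0
    assert np.allclose(A @ e_j, A[:, j])

# any A(v) is the corresponding linear combination of the columns
v = np.array([2.0, -1.0, 0.5, 3.0])
combo = sum(v[j] * A[:, j] for j in range(n))
assert np.allclose(A @ v, combo)
```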
all this wholesome goodness happens because matrices induce linear maps from R^n to R^m: the map v-->A(v) is linear. and ultimately the linearity is built-in for matrices because of the distributivity in the underlying field of the vector space R^n (in this case, R), and the compatibility of scalar multiplication with vector addition.
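That linearity, A(u + c·v) = A(u) + c·A(v), can be spot-checked for a random matrix and random vectors (a sanity check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
u = rng.standard_normal(3)
v = rng.standard_normal(3)
c = 2.5

# linearity of the map v --> A(v)
assert np.allclose(A @ (u + c * v), A @ u + c * (A @ v))
```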
in other words, in any linear algebra setting, there are two ways of looking at things: numerically (based on matrices, coordinates, orthogonality), and algebraically (based on linearity, bases and inner products). engineering and applications of linear algebra tend to emphasize the computational elements, the vector (1,2,4), the matrix A. for each of these notions there is a more abstract way of looking at it (a physicist thinks of a null space, a mathematician thinks "kernel").
Yes, it does.
I think what you mean here is that you can either work with a concrete example or you can work in generality, or abstraction. The null space and kernel are just different names for the same thing; this is not an example illustrating the differences between mathematicians and physicists. A better example would be the Dirac delta function. The way it was defined by Dirac, it made no sense mathematically. He didn't care, however, because it worked. It was the mathematician Schwartz who made sense of it.
i read (part of) a book the other day called "group theory for physicists". it got away from homomorphisms just as soon as possible, and went hell-bent for leather on representations and character tables.
isn't the dirac delta (what-ever-it-is, distribution, i guess) awesome? i mean, physically, we know that something like a "unit pulse" exists, which points to the notion of function not capturing the intuition we thought it did.
i don't mean to slight physicists, btw. some of them know a good deal more mathematics than i do. i was just trying to illustrate the difference between "concrete" and "abstract". for an abstract theory to be meaningful, there ought to be a faithful concrete representation of it. for a concrete theory to be logical, there ought to be an elegant abstract framework beneath it.
Well, the delta function is not a function; it's a distribution, as you say. There is no function that is zero everywhere except at a single point yet has integral 1.
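One standard way to make the "unit pulse" intuition precise is as a limit of narrow unit-area bumps: Gaussians g_eps each integrate to 1, and as eps shrinks, integrating g_eps against a smooth test function f approaches f(0). A rough numerical sketch (grid and test function are arbitrary choices):

```python
import numpy as np

def gaussian_pulse(x, eps):
    # unit-area Gaussian that narrows as eps -> 0
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]

for eps in (1.0, 0.1, 0.01):
    g = gaussian_pulse(x, eps)
    # each pulse has (approximately) unit area ...
    assert abs(np.sum(g) * dx - 1.0) < 1e-6

# ... and integrating g_eps against a test function picks out f(0):
f = np.cos(x)  # smooth test function with f(0) = 1
approx = np.sum(gaussian_pulse(x, 0.01) * f) * dx
assert abs(approx - 1.0) < 1e-3
```

No ordinary function has this limiting behavior pointwise, which is exactly why the delta lives in the larger world of distributions.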
I guess!