what we would like to do is define

$\bar{f}(x + V) = f(x) + V$.

first we need to be sure that this is well-defined, since we are working with cosets, instead of elements.

so suppose y is in x + V (that is, y + V = x + V), so that y = x + v for some vector v in V. then f(y) = f(x + v) = f(x) + f(v) = f(x) + v',

where v' = f(v) is some element of V, since V is invariant under f. so f(y) is in f(x) + V, and thus f(y) + V = f(x) + V.

hence

$\bar{f}(y + V) = f(y) + V = f(x) + V = \bar{f}(x + V)$,

so $\bar{f}$ is indeed well-defined.

from here, it's all downhill: the linearity of $\bar{f}$ is a direct consequence of the linearity of f:

$\bar{f}((x + V) + (y + V)) = \bar{f}((x + y) + V) = f(x + y) + V = (f(x) + f(y)) + V = (f(x) + V) + (f(y) + V) = \bar{f}(x + V) + \bar{f}(y + V)$

$\bar{f}(c(x + V)) = \bar{f}(cx + V) = f(cx) + V = cf(x) + V = c(f(x) + V) = c\bar{f}(x + V)$.
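the abstract argument above can also be checked numerically. here is a minimal sketch: the particular map f(x, y) = (x + 2y, 3y), the choice of the x-axis as the invariant subspace V, and the trick of labeling each coset by its y-coordinate are my own illustrative assumptions, not part of the argument itself.

```python
def f(p):
    # a hypothetical linear map f(x, y) = (x + 2y, 3y); it sends
    # (x, 0) to (x, 0)-shaped points, so the x-axis V is invariant
    x, y = p
    return (x + 2 * y, 3 * y)

def coset(p):
    # a coset p + V of the x-axis is the horizontal line through p,
    # so its y-coordinate serves as a canonical label for it
    return p[1]

def f_bar(label):
    # the induced map on cosets: pick any representative with that
    # label, apply f, and read off the label of the result
    return coset(f((0.0, label)))

# well-definedness: y = x + v with v in V lies in the same coset as x,
# and f(y) lies in the same coset as f(x)
x, v = (1.0, 5.0), (7.0, 0.0)
y = (x[0] + v[0], x[1] + v[1])
assert coset(y) == coset(x)
assert coset(f(y)) == coset(f(x))

# linearity of f_bar, inherited from the linearity of f
a, b, c = 2.0, 5.0, 4.0
assert f_bar(a + b) == f_bar(a) + f_bar(b)
assert f_bar(c * a) == c * f_bar(a)
```

picking the representative (0, label) in f_bar is harmless precisely because of the well-definedness just proved: any other representative gives the same answer.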

it might be helpful to see a simple concrete example.

let $V = \mathbb{R}^2$, the Euclidean plane, with the usual vector operations, and suppose $X = \{(x, 0) : x \in \mathbb{R}\}$, the x-axis. what do the elements of V/X look like?

well, anything in $(x, y) + X$ has a 2nd coordinate of y, so the elements of V/X are all horizontal lines (we get one for each real number y).

so suppose f(x, y) = (3x + y, 2y). it should be clear that X is an invariant subspace for f, since f(x, 0) = (3x, 0) lies in X.

then $\bar{f}$ is the mapping that takes the line going through y to the line going through 2y. in other words, $\bar{f}$ acts "just like" the function $a \mapsto 2a$ (of one real variable).

the reason being, when we act "mod X", we are "shrinking" the entire x-dimension down to 0. so what f does on the first coordinate becomes irrelevant, as far as $\bar{f}$ is concerned.
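the whole example can be verified in a few lines of code. a sketch, where labeling each horizontal line by its height is my own device for representing cosets:

```python
def f(p):
    # the map from the example: f(x, y) = (3x + y, 2y)
    x, y = p
    return (3 * x + y, 2 * y)

def coset(p):
    # the coset (x, y) + X is the horizontal line at height y,
    # so label it by that height
    return p[1]

# X (the x-axis) is invariant: f(x, 0) = (3x, 0) stays on the x-axis
assert f((4.0, 0.0))[1] == 0.0

# f_bar sends the line at height y to the line at height 2y,
# no matter which representative of the coset we pick
for rep in [(0.0, 5.0), (2.0, 5.0), (-9.0, 5.0)]:
    assert coset(f(rep)) == 2 * coset(rep)   # acts like a -> 2a
```

note that the first coordinates of the three representatives differ wildly, yet all three land on the same line at height 10: exactly the "shrinking of the x-dimension" described above.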