Prove number of solutions to Ax=b two ways: rank nullity formula and RRE form and ERO

A is an m×n matrix and T: R^n → R^m, T(x) = Ax, is the linear transformation it defines. Consider the system Ax = b.

1) Show that with m < n, Ax = b either has no solution or has infinitely many

2) Show that with A of rank m, Ax = b always has a solution

3) Show that with A of rank n, Ax = b has at most one solution

4) Show that where m = n and A has rank n, Ax = b has precisely one solution

I'm not sure how to start on either method with 1)

For 2) doing the RRE form method, A can be reduced down to a matrix with no zero rows, but I get stuck there.

I'm confused about 3: if A has rank n, does that mean that n ≤ m?

4) I understand the RRE form method because it can be reduced to the identity matrix, but I don't know how to prove it using the rank nullity formula

Any help would be very much appreciated!

Re: Prove number of solutions to Ax=b two ways: rank nullity formula and RRE form and ERO

Hi Rachel123,

For 1) suppose x_{0} is a solution. Now looking at the rank-nullity theorem might be helpful. Since m < n, what can we conclude about the kernel of A?

I think once the answer to this question is established you'll see what to do. If this is too cryptic let me know and I'll give further hints. Good luck!

Re: Prove number of solutions to Ax=b two ways: rank nullity formula and RRE form and ERO

1) this is the same as saying: if Ax = b has even one solution, it has infinitely many (one and only one is not an option).

suppose that m < n. the rank-nullity theorem tells us that:

n = rank(A) + dim(ker(A))

since rank(A) = dim(im(T)) ≤ m < n, this tells us that dim(ker(A)) ≥ 1. in particular, there must be some non-zero vector v in the nullspace of A.

it then stands to reason that we have an infinite number of vectors in the nullspace of A, av, for every non-zero real number a:

A(av) = a(Av) = a(0) = 0 (with 0 meaning the 0-vector of R^{m}).

if we have one vector x with Ax = b, then we have the infinite number of vectors {x + av: v in ker(A), a in R*}.

and A(x + av) = A(x) + A(av) = b + 0 = b. note that x ≠ x + av, for ANY non-zero a, because if x = x + av we would get:

0 = av, contradicting that a is a non-zero real number, and v is a non-zero vector.
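the argument above can be checked numerically. here's a small pure-Python sketch (the matrix A, the solution x, and the nullspace vector v below are made up for the example):

```python
# A is 2x3, so m = 2 < n = 3, and rank(A) = 2 forces dim(ker(A)) = 1.
A = [[1, 2, 3],
     [0, 1, 1]]

def matvec(M, x):
    """multiply a matrix (given as a list of rows) by a vector."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

# v is a (made-up) non-zero vector in the nullspace of A: Av = 0.
v = [-1, -1, 1]
assert matvec(A, v) == [0, 0]

# x is one solution of Ax = b, where b = Ax = (3, 1).
x = [1, 1, 0]
b = matvec(A, x)

# x + a*v is then a different solution for every non-zero real a:
for a in [1, 2, -5]:
    x_new = [x_i + a * v_i for x_i, v_i in zip(x, v)]
    assert x_new != x
    assert matvec(A, x_new) == b
```

any finite sample of a's works the same way, which is the point: one solution plus a non-zero nullspace vector already gives infinitely many.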

for 2) suppose that A is of rank m. doesn't this mean that im(T) is all of R^{m} (that is, that T is onto)?

for 3) yes, for rank(A) = n, m has to be "big enough" for R^{m} to fit an n-dimensional subspace (im(T)) inside it. what must the nullspace of A be, in this case?

for 4) in this case, we have: n = rank(A) + dim(ker(A)) = n + dim(ker(A)). what does dim(ker(A)) = 0 tell us about how big the nullspace is?

can we use (2) and (3) in some special way here?

Re: Prove number of solutions to Ax=b two ways: rank nullity formula and RRE form and ERO

Thanks for the explanation of 1). 2) seems pretty clear too now

For 3) the nullspace of A consists of just the empty set, right? So there does not exist any vector v such that Av = 0, and then it's just the reverse of the argument in 1). Hence, if there does exist a solution, it is the only one?

Oh, and then 4 can be shown using logic from 2) and 3)

Thank you!

And I've figured out the reduced row echelon form methods too now, so thanks! :)

Re: Prove number of solutions to Ax=b two ways: rank nullity formula and RRE form and ERO

no, the BASIS of the nullspace is the empty set. the nullspace itself is not empty: it contains the 0-vector (trivially, A(0) = 0, for any matrix A, or indeed for any linear transformation T). remember, the nullspace of A IS a subspace of R^{n}, and subspaces can't be empty.

now suppose we want to use RREF instead. let's look at that.

1) we start with a matrix A, and put it in RREF by using ERO. since A only has m rows, its rank can be at most m (at most m non-zero rows in RREF).

the matrices we use to put A in RREF are all invertible, so we wind up with a matrix A' = PA (for some invertible matrix P representing the accumulated ERO).

note that if Ax = b has a solution, let's call it x_{0}, then so does A'x = b' (where b' is b with the same ERO done to IT, that is b' = Pb):

A'x_{0} = PAx_{0} = Pb (usually A|b are combined as an AUGMENTED matrix, and put in RREF form to get an augmented matrix A'|b'), that is every solution to Ax = b is also a solution to A'x = b' (that is PAx = Pb).
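as a sanity check on the "ERO = left-multiplication by an invertible matrix" step, here is a tiny pure-Python sketch (the matrices below are made up for the example):

```python
def matmul(X, Y):
    """multiply two matrices given as lists of rows."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# the ERO "add -2 times row 0 to row 1" of a 2-row matrix is exactly
# left-multiplication by this elementary matrix E:
E = [[1, 0],
     [-2, 1]]
A = [[1, 2, 3],
     [2, 5, 7]]
EA = matmul(E, A)
assert EA == [[1, 2, 3], [0, 1, 1]]   # same result as doing the row op directly

# E is invertible: the inverse ERO "add +2 times row 0 to row 1" undoes it,
# which is why solutions of Ax = b and (EA)x = Eb coincide.
E_inv = [[1, 0],
         [2, 1]]
assert matmul(E_inv, EA) == A
```

the accumulated P is just a product of such elementary matrices, hence itself invertible.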

the non-zero rows of A' (= RREF(A)) all start with a leading 1. we have rank(A) ≤ m of these, so at most m columns in which these leading 1's appear (these are called "pivot columns").

let's look at what our matrix equation A'x = b' looks like:

let's suppose Ax = b has some solution, so that A'x = b' has some solution, too. what is going to be of interest to us are the solutions to:

A'x = 0. note we now have k ≤ m < n equations in n unknowns (one for each non-zero row of A'). look at the last non-zero row of A', which gives us an equation for the k-th entry of A'x (which will be 0):

a_{k1}x_{1} + a_{k2}x_{2} + ... + a_{kn}x_{n} = 0

(the first few a_{kj} will be 0, and the first non-zero a_{kj} will be 1).

this tells us that for some j:

x_{j} = -(a_{k,j+1}x_{j+1} + ... + a_{kn}x_{n}). this tells us no matter how we choose the last n-j coordinates for x, that determines our choice for x_{j}.

going up to the row above, we have:

x_{j'} = -(a_{k-1,j'+1}x_{j'+1} + ... + a_{k-1,n}x_{n}) for some j' < j.

this means no matter how we choose the last n-j' coordinates for x, this determines our choice of x_{j'}.

so pick the last n-j coordinates of x arbitrarily, which will determine x_{j}, then pick the next j-j'-1 coordinates arbitrarily, which will then determine x_{j'}

(so far we have 2 coordinates determined by (n-j) + (j-j'-1) = n-j'-1 coordinates).

with the row above, and j" < j' < j, we get: 3 coordinates determined by (n-j) + (j-j'-1) + (j'-j"-1) = n-j"-2 coordinates. eventually, we'll reach row 1,

continuing this process for each non-zero row of A' will wind up giving us k coordinates of x that are determined by:

(n-1) - (k-1) = n-k coordinates. so we can pick n-k coordinates arbitrarily, and still have a vector x with A'x = 0. that is, we will have n-k (where k = rank(A)) "free parameters".
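the counts k and n-k can be computed mechanically. below is a small pure-Python RREF sketch using exact arithmetic (the 2x4 matrix is made up for the example):

```python
from fractions import Fraction

def rref(rows):
    """reduce a matrix (list of rows) to RREF with exact arithmetic;
    returns the reduced matrix and the list of pivot columns."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots = []
    r = 0
    for c in range(len(M[0])):
        # find a row at or below r with a non-zero entry in column c
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue                          # no pivot in this column
        M[r], M[p] = M[p], M[r]               # swap it up (an ERO)
        M[r] = [x / M[r][c] for x in M[r]]    # scale to get a leading 1 (an ERO)
        for i in range(len(M)):               # clear column c elsewhere (ERO)
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

# a made-up 2x4 example: m = 2 < n = 4
A = [[1, 2, 0, 1],
     [2, 4, 1, 3]]
R, pivots = rref(A)
k, n = len(pivots), len(A[0])   # k = rank(A) = 2, so n - k = 2 free parameters
assert k == 2 and n - k == 2
```

the non-pivot columns (here columns 1 and 3) are exactly the n-k "free parameters" in the discussion above.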

note that k ≤ m < n, so n - k is greater than 0. that means we can pick ONE of these free parameters to be non-zero, which will give us a non-zero solution x_{0} to A'x_{0} = 0.

but then, for any real number c, A'(cx_{0}) = 0, as well. and if c ≠ 0, then cx_{0} ≠ 0, either. so we get at least as many solutions to A'x = 0, as there are non-zero real numbers c.

now since we have a solution (let's call it y, since i don't want to "re-use" letters), to A'x = b',

A'(y + cx_{0}) = A'y + A'(cx_{0}) = b' + c(A'x_{0}) = b' + c0 = b' + 0 = b'. so we have an infinite number of solutions y+cx_{0} to A'x = b'.

finally (!), since P is invertible: we have that P^{-1}b' = P^{-1}(Pb) = Ib = b.

so A(y+cx_{0}) = IA(y+cx_{0}) = (P^{-1}P)(A(y+cx_{0})) = P^{-1}(PA(y+cx_{0})) = P^{-1}(A'(y+cx_{0}))

= P^{-1}(b') = b, so each of the infinite solutions y + cx_{0} to A'x = b' is a solution to Ax = b.

(more involved than i expected. my bad.)