Lin Alg Proofs and Counterexamples

dwsmith

MHF Hall of Honor
Mar 2010
3,093
582
Florida
I have compiled 57 prove-or-disprove linear algebra questions for my final; however, they may be useful to others as well.

Contributors to some of the solutions are HallsofIvy, Failure, Tikoloshe, jakncoke, tonio, and Defunkt.

If you discover any errors in one of the solutions, then feel free to reply with the number and correction.


Moderator Edit:
1. If you want to thank dwsmith, please click on the Thanks button (do NOT post replies here unless you have a suggestion or erratum).
2. The original thread can be viewed at http://www.mathhelpforum.com/math-help/linear-abstract-algebra/142686-lin-alg-proofs-counterexamples-2.html#post511378.
 

Attachments


dwsmith

MHF Hall of Honor
Mar 2010
3,093
582
Florida
Here is a general proof for a vector space \(\displaystyle V\) that shows when a set of \(\displaystyle k+1\) vectors will be linearly independent and when it will be linearly dependent.

Let \(\displaystyle \mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k\) be lin. ind. vectors in V. If we add a vector \(\displaystyle \mathbf{x}_{k+1}\), do we still have a set of lin. ind. vectors?

(i) Assume \(\displaystyle \mathbf{x}_{k+1}\in\) \(\displaystyle Span (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k)\)

\(\displaystyle \mathbf{x}_{k+1}=c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}\) for some scalars \(\displaystyle c_1,...,c_k\).

Rearranging gives \(\displaystyle c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}+c_{k+1}\mathbf{x}_{k+1}=0\) with \(\displaystyle c_{k+1}=-1\).

Since \(\displaystyle -1\neq 0\), this is a nontrivial dependence relation; therefore \(\displaystyle (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k, \mathbf{x}_{k+1})\) is linearly dependent.

(ii) Assume \(\displaystyle \mathbf{x}_{k+1}\notin\) \(\displaystyle Span (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k)\)

Suppose \(\displaystyle c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}+c_{k+1}\mathbf{x}_{k+1}=0\).

Then \(\displaystyle c_{k+1}=0\); otherwise \(\displaystyle \mathbf{x}_{k+1}=\frac{-c_1}{c_{k+1}}\mathbf{x}_1+...+\frac{-c_k}{c_{k+1}}\mathbf{x}_k\in Span (\mathbf{x}_1, ..., \mathbf{x}_k)\), which is a contradiction.

With \(\displaystyle c_{k+1}=0\), the relation reduces to \(\displaystyle c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}=0\), and since \(\displaystyle \mathbf{x}_1, ..., \mathbf{x}_k\) are linearly independent, \(\displaystyle c_1=...=c_k=0\). Hence \(\displaystyle c_1=...=c_{k+1}=0\), so \(\displaystyle (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k, \mathbf{x}_{k+1})\) is linearly independent.
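A quick numerical illustration of the two cases (this check is mine, not part of the proof; it assumes NumPy, and the specific vectors in \(\displaystyle \mathbb{R}^3\) are arbitrary). A set of column vectors is linearly independent exactly when the matrix they form has rank equal to the number of vectors.

[CODE=python]
# Illustrative check of cases (i) and (ii); the vectors are arbitrary examples.
import numpy as np

x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])           # x1, x2 are linearly independent in R^3

# Case (i): x3 lies in Span(x1, x2), so the enlarged set is linearly dependent.
x3_in = 2 * x1 - 5 * x2
print(np.linalg.matrix_rank(np.column_stack([x1, x2, x3_in])))   # 2 < 3 -> dependent

# Case (ii): x3 lies outside Span(x1, x2), so the enlarged set is linearly independent.
x3_out = np.array([0.0, 0.0, 1.0])
print(np.linalg.matrix_rank(np.column_stack([x1, x2, x3_out])))  # 3 -> independent
[/CODE]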
 

dwsmith

MHF Hall of Honor
Mar 2010
3,093
582
Florida
Let \(\displaystyle A\) be an \(\displaystyle n\times n\) matrix over a field \(\displaystyle F\). Then \(\displaystyle A\) is similar to an upper triangular matrix if and only if the characteristic polynomial can be factored into an expression of the form \(\displaystyle (\lambda_1-\lambda)(\lambda_2-\lambda)...(\lambda_n-\lambda)\).

Expanding \(\displaystyle p(\lambda)=det(A-\lambda I)\) along the first column, where the \(\displaystyle A_{i1}\) are the corresponding cofactors:

\(\displaystyle p(\lambda)=(a_{11}-\lambda)A_{11}+\sum_{i=2}^{n}a_{i1}A_{i1}\)

The terms in the sum have degree at most \(\displaystyle n-2\) in \(\displaystyle \lambda\), so the two highest-degree terms come from

\(\displaystyle (a_{11}-\lambda)(a_{22}-\lambda)...(a_{nn}-\lambda)=(-1)^n\lambda^n+(-1)^{n-1}(a_{11}+...+a_{nn})\lambda^{n-1}+...\)

Now suppose \(\displaystyle p(\lambda)=(\lambda_1-\lambda)(\lambda_2-\lambda)...(\lambda_n-\lambda)\), i.e. \(\displaystyle p(\lambda)=0\) has exactly \(\displaystyle n\) solutions \(\displaystyle \lambda_1,...,\lambda_n\). Expanding this product the same way and comparing coefficients of \(\displaystyle \lambda^{n-1}\) gives

\(\displaystyle tr(A)=a_{11}+...+a_{nn}=\sum_{i=1}^{n}\lambda_i\)

and evaluating at \(\displaystyle \lambda=0\) gives

\(\displaystyle p(0)=det(A)=\lambda_1\lambda_2...\lambda_n\)
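A quick numerical sanity check of the trace and determinant identities above (my own illustration, assuming NumPy; the random real matrix is arbitrary, and its eigenvalues are taken over \(\displaystyle \mathbb{C}\)).

[CODE=python]
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
eigvals = np.linalg.eigvals(A)           # roots of the characteristic polynomial

# det(A) = product of the eigenvalues, tr(A) = sum of the eigenvalues
print(np.isclose(np.prod(eigvals), np.linalg.det(A)))   # True
print(np.isclose(np.sum(eigvals), np.trace(A)))         # True
[/CODE]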
 

dwsmith

MHF Hall of Honor
Mar 2010
3,093
582
Florida
Let \(\displaystyle A\) be an \(\displaystyle n\times n\) matrix and \(\displaystyle B=I-2A+A^2\). Show that if \(\displaystyle \mathbf{x}\) is an eigenvector of A belonging to an eigenvalue \(\displaystyle \lambda\) of A, then \(\displaystyle \mathbf{x}\) is also an eigenvector of B belonging to an eigenvalue \(\displaystyle \mu\) of B.

\(\displaystyle B\mathbf{x}=(I-2A+A^2)\mathbf{x}=\mathbf{x}-2A\mathbf{x}+A^2\mathbf{x}=\mathbf{x}-2\lambda\mathbf{x}+A(\lambda\mathbf{x})=\mathbf{x}-2\lambda\mathbf{x}+(A\mathbf{x})\lambda\)\(\displaystyle =\mathbf{x}-2\lambda\mathbf{x}+\lambda^2\mathbf{x}=(1-2\lambda+\lambda^2)\mathbf{x}=\mu\mathbf{x}\)

Hence, \(\displaystyle \mu=1-2\lambda+\lambda^2=(1-\lambda)^2\).
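A quick numerical check of this result (my own illustration, assuming NumPy; the matrix is arbitrary and its eigenvalues may be complex).

[CODE=python]
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = np.eye(3) - 2 * A + A @ A            # B = I - 2A + A^2

lams, vecs = np.linalg.eig(A)
lam, x = lams[0], vecs[:, 0]             # an eigenpair of A
mu = 1 - 2 * lam + lam**2                # = (1 - lam)^2

print(np.allclose(B @ x, mu * x))        # True: x is an eigenvector of B for mu
[/CODE]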
 

dwsmith

MHF Hall of Honor
Mar 2010
3,093
582
Florida
Let \(\displaystyle \lambda\) be an eigenvalue of \(\displaystyle A\) and let \(\displaystyle \mathbf{x}\) be an eigenvector belonging to \(\displaystyle \lambda\). Use mathematical induction to show that, for \(\displaystyle m\geq 1\), \(\displaystyle \lambda^m\) is an eigenvalue of \(\displaystyle A^m\) and \(\displaystyle \mathbf{x}\) is an eigenvector of \(\displaystyle A^m\) belonging to \(\displaystyle \lambda^m\).

Let \(\displaystyle p(k)\) be the statement \(\displaystyle A^k\mathbf{x}=\lambda^k\mathbf{x}\).

Base case: \(\displaystyle p(1)\) holds, since \(\displaystyle A\mathbf{x}=\lambda\mathbf{x}\) by assumption.

Inductive step: assume \(\displaystyle p(k)\) is true, i.e. \(\displaystyle A^k\mathbf{x}=\lambda^k\mathbf{x}\). Then

\(\displaystyle A^{k+1}\mathbf{x}=A^kA\mathbf{x}=A^k(\lambda\mathbf{x})=\lambda(A^k\mathbf{x})=\lambda\lambda^k\mathbf{x}=\lambda^{k+1}\mathbf{x}\)

so \(\displaystyle p(k+1)\) is true.

By induction, \(\displaystyle A^m\mathbf{x}=\lambda^m\mathbf{x}\) for all \(\displaystyle m\geq 1\); since \(\displaystyle \mathbf{x}\neq\mathbf{0}\), \(\displaystyle \mathbf{x}\) is an eigenvector of \(\displaystyle A^m\) belonging to \(\displaystyle \lambda^m\).
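A quick numerical check of \(\displaystyle A^m\mathbf{x}=\lambda^m\mathbf{x}\) for a few values of \(\displaystyle m\) (my own illustration, assuming NumPy; the matrix is arbitrary).

[CODE=python]
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
lams, vecs = np.linalg.eig(A)
lam, x = lams[0], vecs[:, 0]                 # an eigenpair of A

for m in range(1, 6):
    lhs = np.linalg.matrix_power(A, m) @ x   # A^m x
    print(m, np.allclose(lhs, lam**m * x))   # True for each m
[/CODE]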
 

Ackbeet

MHF Hall of Honor
Jun 2010
6,318
2,433
CT, USA
Very nice review! I just had a few comments:

1. From Test 5, Problem 4, on page 4. I would say more than that: eigenvectors must be nonzero, by definition. It's not that the zero-eigenvector case is trivial; it's that it's not allowed.

2. Page 6, Problem 8: typo in problem statement. Change "I of -I" to "I or -I".

3. Page 8, Problem 21: the answer is correct, but the reasoning is incorrect. It is not true that \(\displaystyle \mathbf{x}\) and \(\displaystyle \mathbf{y}\) are linearly independent if and only if \(\displaystyle |\mathbf{x}^{T}\mathbf{y}|=0.\) That is the condition for orthogonality, which is a stronger condition than linear independence. Counterexample: \(\displaystyle \mathbf{x}=(\sqrt{2}/2)(1,1),\) and \(\displaystyle \mathbf{y}=(1,0).\) Both are unit vectors, as stipulated. We have that \(\displaystyle |\mathbf{x}^{T}\mathbf{y}|=\sqrt{2}/2\not=0,\) and yet
\(\displaystyle a\mathbf{x}+b\mathbf{y}=\mathbf{0}\) requires \(\displaystyle a=b=0,\) which implies linear independence.

Instead, the argument should just produce a simple counterexample, such as \(\displaystyle \mathbf{x}=\mathbf{y}=(1,0)\).
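A quick check of both points in item 3 (my own illustration, assuming NumPy).

[CODE=python]
import numpy as np

x = (np.sqrt(2) / 2) * np.array([1.0, 1.0])   # unit vector
y = np.array([1.0, 0.0])                      # unit vector

print(abs(x @ y))                                        # sqrt(2)/2 != 0, so not orthogonal
print(np.linalg.matrix_rank(np.column_stack([x, y])))    # 2: yet linearly independent

# The simple counterexample x = y = (1, 0): two unit vectors that are linearly dependent.
print(np.linalg.matrix_rank(np.column_stack([y, y])))    # 1
[/CODE]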

Good work, though!
 
May 2011
35
3
kolkata, india
An \(\displaystyle n\times n\) matrix \(\displaystyle A\) over a field \(\displaystyle F\) is similar to an upper triangular matrix if and only if its minimal polynomial is a product of linear factors.
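A concrete illustration of both criteria (this example is mine, not from the posts above): over \(\displaystyle \mathbb{R}\), the matrix \(\displaystyle A=\begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}\) has characteristic polynomial \(\displaystyle \lambda^2+1\), which does not factor into linear factors over \(\displaystyle \mathbb{R}\), so \(\displaystyle A\) is not similar over \(\displaystyle \mathbb{R}\) to an upper triangular matrix. Over \(\displaystyle \mathbb{C}\), the minimal polynomial \(\displaystyle \lambda^2+1=(\lambda-i)(\lambda+i)\) is a product of linear factors, and indeed \(\displaystyle A\) is diagonalizable over \(\displaystyle \mathbb{C}\), hence similar to an upper triangular matrix.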
 

dwsmith

MHF Hall of Honor
Mar 2010
3,093
582
Florida
I have a workbook of 100 proofs I am typing up for Linear Algebra. I am not sure all the solutions are correct, so I will post what I have done so far (15 problems) for review and correct errors as they are found. Once they are all done, I will add them to the other sticky of proofs I already have up.

Deveno, Drexel, Pickslides, FernandoRevilla, and Ackbeet have helped with some of the problems already.

Updated pdf with more solutions

Updated pdf file.
 

Attachments

Dec 2017
4
0
India
I have a question which seems a little elementary, but if someone gives me a proof or an explanation I would be happy and indebted to them.

Though it seems elementary, I hope you will clear up the doubt if your time permits.

My doubt is regarding the rank of a matrix. Let A be a matrix of order m×n and B be a matrix of order n×p, with entries from a field of characteristic zero. If A is a full row rank matrix (it need not be a square matrix), is it true that rank(AB)=rank(B)? Does it depend on the field? I ask this question because over the field of complex numbers I have a counterexample.

Take $A=\begin{pmatrix} 2 & \dfrac{1+\sqrt{3/5}\,i}{2} & \dfrac{1-\sqrt{3/5}\,i}{2}\\ 1 & 1 & 1 \end{pmatrix}$ and $B=\begin{pmatrix} 1/2 & 1\\ \dfrac{2}{1+\sqrt{3/5}\,i} & 1\\ \dfrac{2}{1-\sqrt{3/5}\,i} & 1 \end{pmatrix}$. Note that rank(A)=rank(B)=2, but rank(AB)=1. Is the result not true for $F=\mathbb{C}$?
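A quick numerical check of this example (assuming NumPy; it only verifies the ranks, it does not settle the question about the field).

[CODE=python]
import numpy as np

c = np.sqrt(3 / 5) * 1j                  # the complex number sqrt(3/5)*i
A = np.array([[2, (1 + c) / 2, (1 - c) / 2],
              [1, 1, 1]])
B = np.array([[1 / 2, 1],
              [2 / (1 + c), 1],
              [2 / (1 - c), 1]])

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))   # 2 2
print(np.linalg.matrix_rank(A @ B))                         # 1
[/CODE]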