# Lin Alg Proofs and Counterexamples

• May 2nd 2010, 02:13 PM
dwsmith
Lin Alg Proofs and Counterexamples
I have compiled 57 prove-or-disprove linear algebra questions for my final; however, these may be useful to all.

Contributors to some of the solutions are HallsofIvy, Failure, Tikoloshe, jakncoke, tonio, and Defunkt.

If you discover any errors in one of the solutions, then feel free to reply with the number and correction.

Moderator Edit:
1. If you want to thank dwsmith, please click on the Thanks button (do NOT post replies here unless you have a suggestion or erratum).
2. The original thread can be viewed at http://www.mathhelpforum.com/math-he...tml#post511378.
• May 17th 2010, 01:13 PM
dwsmith
Here is a general proof for a vector space $V$ that shows when a set of $k+1$ vectors will be linearly independent and when it will be linearly dependent.

Let $\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k$ be linearly independent vectors in $V$. If we add a vector $\mathbf{x}_{k+1}$, do we still have a set of linearly independent vectors?

(i) Assume $\mathbf{x}_{k+1}\in Span(\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k)$. Then

$\displaystyle\mathbf{x}_{k+1}=c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}$

for some scalars $c_1,...,c_k$, so

$\displaystyle c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}+c_{k+1}\mathbf{x}_{k+1}=0$ with $c_{k+1}=-1$

Since $-1\neq 0$, this is a nontrivial dependence relation; therefore, $(\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k, \mathbf{x}_{k+1})$ is linearly dependent.

(ii) Assume $\mathbf{x}_{k+1}\notin Span(\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k)$ and suppose

$\displaystyle c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}+c_{k+1}\mathbf{x}_{k+1}=0$

Then $c_{k+1}=0$; otherwise $\displaystyle \mathbf{x}_{k+1}=\frac{-c_1}{c_{k+1}}\mathbf{x}_1+...+\frac{-c_k}{c_{k+1}}\mathbf{x}_k$ would lie in the span, which is a contradiction.

The relation therefore reduces to $\displaystyle c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}=0$, and since $\mathbf{x}_1, ..., \mathbf{x}_k$ are linearly independent, $c_1=...=c_{k+1}=0$. Hence $(\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k, \mathbf{x}_{k+1})$ is linearly independent.
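The two cases above can be checked numerically. A minimal sketch in Python with NumPy, using hypothetical vectors in $\mathbb{R}^3$ (the specific vectors are illustrative choices, not from the original post):

```python
import numpy as np

# Two linearly independent vectors in R^3.
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])

# Case (i): x3 lies in Span(x1, x2) -> the extended set is dependent
# (the rank of the matrix with these columns stays at 2).
x3_in = 2 * x1 + 3 * x2
rank_dep = np.linalg.matrix_rank(np.column_stack([x1, x2, x3_in]))

# Case (ii): x3 lies outside Span(x1, x2) -> the extended set is
# independent (the rank rises to 3).
x3_out = np.array([0.0, 0.0, 1.0])
rank_ind = np.linalg.matrix_rank(np.column_stack([x1, x2, x3_out]))
```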
• May 17th 2010, 08:56 PM
dwsmith
An $n \times n$ matrix $A$ over a field $F$ is similar to an upper triangular matrix iff the characteristic polynomial can be factored into an expression of the form $\displaystyle (\lambda_1-\lambda)(\lambda_2-\lambda)...(\lambda_n-\lambda)$

Expanding along the first column,

$\displaystyle p(\lambda)=det(A-\lambda I)=(a_{11}-\lambda)A_{11}+\sum_{i=2}^{n}a_{i1}A_{i1}$

where $A_{i1}$ is the $(i,1)$ cofactor of $A-\lambda I$. When $A$ is upper triangular, the terms with $i\geq 2$ vanish and

$\displaystyle p(\lambda)=(a_{11}-\lambda)A_{11}=(a_{11}-\lambda)(a_{22}-\lambda)...(a_{nn}-\lambda)$

$\displaystyle =(-1)^n\lambda^n+(-1)^{n-1}(a_{11}+...+a_{nn})\lambda^{n-1}+...$

If the characteristic polynomial factors completely, $p(\lambda)=0$ has exactly n solutions $\lambda_1,...,\lambda_n$ (counted with multiplicity), and

$\displaystyle p(\lambda)=(\lambda_1-\lambda)(\lambda_2-\lambda)...(\lambda_n-\lambda)$

Comparing the two forms gives

$\displaystyle p(0)=det(A)=\lambda_1\lambda_2...\lambda_n$

and, from the coefficient of $\lambda^{n-1}$,

$\displaystyle tr(A)=\sum_{i=1}^{n}\lambda_i$
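The determinant and trace identities can be sanity-checked numerically. A sketch with NumPy, using an illustrative upper triangular matrix (not from the original post):

```python
import numpy as np

# Illustrative upper triangular matrix; its eigenvalues are the
# diagonal entries 1, 4, 6.
A = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [0., 0., 6.]])
eig = np.linalg.eigvals(A)

# det(A) = product of eigenvalues, tr(A) = sum of eigenvalues.
det_matches = np.isclose(np.prod(eig), np.linalg.det(A))
trace_matches = np.isclose(np.sum(eig), np.trace(A))
```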
• May 18th 2010, 11:01 PM
dwsmith
Let $A$ be an $n\times n$ matrix and $B=I-2A+A^2$. Show that if $\mathbf{x}$ is an eigenvector of A belonging to an eigenvalue $\lambda$ of A, then $\mathbf{x}$ is also an eigenvector of B belonging to an eigenvalue $\mu$ of B.

$B\mathbf{x}=(I-2A+A^2)\mathbf{x}=\mathbf{x}-2A\mathbf{x}+A^2\mathbf{x}=\mathbf{x}-2\lambda\mathbf{x}+A(\lambda\mathbf{x})=\mathbf{x}-2\lambda\mathbf{x}+(A\mathbf{x})\lambda$ $=\mathbf{x}-2\lambda\mathbf{x}+\lambda^2\mathbf{x}=(1-2\lambda+\lambda^2)\mathbf{x}=\mu\mathbf{x}$

Hence, $\mu=(1-2\lambda+\lambda^2)$
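This can be checked numerically; a sketch with NumPy, using a hypothetical matrix and eigenpair (not from the original post):

```python
import numpy as np

# Hypothetical upper triangular A with eigenvalue lam = 2 and
# eigenvector x = e1.
A = np.array([[2., 1.],
              [0., 3.]])
x = np.array([1., 0.])
lam = 2.0

B = np.eye(2) - 2 * A + A @ A
mu = 1 - 2 * lam + lam**2       # the eigenvalue of B predicted above

# x should be an eigenvector of B belonging to mu.
same = np.allclose(B @ x, mu * x)
```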
• May 20th 2010, 06:49 PM
dwsmith
Let $\lambda$ be an eigenvalue $A$ and let $\mathbf{x}$ be an eigenvector belonging to $\lambda$. Use math induction to show that, for $m\geq 1$, $\lambda^m$ is an eigenvalue of $A^m$ and $\mathbf{x}$ is an eigenvector of $A^m$ belonging to $\lambda^m$.

$A\mathbf{x}=\lambda\mathbf{x}$

Let $p(m)$ be the statement $A^m\mathbf{x}=\lambda^m\mathbf{x}$.

Base case: $p(1)$ holds, since $A\mathbf{x}=\lambda\mathbf{x}$.

Inductive step: assume $p(k)$ is true. Then

$A^{k+1}\mathbf{x}=A^kA\mathbf{x}=A^k(\lambda\mathbf{x})=\lambda(A^k\mathbf{x})=\lambda\lambda^k\mathbf{x}=\lambda^{k+1}\mathbf{x}$

so $p(k+1)$ is true. By induction, $A^m\mathbf{x}=\lambda^m\mathbf{x}$ for all $m\geq 1$.
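A numerical sketch of the inductive claim, using a hypothetical matrix and eigenpair (not from the original post):

```python
import numpy as np

# Hypothetical A with eigenpair (lam, x); check A^m x = lam^m x for
# several values of m.
A = np.array([[2., 1.],
              [0., 3.]])
x = np.array([1., 0.])
lam = 2.0

all_match = all(
    np.allclose(np.linalg.matrix_power(A, m) @ x, lam**m * x)
    for m in range(1, 6)
)
```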
• Feb 4th 2011, 06:37 PM
Ackbeet

1. From Test 5, Problem 4, on page 4. I would say more: eigenvectors must be nonzero, by definition. It's not that the zero-eigenvector case is trivial; it's that it's not allowed.

2. Page 6, Problem 8: typo in problem statement. Change "I of -I" to "I or -I".

3. Page 8, Problem 21: the answer is correct, but the reasoning is incorrect. It is not true that $\mathbf{x}$ and $\mathbf{y}$ are linearly independent if and only if $|\mathbf{x}^{T}\mathbf{y}|=0.$ That is the condition for orthogonality, which is a stronger condition than linear independence. Counterexample: $\mathbf{x}=(\sqrt{2}/2)(1,1),$ and $\mathbf{y}=(1,0).$ Both are unit vectors, as stipulated. We have that $|\mathbf{x}^{T}\mathbf{y}|=\sqrt{2}/2\not=0,$ and yet
$a\mathbf{x}+b\mathbf{y}=\mathbf{0}$ requires $a=b=0,$ which implies linear independence.

Instead, the argument should just produce a simple counterexample, such as $\mathbf{x}=\mathbf{y}=(1,0)$.
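The counterexample above can be verified numerically; a sketch with NumPy:

```python
import numpy as np

x = (np.sqrt(2) / 2) * np.array([1.0, 1.0])  # unit vector
y = np.array([1.0, 0.0])                     # unit vector

# |x^T y| = sqrt(2)/2, nonzero, so x and y are not orthogonal...
dot = abs(x @ y)

# ...yet the matrix with columns x, y has rank 2, so they are
# linearly independent.
rank = np.linalg.matrix_rank(np.column_stack([x, y]))
```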

Good work, though!
• May 27th 2011, 02:33 AM
sorv1986
Quote:

Originally Posted by dwsmith
A is matrix n x n over field F, similar to an upper triangular matrix iff. the characteristic polynomial can be factored into an expression of the form $\displaystyle (\lambda_1-\lambda)(\lambda_2-\lambda)...(\lambda_n-\lambda)$


A is an n x n matrix over a field F, similar to an upper triangular matrix iff the minimal polynomial is a product of linear factors
• Nov 6th 2011, 04:41 PM
carlosgrahm
Re: Lin Alg Proofs and Counterexamples
Great post, thank you.
• Dec 15th 2011, 06:58 PM
dwsmith
I have a workbook of 100 proofs I am typing up for Linear Algebra. I am not sure all the solutions are correct, so I will post what I have done (15 problems so far) for review and correct errors as they are found. Once they are all done, I will add them to the other sticky of proofs I already have up.

Deveno, Drexel, Pickslides, FernandoRevilla, and Ackbeet have helped with some of the problems already.

Updated pdf with more solutions

Updated pdf file.
• Dec 9th 2017, 09:45 AM
shahul
Re: Lin Alg Proofs and Counterexamples
I have a question which seems to be a little elementary, but if someone gives me a proof or explanation I would be happy and indebted to them.

Though it seems elementary, I hope you will clear up the doubt if your time permits.

My doubt is regarding the rank of a matrix. Let A be a matrix of order mXn and B be a matrix of order nXp, with entries from a field of characteristic zero. If A is a full row rank matrix (it need not be a square matrix), is it true that rank(AB)=rank(B)? Does it depend on the field? I ask because over the field of complex numbers I have a counterexample.

Take $A=\begin{pmatrix} 2 & \dfrac{1+\sqrt{3/5}\,i}{2} & \dfrac{1-\sqrt{3/5}\,i}{2}\\ 1 & 1 & 1 \end{pmatrix}$ and $B=\begin{pmatrix} 1/2 & 1\\ \dfrac{2}{1+\sqrt{3/5}\,i} & 1\\ \dfrac{2}{1-\sqrt{3/5}\,i} & 1 \end{pmatrix}$. See that rank(A)=rank(B)=2, but rank(AB)=1. Is the result not true for $F=\mathbb{C}$?
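The claimed ranks in this example can be confirmed numerically; a sketch with NumPy using complex arithmetic:

```python
import numpy as np

s = 1j * np.sqrt(3 / 5)  # the sqrt(3/5) i appearing in the matrices

A = np.array([[2, (1 + s) / 2, (1 - s) / 2],
              [1, 1, 1]])
B = np.array([[1 / 2, 1],
              [2 / (1 + s), 1],
              [2 / (1 - s), 1]])

rank_A = np.linalg.matrix_rank(A)        # 2: A has full row rank
rank_B = np.linalg.matrix_rank(B)        # 2
rank_AB = np.linalg.matrix_rank(A @ B)   # 1: every entry of AB equals 3
```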
• Dec 9th 2017, 10:37 AM
Idea
Re: Lin Alg Proofs and Counterexamples
Quote:

Originally Posted by shahul
I have a question which seems to be a little elementary, but if someone gives me a proof or explanation I would be happy and indebted to them.

Though it seems elementary, I hope you will clear up the doubt if your time permits.

My doubt is regarding the rank of a matrix. Let A be a matrix of order mXn and B be a matrix of order nXp, with entries from a field of characteristic zero. If A is a full row rank matrix (it need not be a square matrix), is it true that rank(AB)=rank(B)? Does it depend on the field? I ask because over the field of complex numbers I have a counterexample.

Take $A=\begin{pmatrix} 2 & \dfrac{1+\sqrt{3/5}\,i}{2} & \dfrac{1-\sqrt{3/5}\,i}{2}\\ 1 & 1 & 1 \end{pmatrix}$ and $B=\begin{pmatrix} 1/2 & 1\\ \dfrac{2}{1+\sqrt{3/5}\,i} & 1\\ \dfrac{2}{1-\sqrt{3/5}\,i} & 1 \end{pmatrix}$. See that rank(A)=rank(B)=2, but rank(AB)=1. Is the result not true for $F=\mathbb{C}$?

another example

$A=\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$

$B=\begin{pmatrix} 1 & 0\\ 0 & 1\\ 0 & 1 \end{pmatrix}$

over the real numbers.

In general rank(AB) $\leq$ min{rank A, rank B}
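The real example above, checked numerically; a sketch with NumPy:

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [0., 0., 1.]])
B = np.array([[1., 0.],
              [0., 1.],
              [0., 1.]])

rank_A = np.linalg.matrix_rank(A)        # 2: full row rank
rank_B = np.linalg.matrix_rank(B)        # 2
rank_AB = np.linalg.matrix_rank(A @ B)   # 1, strictly below min{2, 2}
```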
• Dec 9th 2017, 10:14 PM
shahul
Re: Lin Alg Proofs and Counterexamples
Yes, Idea. But would anybody help me see under what conditions on the matrix $A$ we have rank(AB)=rank(B)?
• Dec 9th 2017, 10:22 PM
shahul
Re: Lin Alg Proofs and Counterexamples
Actually my question is to prove a conjecture. It is as follows:

Given a matrix $A$ of order $m\times n$ with entries from a field of characteristic zero, define $A^\theta$ as the transpose of the matrix obtained from $A$ by replacing each of its non-zero elements by its inverse and leaving the zeros as they are. Show that $Rank(AA^\theta)=Rank(A^\theta A)=\min\{Rank(A),Rank(A^\theta)\}$.

As per my example, it is false for the field of complex numbers. Will it be true for the field of real numbers?
• Dec 10th 2017, 02:29 AM
Idea
Re: Lin Alg Proofs and Counterexamples
Try the matrix

$A=\begin{pmatrix} 0 & 0 & 2 & 1\\ 2 & 2 & 2 & 2\\ 1 & 2 & 0 & 1\end{pmatrix}$
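A numerical check of this matrix against the conjecture, assuming the $A^\theta$ construction defined earlier in the thread; a sketch with NumPy:

```python
import numpy as np

A = np.array([[0., 0., 2., 1.],
              [2., 2., 2., 2.],
              [1., 2., 0., 1.]])

# Build A^theta: invert each nonzero entry, keep zeros, then transpose.
Ath = np.zeros_like(A)
mask = A != 0
Ath[mask] = 1.0 / A[mask]
Ath = Ath.T

rank_A = np.linalg.matrix_rank(A)            # 3
rank_Ath = np.linalg.matrix_rank(Ath)        # 3
rank_AAth = np.linalg.matrix_rank(A @ Ath)   # 2 < min{3, 3}
```

So rank(AA^theta) falls strictly below min{rank(A), rank(A^theta)}, which is why this matrix refutes the conjecture over the reals as well.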
• Dec 10th 2017, 09:19 AM
shahul
Re: Lin Alg Proofs and Counterexamples
Thank you so much for this eye-opener.