# Thread: Injectivity of the scalar product

1. ## Injectivity of the scalar product

I want to show that $\displaystyle \hat{A}: V^* \times V \rightarrow \mathbb{R}$ (where $\displaystyle V$ is a vector space and $\displaystyle V^*$ is its dual space) defined by:

$\displaystyle \hat{A}(v^*,v) \equiv <v^*, Av>$

is injective.

I have a proof from a book, but I don't quite understand what it's doing. It starts by:

Let $\displaystyle \hat{A}=0$. Then $\displaystyle <v^*,Av>=0 \ \forall v^* \in V^*, \ v \in V$.

Now since the scalar product is definite, this implies $\displaystyle Av=0 \ \forall v \in V$ and thus $\displaystyle A=0$.
Haven't they just proven it for 0?

Also, what does "scalar product is definite" mean?

2. Originally Posted by Showcase_22
I want to show that $\displaystyle \hat{A}: V^* \times V \rightarrow \mathbb{R}$ (where $\displaystyle V$ is a vector space and $\displaystyle V^*$ is its dual space) defined by:

$\displaystyle \hat{A}(v^*,v) \equiv <v^*, Av>$

is injective.

What is $\displaystyle A$ here? An invertible operator (matrix), perhaps? Otherwise the statement of the lemma/proposition/theorem is incomplete.

Tonio

I have a proof from a book, but I don't quite understand what it's doing. It starts by:

Haven't they just proven it for just 0?

Also, what does "scalar product is definite" mean?

3. Oh sorry yes!

A is an endomorphism of V.

It probably would have helped if I posted that earlier =S

4. i don't think the question is what you gave us. my guess is that the question is to prove that the map $\displaystyle \varphi : \text{End}_{\mathbb{R}}V \longrightarrow (V^* \times V)^*$ defined by $\displaystyle \varphi (A)=\hat{A}$ is injective.

5. Originally Posted by NonCommAlg
i don't think the question is what you gave us. my guess is that the question is to prove that the map $\displaystyle \varphi : \text{End}_{\mathbb{R}}V \longrightarrow (V^* \times V)^*$ defined by $\displaystyle \varphi (A)=\hat{A}$ is injective.

I agree: this seems much sounder. Anyway, to the OP: may we know which book you took this question from?

Tonio

6. Wow! I may mean that, but I have never seen that notation before!

The book is "Introduction to Vectors and Tensors: Volume 1". The authors are Ray M Bowen and C C Wang.

I'm trying to show that an isomorphism exists between $\displaystyle \vartheta_1^1(V)$ (the set of tensors of order (1,1)) and $\displaystyle L(V;V)$ (the set of linear maps from V to V).

The first part works quite well. We know that $\displaystyle \dim \vartheta_1^1(V)=N^2$ where $\displaystyle \dim V = N$.

Now we have to show that $\displaystyle \dim L(V;V)=N^2$. To do this let $\displaystyle e_1, \ldots , e_N$ be a basis for V. Define $\displaystyle N^2$ linear transformations $\displaystyle A^k_{\alpha} :V \rightarrow V$ by

$\displaystyle A^k_{\alpha}e_k=e_{\alpha}$ for $\displaystyle k, \alpha=1, \ldots , N$
$\displaystyle A^k_{\alpha} e_p=0$ for $\displaystyle k \neq p$
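A quick numerical sketch of these transformations (my own addition, identifying $\displaystyle V$ with $\displaystyle \mathbb{R}^3$ and using the standard basis — concrete choices the book does not make) shows they are just the matrix units, and that the $\displaystyle N^2$ of them are linearly independent:

```python
import numpy as np

N = 3  # illustrative dimension; the argument is the same for any N
I = np.eye(N)

# In the standard basis of R^N, A^k_alpha is the matrix unit with a single 1
# in row alpha, column k: it sends e_k to e_alpha and every e_p, p != k, to 0.
units = {(a, k): np.outer(I[a], I[k]) for a in range(N) for k in range(N)}

# Defining relations: A^k_alpha e_k = e_alpha, and A^k_alpha e_p = 0 for p != k.
assert np.allclose(units[(2, 0)] @ I[0], I[2])
assert np.allclose(units[(2, 0)] @ I[1], np.zeros(N))

# Linear independence: flattening the N^2 units gives an N^2 x N^2 matrix of
# full rank, so they form a basis of L(V;V) and dim L(V;V) = N^2.
stack = np.stack([u.ravel() for u in units.values()])
print(np.linalg.matrix_rank(stack))  # prints 9, i.e. N**2
```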

If A is an arbitrary member of $\displaystyle L(V;V)$ then $\displaystyle Ae_k \in V$ so

$\displaystyle Ae_k=\sum_{\alpha=1}^N A^{\alpha}_k e_{\alpha}$ where $\displaystyle k=1, \ldots ,N$.

But we also know that $\displaystyle A^k_{\alpha}e_k=e_{\alpha}$. This gives that:

$\displaystyle Ae_k=\sum_{\alpha=1}^N A^{\alpha}_k e_{\alpha}=\sum_{\alpha=1}^N A^{\alpha}_k A^k_{\alpha}e_k=\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}e_k$

We can rearrange this to get:

$\displaystyle \left( A-\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha} \right)e_k=0$ valid $\displaystyle \forall e_k \in \{e_1, \ldots ,e_N \}$

But we know that A is a linear map so the above statement is valid for any $\displaystyle v \in V$. ie.

$\displaystyle \left( A-\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha} \right)v =0$ valid $\displaystyle \forall v \in V$

This implies that $\displaystyle A= \sum_{\alpha=1}^N \sum_{s=1}^NA^{\alpha}_s A^s_{\alpha}$ meaning that the $\displaystyle N^2$ linear transformations we defined at the start generate $\displaystyle L(V;V)$.
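The spanning claim above can be sanity-checked numerically (my own sketch, identifying $\displaystyle V$ with $\displaystyle \mathbb{R}^N$ so that the coefficients $\displaystyle A^{\alpha}_k$ are just matrix entries):

```python
import numpy as np

N = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((N, N))  # an arbitrary element of L(V;V), in coordinates
I = np.eye(N)

# The coefficients A^alpha_k are the matrix entries: A e_k = sum_alpha A[alpha,k] e_alpha,
# so A is the sum of A[alpha,k] times the matrix unit sending e_k to e_alpha.
recon = sum(A[a, k] * np.outer(I[a], I[k]) for a in range(N) for k in range(N))
assert np.allclose(A, recon)  # the N^2 matrix units generate L(V;V)
print("A recovered from the matrix units")
```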

Now we have to prove that these transformations are linearly independent. We do this by setting

$\displaystyle \sum_{\alpha=1}^N \sum_{s=1}^NA^{\alpha}_s A^s_{\alpha}=0$.

From here we can use the $\displaystyle N^2$ linear transformations from before to get:

$\displaystyle \sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}(e_p)=\sum_{\alpha=1}^N A^{\alpha}_p e_{\alpha}=0$.

But since the $\displaystyle e_{\alpha}$ are basis vectors we know that $\displaystyle e_{\alpha} \neq 0$. Therefore $\displaystyle A^{\alpha}_p=0$ for $\displaystyle p, \alpha = 1, \ldots ,N$.

Therefore the set of $\displaystyle A^s_{\alpha}$ is a basis of $\displaystyle L(V;V)$.

This gives that $\displaystyle \dim L(V;V) =\left( \dim V \right)^2=N^2$.

There is also a theorem in the book stating that if two vector spaces have the same dimension then an isomorphism exists between the two. In this case $\displaystyle \dim L(V;V)= \dim \vartheta_1^1(V)=N^2$ so an isomorphism exists by the theorem.

Finally, I actually want to find an example of an isomorphism between $\displaystyle L(V;V)$ and $\displaystyle \vartheta_1^1(V)$. The book then defines the function $\displaystyle \hat{A}: V^* \times V \rightarrow \mathbb{R}$ where $\displaystyle \hat{A}(v^*,v) \equiv <v^*, Av>$ and $\displaystyle A$ is an endomorphism of V.

Since the two vector spaces have the same finite dimension, by the pigeonhole principle we only have to show that $\displaystyle \hat{A}$ is injective (thus implying that it's a bijection).

The book does this by setting $\displaystyle \hat{A}=0 \Rightarrow <v^*,Av>=0 \ \forall v^* \in V^*$ and $\displaystyle v \in V$ (I found this confusing: how does this prove that $\displaystyle \hat{A}$ is injective?)

Since the scalar product is definite (?) this gives $\displaystyle Av=0 \ \forall v \in V$ and thus $\displaystyle A=0$.

Consequently the operation "hat" is an isomorphism.

The last part is what I'm particularly confused about. I don't see how setting it to 0 will show that it's injective.

Sorry for the lengthy post, I figured it would be better if I showed exactly what I've done so far!

7. Originally Posted by Showcase_22
Wow! I may mean that, but I have never seen that notation before!

The book is "Introduction to Vectors and Tensors: Volume 1". The authors are Ray M Bowen and C C Wang.

I'm trying to show that an isomorphism exists between $\displaystyle \vartheta_1^1(V)$ (the set of tensors of order (1,1)) and $\displaystyle L(V;V)$ (the set of linear maps from V to V).

The first part works quite well. We know that $\displaystyle \dim \vartheta_1^1(V)=N^2$ where $\displaystyle \dim V = N$.

Now we have to show that $\displaystyle \dim L(V;V)=N^2$. To do this let $\displaystyle e_1, \ldots , e_N$ be a basis for V. Define $\displaystyle N^2$ linear transformations $\displaystyle A^k_{\alpha} :V \rightarrow V$ by

$\displaystyle A^k_{\alpha}e_k=e_{\alpha}$ for $\displaystyle k, \alpha=1, \ldots , N$
$\displaystyle A^k_{\alpha} e_p=0$ for $\displaystyle k \neq p$

If A is an arbitrary member of $\displaystyle L(V;V)$ then $\displaystyle Ae_k \in V$ so

$\displaystyle Ae_k=\sum_{\alpha=1}^N A^{\alpha}_k e_{\alpha}$ where $\displaystyle k=1, \ldots ,N$.

But we also know that $\displaystyle A^k_{\alpha}e_k=e_{\alpha}$. This gives that:

$\displaystyle Ae_k=\sum_{\alpha=1}^N A^{\alpha}_k e_{\alpha}=\sum_{\alpha=1}^N A^{\alpha}_k A^k_{\alpha}e_k=\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}e_k$

We can rearrange this to get:

$\displaystyle \left( A-\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha} \right)e_k=0$ valid $\displaystyle \forall e_k \in \{e_1, \ldots ,e_N \}$

But we know that A is a linear map so the above statement is valid for any $\displaystyle v \in V$. ie.

$\displaystyle \left( A-\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha} \right)v =0$ valid $\displaystyle \forall v \in V$

This implies that $\displaystyle A= \sum_{\alpha=1}^N \sum_{s=1}^NA^{\alpha}_s A^s_{\alpha}$ meaning that the $\displaystyle N^2$ linear transformations we defined at the start generate $\displaystyle L(V;V)$.

Now we have to prove that these transformations are linearly independent. We do this by setting

$\displaystyle \sum_{\alpha=1}^N \sum_{s=1}^NA^{\alpha}_s A^s_{\alpha}=0$.

From here we can use the $\displaystyle N^2$ linear transformations from before to get:

$\displaystyle \sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}(e_p)=\sum_{\alpha=1}^N A^{\alpha}_p e_{\alpha}=0$.

But since the $\displaystyle e_{\alpha}$ are basis vectors we know that $\displaystyle e_{\alpha} \neq 0$. Therefore $\displaystyle A^{\alpha}_p=0$ for $\displaystyle p, \alpha = 1, \ldots ,N$.

Therefore the set of $\displaystyle A^s_{\alpha}$ is a basis of $\displaystyle L(V;V)$.

This gives that $\displaystyle \dim L(V;V) =\left( \dim V \right)^2=N^2$.

I find it weird that you went through all this trouble to prove a very elementary fact from linear algebra which, I presume, must be assumed when you reach the necessary level to mess with tensors...

There is also a theorem in the book stating that if two vector spaces have the same dimension then an isomorphism exists between the two. In this case $\displaystyle \dim L(V;V)= \dim \vartheta_1^1(V)=N^2$ so an isomorphism exists by the theorem.

"...two vector spaces OVER THE SAME FIELD..."

Finally, I actually want to find an example of an isomorphism between $\displaystyle L(V;V)$ and $\displaystyle \vartheta_1^1(V)$. The book then defines the function $\displaystyle \hat{A}: V^* \times V \rightarrow \mathbb{R}$ where $\displaystyle \hat{A}(v^*,v) \equiv <v^*, Av>$ and $\displaystyle A$ is an endomorphism of V.

Since the two vector spaces have the same finite dimension, by the pigeonhole principle we only have to show that $\displaystyle \hat{A}$ is injective (thus implying that it's a bijection).

The book does this by setting $\displaystyle \hat{A}=0 \Rightarrow <v^*,Av>=0 \ \forall v^* \in V^*$ and $\displaystyle v \in V$ (I found this confusing: how does this prove that $\displaystyle \hat{A}$ is injective?)

Since the scalar product is definite (?) this gives $\displaystyle Av=0 \ \forall v \in V$ and thus $\displaystyle A=0$.

Consequently the operation "hat" is an isomorphism.

The last part is what I'm particularly confused about. I don't see how setting it to 0 will show that it's injective.

Sorry for the lengthy post, I figured it would be better if I showed exactly what I've done so far!

Wow! Well, I have the book by Bowen and oh my god! Nothing like physicists, engineers and "mathematics scientists" (??!?) to mess up big time with notation and make cumbersome and horrible VERY SIMPLE and beautiful stuff... it reminded me why I chose, I choose and I will choose mathematics over physics/engineering/whatever forever!

On page 213 they actually define $\displaystyle <v^{*},u>:=v^{*}(u)=$ the action of the map $\displaystyle v^{*}\in V^{*}$ on the vector $\displaystyle u\in V$. This is rather standard notation, and then we get:

$\displaystyle <v^{*},Av>:=v^{*}(Av)$ , so if $\displaystyle <v^{*},Av>=0$ for all $\displaystyle v^{*}\in V^{*}\,,\,v\in V$, then it must be that $\displaystyle Av=0$ for all $\displaystyle v\in V$ (since this is the only possibility for a vector that is mapped by ALL the linear functionals in $\displaystyle V^{*}$ to zero...!), and then $\displaystyle A=0$ and we're done.
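To see concretely why a vector annihilated by ALL functionals must be zero, here is a small sketch (my addition, not the book's) identifying functionals on $\displaystyle \mathbb{R}^3$ with row vectors:

```python
import numpy as np

N = 3
I = np.eye(N)

# Identify each functional v* in V* with a row vector w, via <v*, u> = w @ u.
# If u is killed by EVERY functional, it is killed in particular by the
# coordinate functionals e_i^*, which read off u's coordinates -- so u = 0.
u = np.array([0.0, 2.0, 0.0])      # a nonzero vector, for contrast
print(I @ u)                       # [0. 2. 0.]: e_2^* detects that u != 0

z = np.zeros(N)
assert np.allclose(I @ z, 0.0)     # only the zero vector pairs to 0 with all e_i^*
```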

Tonio

lol, half the battle in maths is deciphering the notation!! And the fraction climbs much higher depending on the lecturer and the subject!!

I'm really sorry but I don't see how that shows it's injective. We know that $\displaystyle \hat{A} : V^* \times V \rightarrow \mathbb{R}$. For this to be injective we need to associate each linear map in $\displaystyle V^*$ and an element in $\displaystyle V$ to one real number (as in each combination of linear map and a vector to a single real number).

However, setting $\displaystyle \hat{A}=0$ and getting $\displaystyle Av=0 \ \forall v \in V$ is then a contradiction, since you can have two elements of $\displaystyle V^* \times V$ mapped to the same number.

For example, $\displaystyle A \begin{pmatrix} 1 \\ 0 \end{pmatrix}=0$ and $\displaystyle A \begin{pmatrix} 0 \\ 1 \end{pmatrix}=0$ so $\displaystyle \hat{A}$ isn't injective (if we're working over $\displaystyle \mathbb{R}^2$).

I'm pretty sure i've got the wrong end of the stick, and i'd really appreciate if you could tell me what's wrong with how i'm thinking about this.

9. Originally Posted by Showcase_22
lol, half the battle in maths is deciphering the notation!! This figure does climb to much higher values depending on the lecturer and the subject!!

I'm really sorry but I don't see how that shows it's injective. We know that $\displaystyle \hat{A} : V^* \times V \rightarrow \mathbb{R}$. For this to be injective we need to associate each linear map in $\displaystyle V^*$ and an element in $\displaystyle V$ to one real number (as in each combination of linear map and a vector to a single real number).

However, setting $\displaystyle \hat{A}=0$ and getting $\displaystyle Av=0 \ \forall v \in V$ is then a contradiction, since you can have two elements of $\displaystyle V^* \times V$ mapped to the same number.

For example, $\displaystyle A \begin{pmatrix} 1 \\ 0 \end{pmatrix}=0$ and $\displaystyle A \begin{pmatrix} 0 \\ 1 \end{pmatrix}=0$ so $\displaystyle \hat{A}$ isn't injective (if we're working over $\displaystyle \mathbb{R}^2$).

I'm pretty sure i've got the wrong end of the stick, and i'd really appreciate if you could tell me what's wrong with how i'm thinking about this.

Ok, let us try to make some order here, shall we? First, the operation $\displaystyle \hat{}$ is, as shown at the top of page 219, a tensor in $\displaystyle \vartheta_1^1(V)$, and it is thus a map from the cartesian product (in this case, the exterior direct product) $\displaystyle V^{*}\times V$ into the definition field $\displaystyle \mathbb{R}$.

We want an isomorphism $\displaystyle \Phi:L(V,V)\rightarrow \vartheta^1_1(V)$ , i.e.: we want to associate with every endomorphism of $\displaystyle V$ a unique tensor in $\displaystyle \vartheta^1_1(V)$ in such a way that this association
is 1-1 and onto AND a vector space homomorphism, aka linear transformation...so far so good? Cool...

Since the involved lin. spaces are isomorphic AND of finite dimension, it is enough to define a linear transformation $\displaystyle \Phi$ as above and show that it is 1-1 $\displaystyle \Longleftrightarrow Ker(\Phi)=\{0\}$.

So let us define our map: let $\displaystyle A\in L(V,V)$ be any element, and define $\displaystyle \Phi(A):=\hat{A}$ , where $\displaystyle \hat{A}$ is the tensor defined by $\displaystyle \hat{A}(v^{*},v):=<v^{*},Av>$...ok?

I know, they did all this in a rather sloppy and cumbersome way in the book...too bad! You should be studying maths and not all this nonsense.

Anyway...we have now to prove that $\displaystyle Ker(\Phi)=\{0\}\Longleftrightarrow \left(\Phi(A)=0 \Longrightarrow A=0\right)\Longleftrightarrow \left(\hat{A}=0\Longrightarrow A=0\right)$ , and

this is why in the book they assume $\displaystyle \hat{A}=0$ in order to conclude $\displaystyle A=0$ . Now, how did they achieve this?

Well, $\displaystyle \hat{A}=0\Longrightarrow <v^{*},Av>=0\,\,\forall v^{*}\in V^{*}\,\,\forall v\in V$ . But remember that this notation merely means $\displaystyle 0=<v^{*},Av>:=v^{*}(Av)$ , i.e. for any $\displaystyle v\in V$ and for any $\displaystyle v^{*}\in V^{*}$, the linear functional $\displaystyle v^{*}$ maps the vector $\displaystyle Av$ to zero. Again, this is true FOR ANY VECTOR $\displaystyle v\in V$ , and from here it MUST BE that $\displaystyle A=0$ , i.e. $\displaystyle A$ is the zero linear transformation.

(In other words: if for some vector $\displaystyle v\in V$ , which a fortiori must be non-zero, we had $\displaystyle Av=u\neq 0$ , then there would exist some $\displaystyle w^{*}\in V^{*}$ s.t. $\displaystyle w^{*}(u)=<w^{*},u>=<w^{*},Av>\neq 0$ , contradicting $\displaystyle <v^{*},Av>=0$ for ALL functionals in $\displaystyle V^{*}$ and ALL vectors in $\displaystyle V$ .)

Please note that there is no a priori relation between $\displaystyle v^{*}$ and $\displaystyle v$ : this is just another clumsy, cumbersome and confusing notation these guys use in their book, instead of the much clearer and non-confusing $\displaystyle \phi\,,\,f$ or something like that to denote elements in $\displaystyle V^{*}$... this is explained on page 203, 8 lines from the bottom.
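The witness functional in that contradiction argument can be exhibited numerically (my sketch, identifying $\displaystyle V$ with $\displaystyle \mathbb{R}^3$ and functionals with row vectors):

```python
import numpy as np

N = 3
I = np.eye(N)

# Suppose A != 0, so Av = u != 0 for some (necessarily nonzero) v.
A = np.zeros((N, N)); A[0, 1] = 1.0   # a nonzero endomorphism
v = I[1]                               # then Av = e_1 != 0
u = A @ v

# Pairing with u itself is a functional w* with <w*, Av> = |u|^2 > 0,
# so A-hat is not the zero tensor.  Contrapositive: A-hat = 0 forces A = 0.
print(u @ (A @ v))  # 1.0
```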

Hope the above clears out most of the fog in this...

Tonio

Thank you Tonio!

I was unaware that a linear map between two finite dimensional vector spaces (over the same field) is 1-1 iff the kernel of the map contains only the 0 vector.

I do understand what it was talking about now!

11. Originally Posted by Showcase_22
Thank you Tonio!

I was unaware that a linear map between two finite dimensional vector spaces (over the same field) is 1-1 iff the kernel of the map contains only the 0 vector.

I do understand what it was talking about now!

In fact this is true even without the finite dimension assumption and over any field whatsoever.

Tonio