
Injectivity of the scalar product

  1. #1
    Showcase_22

    Injectivity of the scalar product

    I want to show that \hat{A}: V^* \times V \rightarrow \mathbb{R} (where V is a vector space and V^* is its dual space) defined by:

    \hat{A}(v^*,v) \equiv <v^*, Av>

    is injective.

    I have a proof from a book, but I don't quite understand what it's doing. It starts like this:

    Let \hat{A}=0. Then <v^*,Av>=0 \ \forall v^* \in V^*, \ v \in V.

    Now since the scalar product is definite, this implies Av=0 \ \forall v \in V and thus A=0.
    Haven't they just proven it for just 0?

    Also, what does "scalar product is definite" mean?

  2. #2
    Tonio
    Quote Originally Posted by Showcase_22 View Post
    I want to show that \hat{A}: V^* \times V \rightarrow \mathbb{R} (where V is a vector space and V^* is its dual space) defined by \hat{A}(v^*,v) \equiv <v^*, Av> is injective.

    What is A here? An invertible operator (matrix), perhaps? Otherwise the statement of the lemma/proposition/theorem is incomplete.

    Tonio



  3. #3
    Showcase_22
    Oh sorry yes!

    A is an endomorphism of V.

    It probably would have helped if I had posted that earlier =S

  4. #4
    NonCommAlg
    I don't think the question is what you gave us. My guess is that the question is to prove that the map \varphi : \text{End}_{\mathbb{R}}V \longrightarrow (V^* \times V)^* defined by \varphi(A)=\hat{A} is injective.
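
    In that reading, note that \varphi is linear in A (a quick check, using only the linearity of each v^*):

    \widehat{A+B}(v^*,v)=<v^*,(A+B)v>=<v^*,Av>+<v^*,Bv>=\hat{A}(v^*,v)+\hat{B}(v^*,v), \qquad \widehat{cA}(v^*,v)=<v^*,cAv>=c\hat{A}(v^*,v),

    so proving \varphi injective amounts to proving \ker \varphi = \{0\}.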

  5. #5
    Tonio
    Quote Originally Posted by NonCommAlg View Post
    I don't think the question is what you gave us. My guess is that the question is to prove that the map \varphi : \text{End}_{\mathbb{R}}V \longrightarrow (V^* \times V)^* defined by \varphi(A)=\hat{A} is injective.


    I agree: this seems way sounder. Anyway, to the OP: can we know which book you took this question from?

    Tonio

  6. #6
    Showcase_22
    Wow! That may well be what I mean, but I have never seen that notation before!

    The book is "Introduction to Vectors and Tensors: Volume 1". The authors are Ray M Bowen and C C Wang.

    I'm trying to show that an isomorphism exists between \vartheta_1^1(V) (the set of tensors of order (1,1)) and L(V;V) (the set of linear maps from V to V).

    The first part works quite well. We know that \dim \vartheta_1^1(V)=N^2 where \dim V = N.

    Now we have to show that \dim L(V;V)=N^2. To do this, let e_1, \ldots , e_N be a basis for V. Define N^2 linear transformations A^k_{\alpha} : V \rightarrow V by

    A^k_{\alpha}e_k=e_{\alpha} for k, \alpha=1, \ldots , N
    A^k_{\alpha} e_p=0 for k \neq p
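
    (To make this concrete, a small sanity check for N=2: A^1_2 sends e_1 \mapsto e_2 and e_2 \mapsto 0, so its matrix with respect to e_1, e_2 is \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}; in general A^k_{\alpha} is the "matrix unit" with a single 1 in position (\alpha, k) and zeros elsewhere.)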

    If A is an arbitrary member of L(V;V) then Ae_k \in V so

    Ae_k=\sum_{\alpha=1}^N A^{\alpha}_k e_{\alpha} where k=1, \ldots ,N.

    But we also know that A^k_{\alpha}e_k=e_{\alpha}. This gives:

    Ae_k=\sum_{\alpha=1}^N A^{\alpha}_k e_{\alpha}=\sum_{\alpha=1}^N A^{\alpha}_k A^k_{\alpha}e_k=\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}e_k

    (here each A^{\alpha}_s on the left of a product is a scalar and each A^s_{\alpha} is one of the transformations; the double sum agrees with the single one because A^s_{\alpha}e_k=0 whenever s \neq k).

    We can rearrange this to get:

    \left( A-\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha} \right)e_k=0 valid \forall e_k \in \{e_1, \ldots ,e_N \}

    But A is a linear map and the e_k form a basis, so the above statement is valid for any v \in V, i.e.

    \left( A-\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha} \right)v =0 valid \forall v \in V

    This implies that A= \sum_{\alpha=1}^N \sum_{s=1}^NA^{\alpha}_s A^s_{\alpha} meaning that the N^2 linear transformations we defined at the start generate L(V;V).

    Now we have to prove that these transformations are linearly independent. We do this by setting

    \sum_{\alpha=1}^N \sum_{s=1}^NA^{\alpha}_s A^s_{\alpha}=0.

    From here we can use the N^2 linear transformations from before to get:

    \sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}(e_p)=\sum_{\alpha=1}^N A^{\alpha}_p e_{\alpha}=0.

    But the e_{\alpha} are basis vectors, hence linearly independent. Therefore A^{\alpha}_p=0 for p, \alpha = 1, \ldots ,N.

    Therefore the set of A^s_{\alpha} is a basis of L(V;V).

    This gives that \dim L(V;V) =\left( \dim V \right)^2=N^2.

    There is also a theorem in the book stating that if two vector spaces have the same dimension then an isomorphism exists between the two. In this case \dim L(V;V)= \dim \vartheta_1^1(V)=N^2 so an isomorphism exists by the theorem.
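
    (If I recall the standard proof of that theorem: pick bases u_1, \ldots, u_N and w_1, \ldots, w_N of the two spaces, define T(u_i)=w_i and extend linearly; T is then a linear bijection, so the spaces are isomorphic.)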

    Finally, I actually want to find an example of an isomorphism between L(V;V) and \vartheta_1^1(V). The book then defines the function \hat{A}: V^* \times V \rightarrow \mathbb{R} where \hat{A}(v^*,v) \equiv <v^*, Av> and A is an endomorphism of V.

    Since the two vector spaces have the same dimension, by the pigeonhole principle we have to show that \hat{A} is injective (thus implying that it's a bijection).
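
    (The precise fact being used here: for a linear map T between finite-dimensional spaces of the same dimension, rank-nullity gives \dim \ker T + \dim \text{im}\, T = \dim(\text{domain}), so T is injective \Longleftrightarrow T is surjective \Longleftrightarrow T is bijective.)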

    The book does this by setting \hat{A}=0 \Rightarrow <v^*,Av>=0 \ \forall v^* \in V^* and v \in V (I found this confusing: how does this prove that \hat{A} is injective?)

    Since the scalar product is definite (?) this gives Av=0 \ \forall v \in V and thus A=0.

    Consequently the operation "hat" is an isomorphism.

    The last part is what I'm particularly confused about. I don't see how setting it to 0 will show that it's injective.

    Sorry for the lengthy post, I figured it would be better if I showed exactly what I've done so far!

  7. #7
    Tonio
    Quote Originally Posted by Showcase_22 View Post
    [the computation showing that \dim L(V;V)=N^2, quoted in full from the post above]

    I find it weird that you went through all this trouble to prove a very elementary fact from linear algebra which, I presume, must be assumed when you reach the necessary level to mess with tensors...


    Quote Originally Posted by Showcase_22 View Post
    There is also a theorem in the book stating that if two vector spaces have the same dimension then an isomorphism exists between the two.


    "...two vector spaces OVER THE SAME FIELD..."

    Quote Originally Posted by Showcase_22 View Post
    The book does this by setting \hat{A}=0 \Rightarrow <v^*,Av>=0 \ \forall v^* \in V^*, v \in V [...] The last part is what I'm particularly confused about. I don't see how setting it to 0 will show that it's injective.

    Wow! Well, I have the book by Bowen, and oh my god! Nothing like physicists, engineers and "mathematics scientists" (??!?) to mess up big time with notation and make VERY SIMPLE and beautiful stuff cumbersome and horrible... It reminded me why I chose, I choose and I will choose mathematics over physics/engineering/whatever forever!

    On page 213 they actually define <v^{*},u>:=v^{*}(u) = the action of the map v^{*}\in V^{*} on the vector u\in V. This is rather standard notation, and then we get:

    <v^{*},Av>:=v^{*}(Av), so if <v^{*},Av>=0 for all v^{*}\in V^{*}\,,\,v\in V, then it must be that Av=0 for all v\in V (since the zero vector is the only vector that is mapped to zero by ALL the linear functionals in V^{*}!), and then A=0 and we're done.
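
    To spell out that parenthetical claim: if some u=Av were non-zero, we could extend it to a basis u, w_2, \ldots, w_N of V and take the dual-basis functional u^{*} with u^{*}(u)=1 and u^{*}(w_i)=0; then <u^{*},Av>=u^{*}(u)=1\neq 0, contradicting the hypothesis. Hence Av=0 for every v, i.e. A=0.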

    Tonio

  8. #8
    Showcase_22
    lol, half the battle in maths is deciphering the notation!! This figure does climb to much higher values depending on the lecturer and the subject!!

    I'm really sorry, but I don't see how that shows it's injective. We know that \hat{A} : V^* \times V \rightarrow \mathbb{R}. For this to be injective we need distinct pairs (a linear map in V^* together with a vector in V) to be sent to distinct real numbers.

    However, setting \hat{A}=0 and getting Av=0 \ \forall v \in V is then a contradiction, since you can have two elements of V^* \times V mapped to the same number.

    For example, A \begin{pmatrix} 1 \\ 0 \end{pmatrix}=0 and A \begin{pmatrix} 0 \\ 1 \end{pmatrix}=0 so \hat{A} isn't injective (if we're working over \mathbb{R}^2).

    I'm pretty sure I've got the wrong end of the stick, and I'd really appreciate it if you could tell me what's wrong with how I'm thinking about this.

  9. #9
    Tonio
    Quote Originally Posted by Showcase_22 View Post
    I'm really sorry, but I don't see how that shows it's injective. [...] I'm pretty sure I've got the wrong end of the stick, and I'd really appreciate it if you could tell me what's wrong with how I'm thinking about this.

    Ok, let us try to make some order here, shall we? First, \hat{A} is, as shown at the top of page 219, a tensor in \vartheta_1^1(V), and thus a map from the cartesian product (in this case, the exterior direct product) V^{*}\times V into the base field \mathbb{R}.

    We want an isomorphism \Phi:L(V,V)\rightarrow \vartheta^1_1(V) , i.e.: we want to associate with every endomorphism of V a unique tensor in \vartheta^1_1(V) in such a way that this association
    is 1-1 and onto AND a vector space homomorphism, aka linear transformation...so far so good? Cool...

    Since the spaces involved have the same finite dimension, it is enough to define a linear transformation \Phi as above and show that it is 1-1, which is equivalent to Ker(\Phi)=\{0\}.
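
    (Why the kernel criterion captures 1-1, in one line: if \Phi(A)=\Phi(B) then by linearity \Phi(A-B)=0, i.e. A-B\in Ker(\Phi)=\{0\}, so A=B; the converse is clear since \Phi(0)=0.)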

    So let us define our map: let A\in L(V,V) be any element, and define \Phi(A):=\hat{A}, where \hat{A} is the tensor defined by \hat{A}(v^{*},v):=<v^{*},Av>... ok?

    I know, they did all this in a rather sloppy and cumbersome way in the book... too bad! You should be studying maths and not all this nonsense.


    Anyway... we now have to prove that Ker(\Phi)=\{0\}\Longleftrightarrow \left(\Phi(A)=0 \Longrightarrow A=0\right)\Longleftrightarrow \left(\hat{A}=0\Longrightarrow A=0\right), and this is why in the book they assume \hat{A}=0 in order to conclude A=0. Now, how did they achieve this?

    Well, \hat{A}=0\Longrightarrow <v^{*},Av>=0\,\,\forall v^{*}\in V^{*}\,,\,\forall v\in V. But remember that this notation merely means 0=<v^{*},Av>:=v^{*}(Av), i.e. for any v\in V and any v^{*}\in V^{*}, the linear functional v^{*} maps the vector Av to zero. Again, this is true FOR ANY VECTOR v\in V, and from here it MUST BE that A=0, i.e. A is the zero linear transformation.

    (In other words: if for some vector v\in V, which a fortiori must be non-zero, we had Av=u\neq 0, then there would exist some w^{*}\in V^{*} s.t. w^{*}(u)=<w^{*},u>=<w^{*},Av>\neq 0, contradicting <v^{*},Av>=0 for ALL functionals in V^{*} and ALL vectors in V. Please note that there is no a priori relation between v^{*} and v: this is just another clumsy, cumbersome and confusing notation these guys use in their book, instead of the much clearer \phi\,,\,f or something like that to denote elements in V^{*}... this is explained on page 203, 8 lines from the bottom.)

    Hope the above clears out most of the fog in this...

    Tonio

  10. #10
    Showcase_22
    Thank you, Tonio!

    I was unaware that a linear map between two finite dimensional vector spaces (over the same field) is 1-1 iff the kernel of the map contains only the 0 vector.

    I do understand what it was talking about now!

  11. #11
    Tonio
    Quote Originally Posted by Showcase_22 View Post
    I was unaware that a linear map between two finite dimensional vector spaces (over the same field) is 1-1 iff the kernel of the map contains only the 0 vector.


    In fact this is true even without the finite dimension assumption and over any field whatsoever.
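
    (Indeed, only linearity is used: T(v)=T(w)\Longleftrightarrow T(v-w)=0\Longleftrightarrow v-w\in Ker(T), so Ker(T)=\{0\} forces v=w, with no dimension count and no restriction on the field.)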

    Tonio

