When I have to prove that the subset {(3-i, 2+2i, 4), (2, 2+4i, 3), (1-i, -2i, -1)} is a basis of the complex vector space C^{3}, am I supposed to find an inverse (to obtain the unique solution)?
Thanks in advance for any help you are able to provide.
Hey jojo7777777.
Consider that C as a real vector space is isomorphic to R^2, and then use the techniques for R^n to get a basis (Hint: you will be working in a six-dimensional space R^6, with six vectors in your matrix to row-reduce).
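A sketch of this R^6 idea (not from the thread; pure Python, no libraries): writing the complex coefficient matrix as M = A + iB, its realification is the real 6x6 block matrix [[A, -B], [B, A]], whose six columns are the real images of v and iv for each given vector v, and whose real determinant equals |det_C(M)|^2. So the six real vectors are independent exactly when the complex 3x3 determinant is nonzero.

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

# columns are the three given vectors of C^3
M = [[3 - 1j, 2,      1 - 1j],
     [2 + 2j, 2 + 4j, -2j   ],
     [4,      3,      -1    ]]

A = [[z.real for z in row] for row in M]   # real parts
B = [[z.imag for z in row] for row in M]   # imaginary parts

# realification: [[A, -B], [B, A]]
R = [A[i] + [-b for b in B[i]] for i in range(3)] + \
    [B[i] + A[i] for i in range(3)]

print(det(M))   # complex 3x3 determinant: -12-12i
print(det(R))   # real 6x6 determinant: |-12-12i|^2 = 288
```

Either determinant settles the question, but the 3x3 complex one is far less work by hand.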
My idea was this: given (a,b,c) ∈ ℂ^3, determine d,g,f ∈ ℂ such that
(a,b,c) = d(3-i, 2+2i, 4) + g(2, 2+4i, 3) + f(1-i, -2i, -1).
If I can show this always has a unique solution for d,g,f, that shows the given set is a basis.
But how to solve it for d,g,f?
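One concrete way to solve for d,g,f (a sketch, not from the thread, using Python's built-in complex numbers and a hand-rolled Gaussian elimination; the sample right-hand side (1+2i, -3i, 5) is arbitrary):

```python
def solve3(M, rhs):
    """Solve M x = rhs for a 3x3 complex matrix by Gaussian
    elimination with partial pivoting."""
    A = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented matrix
    n = 3
    for col in range(n):
        # pick the pivot of largest modulus for numerical stability
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    x = [0j] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (A[i][n] - s) / A[i][i]
    return x

# columns of M are the three given vectors
M = [[3 - 1j, 2,      1 - 1j],
     [2 + 2j, 2 + 4j, -2j   ],
     [4,      3,      -1    ]]

a, b, c = 1 + 2j, -3j, 5            # an arbitrary target vector in C^3
d, g, f = solve3(M, [a, b, c])

# check: d*v1 + g*v2 + f*v3 reconstructs (a,b,c)
recon = [sum(M[i][j] * x for j, x in enumerate((d, g, f)))
         for i in range(3)]
print(recon)
```

Since this works for every right-hand side (the matrix turns out to be invertible), the unique solution always exists.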
well if they are linearly independent, then they span a subspace of C^{3} of complex dimension 3, which must be C^{3} itself since:
dim_{C}(C^{3}) = 3.
so use the definition of linear independence:
suppose that a(3-i,2+2i,4) + b(2,2+4i,3) +c(1-i,-2i,-1) = 0 = (0,0,0) = (0+0i,0+0i,0+0i).
we get 3 equations in 3 unknowns (the complex numbers a,b and c):
(3-i)a + 2b + (1-i)c = 0
(2+2i)a + (2+4i)b - (2i)c = 0
4a + 3b - c = 0
this system of equations has a UNIQUE solution (namely: (a,b,c) = (0,0,0)) if and only if the coefficient matrix:

[ 3-i    2     1-i ]
[ 2+2i   2+4i  -2i ]
[ 4      3     -1  ]

is invertible (non-singular), which you may determine by row-reduction and/or finding the determinant.
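A minimal sketch of the row-reduction route (mine, not from the thread): row-reduction over C follows exactly the same rules as over R, only with complex pivots (the first pivot here is 3-i). If the matrix reduces to the identity, it is invertible.

```python
def rref(M):
    """Reduced row echelon form over C (naive, with a small tolerance)."""
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if abs(A[i][c]) > 1e-12), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]        # divide the row by its pivot
        for i in range(rows):
            if i != r:
                A[i] = [x - A[i][c] * y for x, y in zip(A[i], A[r])]
        r += 1
    return A

M = [[3 - 1j, 2,      1 - 1j],
     [2 + 2j, 2 + 4j, -2j   ],
     [4,      3,      -1    ]]

R = rref(M)
# three pivots: R is the identity, so (a,b,c) = (0,0,0) is the only solution
print(R)
```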
i find the determinant to be:
(3-i)(2+4i)(-1) + (2)(-2i)(4) + (1-i)(2+2i)(3) - (1-i)(2+4i)(4) - (-2i)(3)(3-i) - (-1)(2)(2+2i) =
-10-10i -16i + 12 - (24+8i) - (-6-18i) - (-4-4i) =
-10-10i - 16i + 12 - 24-8i + 6+18i + 4+4i = -12-12i ≠ 0, which settles the matter.
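A quick machine check of that determinant (my addition, using Python's built-in complex arithmetic and the same rule-of-Sarrus expansion as above):

```python
# the 3x3 coefficient matrix from the system above
m = [[3 - 1j, 2,      1 - 1j],
     [2 + 2j, 2 + 4j, -2j   ],
     [4,      3,      -1    ]]

# rule of Sarrus: + a11a22a33 + a12a23a31 + a13a21a32
#                 - a13a22a31 - a11a23a32 - a12a21a33
det = (  m[0][0] * m[1][1] * m[2][2]
       + m[0][1] * m[1][2] * m[2][0]
       + m[0][2] * m[1][0] * m[2][1]
       - m[0][2] * m[1][1] * m[2][0]
       - m[0][0] * m[1][2] * m[2][1]
       - m[0][1] * m[1][0] * m[2][2])

print(det)   # (-12-12j), nonzero, so the three vectors form a basis
```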
ie, calculate the determinant. See post 4.
Edit: I believe the original question assumed a knowledge of basis, linear independence, and linear independence of rows and columns of matrices for real F. The point was that they apply to a vector space over a field F, whether F is real or complex. If F is real, rows (or columns) of a matrix are linearly independent if the determinant of the matrix is unequal to zero. The same applies if F is complex, and that's the point.
i agree completely. people often get "so used" to doing linear algebra over the reals that the fact that vector spaces can be defined over ANY field (such as Q, or C, or the field of real rational functions in x, or a finite galois field) gets lost in the mix.
in fact, a lot of linear algebra transfers fairly well to integral domains, but some solutions lie outside of the domain (a system of linear equations with integer coefficients may have fractional solutions, for example).
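To illustrate that last point (my example, using Python's `fractions` module): here is a 2x2 system with integer coefficients whose unique solution exists only in the fraction field Q, not in the domain Z itself.

```python
from fractions import Fraction

# 2x + y = 4
# 4x - y = 3
a11, a12, b1 = 2, 1, 4
a21, a22, b2 = 4, -1, 3

det = a11 * a22 - a12 * a21          # -6: nonzero, but not a unit in Z
x = Fraction(b1 * a22 - a12 * b2, det)   # Cramer's rule over Q
y = Fraction(a11 * b2 - b1 * a21, det)

print(x, y)   # 7/6 5/3 -- fractional, outside the domain Z
```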
i did the calculations explicitly, because i wanted to show we had a 3x3 determinant to evaluate, and not a 6x6 one (as suggested in an earlier post, which would have been horrendous to actually calculate. to be fair, that post recommended row-reduction which i would probably prefer in a 6-dimensional situation).
I have two questions:
1. Does row-reduction (when the vector space is over the complex numbers) obey the "usual rules" - for example, can the first row be divided by 3-i?
2. Does an integral domain exclude fractional solutions? Isn't it only about the absence of zero divisors?
1) yes
2) An integral domain does not exclude fractional solutions: for a ≠ 0, ab = c need not be solvable for b inside the domain (e.g. 2b = 3 in ℤ); the solution b = c/a exists in the field of fractions, and may be fractional.
An integral domain is a commutative ring in which the cancellation law holds: ca = cb → a = b whenever c ≠ 0. This is equivalent to having no divisors of zero:
ca = cb → ca - cb = 0 → c(a-b) = 0 → c = 0 or a-b = 0. So if c ≠ 0, then a = b.
a & b are divisors of zero if a ≠ 0 and b ≠ 0 and ab = 0.
Example of divisors of zero (Perlis):
Let A & B be 2x2 matrices with rows:
A = (-2,1) & (2,-1)
B = (2,3) & (4,6)
Then A ≠ 0 and B ≠ 0 but AB=0
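Checking that example by brute force (my addition; plain Python, representing the matrices as lists of rows):

```python
# Perlis-style zero divisors in the ring of 2x2 integer matrices:
# A != 0 and B != 0, yet AB = 0.
A = [[-2, 1], [2, -1]]
B = [[2, 3], [4, 6]]

AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

print(AB)   # [[0, 0], [0, 0]]
```

So matrix rings are not integral domains, which is why the cancellation-law discussion above does not apply to them.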