Let F be a field.
Let f(x) be a polynomial in F[x].
Prove that F[x]/(f(x)) is a field if and only if f(x) is irreducible.
Would appreciate help with this problem.
Peter
show if f(x) is irreducible, (f(x)) is a maximal ideal.
since every polynomial ring over a field F is a euclidean domain (with d(f) = deg f), it is a principal ideal domain. here is why every euclidean domain is a PID:
suppose R is euclidean, and I a non-zero ideal of R. let u be a non-zero element of I such that d(u) is minimal among the non-zero elements of I.
given ANY x in I, we can write x = qu + r, where either d(r) < d(u), or r = 0.
now r = x - qu, which is in I, since x is in I, and u is in I, so qu is in I, so x - qu is in I, that is: r is in I.
if r ≠ 0, then r is a non-zero element of I with d(r) < d(u), contradicting the minimality of d(u). so r = 0. thus x = qu, which is to say x is in (u), so I is contained in (u).
but clearly (u) is contained in I (since u is IN I). thus I = (u), that is: any ideal is principal.
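the division step x = qu + r with d(r) < d(u) is, in F[x], just long division of polynomials. a minimal sketch of it (my own illustration, not part of the proof; coefficients taken in GF(p) for a prime p, polynomials as lists indexed by degree):

```python
def trim(a):
    """Drop trailing zero coefficients so the degree is well-defined."""
    while a and a[-1] == 0:
        a = a[:-1]
    return a

def poly_divmod(x, u, p):
    """Return (q, r) with x = q*u + r in GF(p)[x] and deg(r) < deg(u)."""
    x, u = trim(x[:]), trim(u)
    q = [0] * max(len(x) - len(u) + 1, 1)
    inv_lead = pow(u[-1], p - 2, p)      # inverse of leading coeff mod p (Fermat)
    while len(x) >= len(u) and x:
        shift = len(x) - len(u)
        c = (x[-1] * inv_lead) % p       # next quotient coefficient
        q[shift] = c
        for i, ui in enumerate(u):       # subtract c * x^shift * u(x)
            x[shift + i] = (x[shift + i] - c * ui) % p
        x = trim(x)
    return trim(q), x                    # x is now the remainder r

# divide x^3 + x + 1 by x^2 + 1 over GF(2):
q, r = poly_divmod([1, 1, 0, 1], [1, 0, 1], 2)  # q = [0, 1] (i.e. x), r = [1]
```

here d(r) < d(u) is exactly "deg(r) < deg(u)", which is what forces r = 0 in the minimality argument above.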
so in F[x], any ideal is of the form I = (f(x)) for some polynomial f(x).
so suppose I ⊆ J in F[x]. since I = (f(x)) and J = (g(x)), we have f(x) in (g(x)), so f(x) = r(x)g(x), for some r(x) in F[x], that is: g(x)|f(x).
if f(x) is irreducible, r(x) must be a unit, or g(x) must be a unit.
if r(x) is a unit, (f(x)) = (g(x)), so I = J.
if g(x) is a unit, J = F[x], thus I is maximal.
if I is a maximal ideal of R (a commutative ring with unity), R/I is a field:
suppose K is an ideal of R/I. since we have the surjective ring-homomorphism φ: R-->R/I, it follows that φ^{-1}(K) is an ideal of R containing I. since I is maximal, either φ^{-1}(K) = I, or φ^{-1}(K) = R.
in the first case: K = φ(φ^{-1}(K)) = φ(I) = {0}.
in the second case, K = φ(φ^{-1}(K)) = φ(R) = R/I. the fact that R/I is a field comes from:
if R is a commutative ring with unity whose only ideals are {0} and R, then R is a field:
suppose x in R is non-zero. then (x) is an ideal of R. since (x) ≠ {0}, (x) = R, thus 1 is in (x), hence 1 = xu, for some u in R, so that x is a unit.
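the final step, 1 = xu, can be computed concretely in the simplest PID quotient, Z/(p) (my own illustrative example, not part of the proof): the u is produced by the extended euclidean algorithm.

```python
def ext_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

# in Z/(7), every non-zero x generates the whole ring, so 1 = x*u for some u:
p = 7
units = {x: ext_gcd(x, p)[1] % p for x in range(1, p)}  # x -> x^{-1} mod p
assert all((x * u) % p == 1 for x, u in units.items())
```

the same computation works verbatim in F[x]/(f(x)) with polynomial extended gcd, since gcd(g(x), f(x)) = 1 whenever f(x) is irreducible and does not divide g(x).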
********
on the other hand, suppose F[x]/(f(x)) is a field. then (f(x)) must be a maximal ideal of F[x]: if some ideal J lay strictly between (f(x)) and F[x], its image J/(f(x)) would be a non-zero proper ideal of F[x]/(f(x)), which is impossible, since in a field every non-zero element is a unit, so any non-zero ideal is the whole field.
so suppose we have f(x) = g(x)k(x), for some g(x), k(x) in F[x]. then g(x)|f(x), hence (f(x)) ⊆ (g(x)). since (f(x)) is a maximal ideal of F[x], either:
a) (g(x)) = F[x], in which case 1 is in (g(x)), so 1 = g(x)h(x), for some h(x) in F[x], that is: g(x) is a unit.
b) (g(x)) = (f(x)) in which case g(x) = f(x)h(x), so we get:
f(x) = g(x)k(x) = f(x)h(x)k(x).
since F[x] is an integral domain (every principal ideal domain is), we have cancellation, so 1 = h(x)k(x), so that k(x) is a unit.
so if f(x) = g(x)k(x), one of g(x) or k(x) must be a unit, that is: f(x) is irreducible. (note f(x) is not itself a unit: if it were, (f(x)) = F[x] and the quotient would be the zero ring, not a field.)
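when f(x) does factor, the quotient visibly fails to be a field: a sketch (my own example, coefficients in GF(p), polynomials as lists indexed by degree) showing that x + 1 becomes a zero divisor in GF(2)[x]/(x^2 + 1), since x^2 + 1 = (x + 1)^2 over GF(2).

```python
def poly_mulmod(a, b, f, p):
    """Multiply a*b in GF(p)[x], then reduce mod the monic polynomial f."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    # cancel the leading term repeatedly using a shifted copy of f
    while len(prod) >= len(f):
        c = prod[-1]
        shift = len(prod) - len(f)
        for k, fk in enumerate(f):
            prod[shift + k] = (prod[shift + k] - c * fk) % p
        while prod and prod[-1] == 0:
            prod.pop()
    return prod  # [] represents the zero polynomial

# (x + 1)*(x + 1) ≡ 0 mod (x^2 + 1) over GF(2), so the quotient has zero divisors:
assert poly_mulmod([1, 1], [1, 1], [1, 0, 1], 2) == []
```

a ring with zero divisors cannot be a field, which is the contrapositive of the direction just proved.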
You write: "suppose R is euclidean, and I a non-zero ideal of R. let u in R be such that d(u) is minimal."
I am assuming that you are using the Well Ordering of Z (actually of the non-negative integers) and just taking the minimum of the d(x)?
Is that correct?
Peter
I have followed your proof down to the point where you state:
"if g(x) is a unit, J = F[x], thus I is maximal."
How do we know this? That is, how do we know there is no ideal K such that I ⊊ K ⊊ F[x]?
Can you please clarify how J = F[x] implies that I is maximal.
Peter
a maximal ideal I of a ring R is one such that:
if J is an ideal with I ⊆ J, either J = I, or J = R (in other words, no ideal properly contains a maximal ideal except R itself).
we have shown that for any ideal I generated by an irreducible f(x), if J is an ideal containing I:
J = I, or J = R (which is in this case, F[x]).
that is: there aren't any ideals between (f(x)) and the entire ring, F[x].
another way to see this is:
F[x] is a euclidean domain. euclidean domains are also PIDs. PIDs are UFDs. in a UFD, irreducible = prime.
thus (f(x)) is an ideal generated by a prime element. ideals generated by a prime element are prime ideals. in a PID, non-zero prime ideals are maximal.
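for a concrete instance (my own example, not from the thread): a polynomial of degree 2 or 3 in GF(p)[x] is irreducible iff it has no root in GF(p), so a quick evaluation check shows (x^2 + x + 1) is maximal in GF(2)[x], making GF(2)[x]/(x^2 + x + 1) a field (the field with 4 elements).

```python
def has_root(coeffs, p):
    """coeffs: list indexed by degree. Check for a root in GF(p) by evaluation."""
    return any(sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p == 0
               for a in range(p))

assert has_root([1, 1, 1], 2) is False   # x^2 + x + 1: no root, irreducible over GF(2)
assert has_root([1, 0, 1], 2) is True    # x^2 + 1 = (x + 1)^2: reducible over GF(2)
```

(the root test only settles irreducibility up to degree 3; in higher degrees a polynomial can be rootless yet reducible, e.g. (x^2 + x + 1)^2.)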
***************
it's not "J = F[x]" alone that implies I is maximal; it's that for EVERY ideal J containing I, either J = F[x] or J = I. that is what makes I maximal.