1 Attachment(s)

Field Theory - Dummit and Foote - Chapter 13 - Theorem 6

I am trying to understand the proof of Theorem 6 in Chapter 13 of Dummit and Foote.

Theorem 6 states the following: (see attachment)

=====================================================================================

**Theorem 6.** Let F be a field and let $\displaystyle p(x) \in F[x] $ be an irreducible polynomial. Suppose K is an extension field of F containing a root $\displaystyle \alpha $ of $\displaystyle p(x): \ p( \alpha ) = 0 $. Let $\displaystyle F ( \alpha ) $ denote the subfield of K generated over F by $\displaystyle \alpha $.

Then $\displaystyle F( \alpha ) \cong F[x] / (p(x)) $.

=====================================================================================

The proof then begins as follows:

=====================================================================================

Proof: There is a natural homomorphism

$\displaystyle \phi : F[x] \longrightarrow F( \alpha ) \subseteq K $

$\displaystyle a(x) \longmapsto a( \alpha ) $

obtained by mapping F to itself by the identity map, sending x to $\displaystyle \alpha $, and then extending so that the map is a ring homomorphism (i.e. the polynomial a(x) in x maps to the polynomial $\displaystyle a( \alpha ) $ in $\displaystyle \alpha $).

Since $\displaystyle p( \alpha ) = 0 $ by assumption, the element p(x) is in the kernel of $\displaystyle \phi $, so we obtain an induced homomorphism ( also denoted by $\displaystyle \phi $ ):

$\displaystyle \phi : F[x]/(p(x)) \longrightarrow F( \alpha ) $ ... ... ... (1)

etc etc

=====================================================================================

My problem is in understanding the last sentence above: how exactly is the homomorphism shown in (1) induced by what comes before it, and why is it essentially the same as the natural homomorphism defined earlier? (Since it is also called $\displaystyle \phi $, I am assuming the two homomorphisms are meant to correspond.)

Also, it is subsequently shown that (1) above is an isomorphism - but how can it be a bijection when there are cosets on the left but polynomial evaluations on the right?

Can someone please clarify this situation for me ... perhaps using an example to make the explanation tangible?

Peter

Re: Field Theory - Dummit and Foote - Chapter 13 - Theorem 6

Consider the mapping:

$\displaystyle \phi:\mathbb{R}[x] \to \mathbb{C}$ given by $\displaystyle \phi(f(x)) = \phi(i)$.

What is the image $\displaystyle \phi(x^4 - 1)$? Why would you expect this? What is the maximum possible degree of a "polynomial in $\displaystyle i$"? Does this have anything to do with the minimal polynomial of $\displaystyle i$ (and just what is this minimal polynomial, anyway)? What is the relationship between the maximal degree of such an $\displaystyle i$-polynomial and the dimension of $\displaystyle \mathbb{C}$ as a real vector space?

Express the complex numbers as a quotient ring of the ring of real polynomials. Which coset in the quotient ring is the pre-image of $\displaystyle i$? Explain how the multiplication in the quotient ring corresponds to the "usual" multiplication in $\displaystyle \mathbb{C}$ using the distributive laws of a field, and the fact that $\displaystyle i^2 = -1$.
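(A quick way to experiment with the intended evaluation map $\displaystyle \phi(f(x)) = f(i)$ is to evaluate polynomials at $\displaystyle i$ numerically. The sketch below is my own illustration in Python, not part of the thread; `phi` is a hypothetical helper name.)

```python
# Sketch (my own illustration): the evaluation map phi : R[x] -> C
# sending f(x) to f(i), using Python's complex literal 1j for i.

def phi(coeffs):
    """Evaluate a real polynomial at i; coeffs[k] is the coefficient of x**k."""
    return sum(c * (1j ** k) for k, c in enumerate(coeffs))

# x^4 - 1  ->  i^4 - 1 = 1 - 1 = 0, as expected: x^2 + 1 divides x^4 - 1.
print(phi([-1, 0, 0, 0, 1]))   # 0j

# Powers of i cycle through 1, i, -1, -i, so every value phi(f) collapses
# to the form a + bi -- matching dim_R(C) = 2.
print(phi([2, 3, 5]))          # 2 + 3i + 5*i^2 = -3 + 3i
```

The collapse of arbitrary-degree polynomials to expressions of the form a + bi is exactly the point of Deveno's questions about the maximal degree of an "i-polynomial".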

Re: Field Theory - Dummit and Foote - Chapter 13 - Theorem 6

Quote:

Originally Posted by

**Deveno** Consider the mapping:

$\displaystyle \phi:\mathbb{R}[x] \to \mathbb{C}$ given by $\displaystyle \phi(f(x)) = \phi(i)$.

What is the image $\displaystyle \phi(x^4 - 1)$? Why would you expect this? What is the maximum possible degree of a "polynomial in $\displaystyle i$"? Does this have anything to do with the minimal polynomial of $\displaystyle i$ (and just what is this minimal polynomial, anyway)? What is the relationship between the maximal degree of such an $\displaystyle i$-polynomial and the dimension of $\displaystyle \mathbb{C}$ as a real vector space?

Express the complex numbers as a quotient ring of the ring of real polynomials. Which coset in the quotient ring is the pre-image of $\displaystyle i$? Explain how the multiplication in the quotient ring corresponds to the "usual" multiplication in $\displaystyle \mathbb{C}$ using the distributive laws of a field, and the fact that $\displaystyle i^2 = -1$.

Thanks Deveno, most helpful.

Will reflect on and work on your suggested example.

Thanks for helping me to go forward in this topic.

Peter

1 Attachment(s)

Re: Field Theory - Dummit and Foote - Chapter 13 - Theorem 6

Hi Deveno,

Did you mean the mapping:

$\displaystyle \phi : \mathbb{R}[x] \rightarrow \mathbb{C} $ given by $\displaystyle \phi (f(x)) = f(i) $

(I will assume you meant this and there was a typo - but if I am wrong then please inform me since I have misunderstood something)

__Image of $\displaystyle \phi (x^4 - 1) $__

$\displaystyle \phi (x^4 - 1) = \phi ( (x^2 - 1) (x^2 + 1) ) = (i^2 - 1) (i^2 + 1) = (-2) (0) = 0 $

Here the reducible polynomial $\displaystyle x^4 - 1 $ factors into the minimal polynomial $\displaystyle x^2 + 1 $ of $\displaystyle i $ and the reducible factor $\displaystyle x^2 - 1 $.

Maximal degree of such an i-polynomial and the dimension of $\displaystyle \mathbb{C} $ as a vector space ... ... hmmm.. need more help here ... ...

Complex numbers as a quotient ring of the ring of real polynomials - $\displaystyle \mathbb{R}[x]/<x^2 + 1> $

I did some reading to inform myself regarding your last few questions and found Hungerford - "Abstract Algebra - An Introduction" an instructive text.

The coset of the quotient ring that is the pre-image of i is $\displaystyle x + <x^2 + 1> $, written $\displaystyle [x] $ in Hungerford - "Abstract Algebra - An Introduction", page 131 (**see attached**)

Last part - also see Hungerford - "Abstract Algebra - An Introduction page 131 (**see attached**)
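(The correspondence between cosets modulo $\displaystyle x^2 + 1 $ and complex numbers can be spot-checked numerically. The Python sketch below is my own illustration, not from Hungerford; `coset_mul` is a hypothetical helper that multiplies coset representatives $\displaystyle a + bx $ and reduces using $\displaystyle x^2 = -1 $.)

```python
# Cosets a + bx + <x^2 + 1> are represented as pairs (a, b).
# Multiply representatives as polynomials, then reduce using x^2 = -1.

def coset_mul(p, q):
    """(a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2  ->  (ac - bd) + (ad + bc)x."""
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

# The coset of x squares to the coset of -1: x * x = x^2 = -1 mod x^2 + 1.
print(coset_mul((0, 1), (0, 1)))   # (-1, 0)

# Matches (2 + 3i)(4 + 5i) = 8 + 10i + 12i + 15i^2 = -7 + 22i.
print(coset_mul((2, 3), (4, 5)))   # (-7, 22)
```

This is exactly Deveno's last question: multiplication of cosets, computed via the distributive law plus the relation $\displaystyle x^2 = -1 $, reproduces the usual multiplication in $\displaystyle \mathbb{C} $ with $\displaystyle i^2 = -1 $.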

Peter

Re: Field Theory - Dummit and Foote - Chapter 13 - Theorem 6

Yes, that was a typo, my apologies.

If you are going to undertake studying field theory, you need at least a LITTLE linear algebra. I will give a condensed version of some basic facts you need:

A VECTOR SPACE is an abelian group (V,+) along with an action (also called a scalar multiplication) of a field F upon V subject to the following rules:

1) The action is an additive homomorphism:

a(u + v) = au + av, for all a in F, and u,v in V.

2) The action respects both additions:

(a + b)u = au + bu, for all a,b in F, and u in V (note the addition on the left is in F, and the addition on the right is in V)

3) The action restricted to F* is a group action of F* on V (as a set):

a(bu) = (ab)u

1u = u

(we also have 0u = 0, but this can be proven from 0u = (0+0)u = 0u + 0u, showing 0u is the additive identity of V, which is unique).

This should remind you of the standard rules for defining R-modules (a vector space is simply an F-module).

Two key concepts in linear algebra you will need over and over again are:

a) Spanning: a set S = {u_{1},u_{2},....,u_{n}} SPANS a vector space V, if every element of V can be written as an F-linear combination of elements in S, that is: any v in V is of the form:

v = c_{1}u_{1} + c_{2}u_{2} +....+c_{n}u_{n} with the c_{j} in F. That is, S is a generating set for V.

b) F-linear independence: a subset S = {u_{1},u_{2},....,u_{n}} of V is called linearly independent over F, if:

c_{1}u_{1} + c_{2}u_{2} +....+c_{n}u_{n} = 0 forces c_{1} = c_{2} =..... = c_{n} = 0

(that is, none of the elements of S can be written as an F-linear combination of the remaining elements).

If a set S satisfies BOTH (a) and (b), it is called a BASIS for V. While a given vector space can have many DIFFERENT bases, the following is true:

The cardinality of any basis set for a vector space V is invariant, and is defined to be the DIMENSION of V.
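(Both spanning and independence reduce to rank conditions on the matrix whose columns are the given vectors, which makes them easy to test numerically. The sketch below is my own illustration using numpy; the helper names are hypothetical.)

```python
import numpy as np

def is_independent(vectors):
    """Columns are linearly independent iff the rank equals the number of vectors."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

def spans(vectors, dim):
    """Vectors span F^dim iff the rank of the column matrix equals dim."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == dim

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(is_independent([e1, e2]), spans([e1, e2], 2))   # True True: {e1, e2} is a basis
print(is_independent([e1, 2 * e1]))                   # False: 2*e1 depends on e1
```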

For fields, we have the following important result: an extension field E of a base field F is a vector space over F (with scalar multiplication given by the field multiplication of E). The DEGREE of this extension is defined to be: [E:F] = dim_{F}(E).

In this example I just gave, we have: dim_{R}(C) = 2, as we can take a basis for C over R to be the set {1,i} (in other words we have the identification of C with the real vector space R^{2}:

(a,b) ↔ a+bi, a,b in R).

This identification should also remind you of this one:

a+bi ↔ a + bx + (x^{2} + 1), identifying C with R[x]/(x^{2} + 1).
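(One concrete way to see both identifications at once: multiplication by a+bi acts on the basis {1, i} of C as the 2x2 matrix [[a, -b], [b, a]], and matrix multiplication then reproduces complex multiplication. The numpy sketch below is my own illustration, not part of the original post.)

```python
import numpy as np

def as_matrix(a, b):
    """Represent a + bi by its multiplication action on the basis {1, i} of C = R^2."""
    return np.array([[a, -b],
                     [b,  a]])

# i corresponds to [[0, -1], [1, 0]], and i^2 = -1 becomes M @ M = -I.
M = as_matrix(0, 1)
print(M @ M)            # [[-1  0] [ 0 -1]]

# (2 + 3i)(4 + 5i) = -7 + 22i, read off from the first column of the product.
P = as_matrix(2, 3) @ as_matrix(4, 5)
print(P[:, 0])          # [-7 22]
```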

It turns out that R[x] is ALSO a vector space over R, and that the ideal generated by x^{2}+1 is a SUBSPACE (F-submodule). A subspace U of a vector space V is any set such that:

(U,+) is a subgroup of (V,+)

the field action takes values in U when restricted to U (that is, U is closed under scalar multiplication).

This is often characterized by the following 3 criteria:

1) U is non-empty (equivalently, 0 is in U)

2) if u,v are in U, so is u+v

3) if a is any element of F, and u is in U, au is in U

Thus the real line (for example) is a subspace of the real vector space C.
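(The claim that the ideal generated by x^{2}+1 is a subspace of R[x] can be spot-checked against the three criteria above. The sketch below is my own illustration using sympy; membership in the ideal is tested by divisibility, i.e. zero remainder on division by x^{2}+1.)

```python
from sympy import symbols, rem, expand, Rational

x = symbols('x')
m = x**2 + 1

def in_ideal(f):
    """f lies in <x^2 + 1> iff the remainder on division by x^2 + 1 is 0."""
    return rem(expand(f), m, x) == 0

f = (x**2 + 1) * (x + 3)        # two sample elements of the ideal
g = (x**2 + 1) * (x**2 - 2)

print(in_ideal(0))                    # criterion 1: 0 is in the ideal
print(in_ideal(f + g))                # criterion 2: closed under addition
print(in_ideal(Rational(5, 2) * f))   # criterion 3: closed under scalar multiples
```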

Re: Field Theory - Dummit and Foote - Chapter 13 - Theorem 6

Thanks Deveno

Your post is most helpful as usual.

I will work through the post carefully

I must acknowledge that my knowledge of linear algebra needs improving ... I generally try to work through the interesting material on algebraic structures with minimal study of linear algebra, but that approach seems to fail me quite often! I must spend more time on linear algebra.

Peter

Re: Field Theory - Dummit and Foote - Chapter 13 - Theorem 6

Well vector spaces are important for a number of reasons:

1) They are VERY useful for modelling. If a phenomenon we observe is "continual", we might model it with a continuous function. Over short periods of time, a linear function is a good estimate. Linear functions are very well-behaved, and help us understand geometrically what is going on. Many things can be represented as vector spaces: economic transactions, color values, "real space" (like what the Earth and Sun are in), polynomials, weather patterns....it's a long list.

2) Vectors are the natural language for expressing functions of more than one variable.

3) Linear algebra is a basic tool for understanding other algebraic structures, such as fields, k-algebras, and representations. Many of the important theorems for groups (for example) have linear algebra counterparts (which should not be surprising, since vector spaces are abelian groups with "extra structure").

4) Linear algebra is an efficient way to track independent data streams simultaneously. Almost every important computer language has built-in functions for vectors and matrices.

5) Linear algebra is one of the most beautiful subjects in all of mathematics. I sometimes like to characterize it thus: "Everything you hope will be true, turns out to be." It's very CLEAN. Not only is the theory itself very elegant, but it is very practical, and if you can do ordinary arithmetic, you can do linear algebra.

6) Linear algebra affords some really neat insights into how two different structures can inform each other: for example, the study of how a single linear transformation affects a vector space turns out to be intimately related to the study of polynomials: a pair (V,T) of a vector space and a single linear transformation T is essentially the same thing as an F[x]-module V. Not only is this fascinating to study, but resolving a vector space into an eigenbasis also greatly simplifies calculations (always a plus!). Things like determinants capture the essence of what it means to "calculate area/volume" (something that proves to be rather sophisticated when it comes to even low-dimensional objects like a torus or a hyperboloid).

Re: Field Theory - Dummit and Foote - Chapter 13 - Theorem 6

Thanks Deveno ... The last post is interesting and helpful ...

Yes, I must revise vector spaces ... And indeed review linear algebra in general

Peter