# Thread: show that a quotient ring is an integral domain

1. ## show that a quotient ring is an integral domain

Let A be an integral domain, and let a, b be elements of A. Let B = A[x]/(ax+b), where (ax+b) is the ideal generated by ax+b. Suppose $\displaystyle (a)\cap(b)=(ab)$; show that B is an integral domain.

I think it suffices to show that (ax+b) is prime under the assumption, and hence I need to show that if f(x)g(x) is in (ax+b), then either f(x) or g(x) is in (ax+b).

thank you for your help!
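
As a quick sanity check (a Python sketch, not a proof), take the concrete case $\displaystyle A = \mathbb{Z}, \ a = 2, \ b = 3,$ where $\displaystyle (2)\cap(3)=(6)$ does hold, and brute-force small polynomials: whenever fg is a multiple of 2x+3 in Z[x], one of the factors already is. The coefficient ranges below are arbitrary small test values.

```python
from itertools import product

def in_ideal(f):
    """Is f (coefficient list, lowest degree first) a multiple of 2x+3 in Z[x]?"""
    f = list(f)
    # long division by 2x+3, insisting on integer quotient coefficients
    for i in range(len(f) - 1, 0, -1):
        if f[i] % 2:
            return False          # quotient coefficient would not be an integer
        f[i - 1] -= 3 * (f[i] // 2)
    return f[0] == 0

def mul(f, g):
    """Multiply two polynomials given as coefficient lists."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

# all f = c0 + c1 x + c2 x^2 with |ci| <= 2 (arbitrary small test range)
polys = list(product(range(-2, 3), repeat=3))
for f in polys:
    for g in polys:
        if in_ideal(mul(f, g)):
            assert in_ideal(f) or in_ideal(g), (f, g)
print("(2x+3) behaves like a prime ideal on all small test cases")
```

This is only evidence in one example ring, of course, not a substitute for the general argument.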

2. Originally Posted by frankmelody
Let A be an integral domain, and let a, b be elements of A. Let B = A[x]/(ax+b), where (ax+b) is the ideal generated by ax+b. Suppose $\displaystyle (a)\cap(b)=(ab)$; show that B is an integral domain.

I think it suffices to show that (ax+b) is prime under the assumption, and hence I need to show that if f(x)g(x) is in (ax+b), then either f(x) or g(x) is in (ax+b).

thank you for your help!
first of all we also need to assume that $\displaystyle a \neq 0, \ b \neq 0.$ i've been trying for a couple of hours now to find a simpler proof for this problem but i haven't been able to do so! so i'm just going to give you my "not very simple" proof. we need a lemma:

Lemma: suppose $\displaystyle d \mid a^n c$ and $\displaystyle d \mid b^n c,$ for some $\displaystyle c,d \in A$ and integer $\displaystyle n \geq 0.$ then $\displaystyle d \mid c.$

Proof: by induction on $\displaystyle n.$ if $\displaystyle n=0$ or $\displaystyle c=0,$ there is nothing to prove. so suppose the claim is true for $\displaystyle n-1$ and $\displaystyle d \mid a^nc, \ d \mid b^n c,$ where $\displaystyle c \neq 0.$ so we have $\displaystyle a^n c = rd, \ b^n c = sd,$ for some $\displaystyle r,s \in A,$ which gives us $\displaystyle a^ns = b^nr \in <a> \cap <b> = <ab>$ (note $\displaystyle d \neq 0,$ since otherwise $\displaystyle a^nc=0$ would force $\displaystyle c=0,$ so we may cancel $\displaystyle d$ because $\displaystyle A$ is a domain). hence $\displaystyle a^ns=abu, \ b^nr=abv,$ for some $\displaystyle u,v \in A.$ therefore $\displaystyle a^{n-1}s=bu$ and $\displaystyle b^{n-1}r=av.$ hence $\displaystyle b \mid s$ and $\displaystyle a \mid r$ (for $\displaystyle n=1$ these read $\displaystyle s=bu, \ r=av$ directly; for $\displaystyle n \geq 2$ apply the induction hypothesis with $\displaystyle d:=b, \ c:=s,$ noting $\displaystyle b \mid b^{n-1}s$ trivially, and similarly for $\displaystyle r$). so $\displaystyle s = bt, \ r = az,$ for some $\displaystyle t,z \in A.$ but then $\displaystyle a^n c = rd = azd, \ b^n c = sd = btd,$ and so $\displaystyle a^{n-1}c = zd, \ b^{n-1}c = td,$ and we're done by the induction hypothesis. $\displaystyle \Box$
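
a quick aside: in the familiar case $\displaystyle A = \mathbb{Z}$ the hypothesis $\displaystyle <a> \cap <b> = <ab>$ just says $\displaystyle \gcd(a,b)=1$ (up to sign), so the lemma can be sanity-checked by brute force (a python sketch over small, arbitrarily chosen ranges; not part of the proof):

```python
from math import gcd
from itertools import product

# lemma over Z: if gcd(a,b) = 1, d | a^n c and d | b^n c, then d | c
for a, b in product(range(1, 8), repeat=2):
    if gcd(a, b) != 1:
        continue                      # hypothesis <a> ∩ <b> = <ab> fails
    for c, d, n in product(range(-6, 7), range(1, 10), range(0, 4)):
        if (a ** n * c) % d == 0 and (b ** n * c) % d == 0:
            assert c % d == 0, (a, b, c, d, n)
print("lemma verified on all small integer cases")
```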

now suppose $\displaystyle f(x)g(x) \in <ax + b>,$ for some $\displaystyle f(x),g(x) \in A[x].$ we want to prove that either $\displaystyle f(x) \in <ax + b>$ or $\displaystyle g(x) \in <ax+b>.$ obviously we may assume that both $\displaystyle f(x),g(x)$ are nonzero. let $\displaystyle Q$ be the field of fractions of $\displaystyle A.$ clearly in $\displaystyle Q$ we have $\displaystyle f(-b/a)g(-b/a)=0,$ so we may assume that $\displaystyle f(-b/a)=0.$ hence there exists $\displaystyle h(x) \in Q[x]$ such that $\displaystyle f(x)=(ax+b)h(x),$ which we will call (1). let $\displaystyle f(x)=c_0x^n + \cdots + c_n$ and $\displaystyle h(x)=\frac{d_0}{d}x^{n-1} + \cdots + \frac{d_{n-1}}{d},$ where $\displaystyle c_j, d_j , d \in A$ and $\displaystyle d \neq 0.$ so (1) becomes: $\displaystyle dc_0x^n + \cdots + dc_n = (ax+b)(d_0x^{n-1} + \cdots + d_{n-1}).$ call this (2).

the claim is that $\displaystyle d \mid d_j,$ for all $\displaystyle j,$ which will complete the solution because then $\displaystyle h(x) \in A[x]$ and thus by (1): $\displaystyle f(x) \in <ax+b>.$ here's the proof of this claim:

from (2) we have: $\displaystyle dc_0=ad_0, \ dc_n = bd_{n-1}, \ dc_j = ad_j + b d_{j-1}, \ 1 \leq j \leq n-1.$ call this one (3). now, ignoring $\displaystyle dc_n = bd_{n-1}$ in (3), the rest of the relations can be written as $\displaystyle CX=dY,$ where $\displaystyle C$ is the $\displaystyle n \times n$ lower triangular matrix with $\displaystyle a$ on the main diagonal and $\displaystyle b$ on the subdiagonal, $\displaystyle X$ is the $\displaystyle n \times 1$ vector with entries $\displaystyle d_0, \cdots , d_{n-1},$ and $\displaystyle Y$ is the $\displaystyle n \times 1$ vector with entries $\displaystyle c_0, \cdots , c_{n-1}.$ multiplying $\displaystyle CX=dY$ from the left by $\displaystyle \text{adj}(C)$ gives us $\displaystyle a^n X = \det(C) X = d \cdot \text{adj}(C)Y,$ and so $\displaystyle d \mid a^n d_j,$ for all $\displaystyle j.$ similarly, if we ignore $\displaystyle dc_0=ad_0$ in (3) and write the remaining relations in terms of matrices, we will get $\displaystyle d \mid b^n d_j,$ for all $\displaystyle j.$ therefore $\displaystyle d \mid d_j,$ for all $\displaystyle j,$ by the lemma. $\displaystyle \Box$
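
here's a concrete sanity check of the matrix step (a python sketch with arbitrary test values $\displaystyle a=5, \ b=7, \ n=4$): build $\displaystyle C$ with $\displaystyle a$ on the diagonal and $\displaystyle b$ on the subdiagonal, compute the adjugate by cofactors alone (no division anywhere), and verify $\displaystyle \det(C)=a^n$ and $\displaystyle \text{adj}(C)C = a^nI.$

```python
def minor(M, i, j):
    """Delete row i and column j of the square matrix M (list of lists)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by Laplace expansion along the first row: only +, -, *."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adj(M):
    """Adjugate = transpose of the cofactor matrix; again only +, -, *."""
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, j, i)) for j in range(n)] for i in range(n)]

a, b, n = 5, 7, 4   # arbitrary test values
C = [[a if i == j else b if i == j + 1 else 0 for j in range(n)] for i in range(n)]
assert det(C) == a ** n
A = adj(C)
P = [[sum(A[i][k] * C[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
assert P == [[a ** n if i == j else 0 for j in range(n)] for i in range(n)]
print("adj(C) C = a^n I verified")
```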

3. hi, thank you for your help! I think most of your solution is correct; only the last part of your proof has a problem. Since A is only an integral domain, adj(C) need not exist in A. But I can still get d|d_j following your idea, just by considering the equations one by one. I can solve the problem now, thank you so much!

4. Originally Posted by frankmelody

... Since A is only an integral domain, adj(C) need not exist in A.
that is not true! actually, for the adjoint (adjugate) matrix of a square matrix C over a ring A to exist, we don't even need A to be an integral domain! we only need A to be commutative with identity.

i suggest you take a look at a standard textbook in linear algebra or just google it!
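
to see it concretely, here's a small python sketch over $\displaystyle \mathbb{Z}/6\mathbb{Z}$ (commutative with identity, but not a domain, since $\displaystyle 2 \cdot 3 = 0$): the adjugate is built from cofactors, which use only $\displaystyle +, -, \times,$ and the identity $\displaystyle \text{adj}(M)M = \det(M)I$ still holds. the test matrix is arbitrary.

```python
N = 6  # work in Z/6Z, a commutative ring with identity that is NOT a domain

def minor(M, i, j):
    """Delete row i and column j of M."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant mod N by Laplace expansion: only +, -, *, no division."""
    if len(M) == 1:
        return M[0][0] % N
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M))) % N

def adj(M):
    """Adjugate mod N = transpose of the cofactor matrix."""
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, j, i)) % N for j in range(n)] for i in range(n)]

M = [[1, 2, 3], [4, 5, 0], [2, 2, 1]]   # arbitrary matrix over Z/6Z
d = det(M)
A = adj(M)
P = [[sum(A[i][k] * M[k][j] for k in range(3)) % N for j in range(3)] for i in range(3)]
assert P == [[d if i == j else 0 for j in range(3)] for i in range(3)]
print("adj(M) M = det(M) I holds over Z/6Z")
```

note that here det(M) is even a zero divisor mod 6, so M is not invertible over Z/6Z, yet the adjugate exists and the identity holds.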

5. oh~yep! sorry for my mistake! we do not need division in the definition of the adjugate matrix, thank you!