
If a polynomial is reducible in R[x], then its image in (R/I)[x] is reducible

Theorem: Let \(I\) be a proper ideal in the integral domain \(R\) and let \(p(x)\) be a non-constant monic polynomial in \(R[x]\). If \(p(x)\) is reducible in \(R[x]\), then the image of \(p(x)\) in \((R/I)[x]\) can be factored into two polynomials of smaller degree. We will prove the contrapositive of this statement.

Contrapositive Statement of the Theorem: Let \(I\) be a proper ideal in the integral domain \(R\) and let \(p(x)\) be a non-constant monic polynomial in \(R[x]\). If the image of \(p(x)\) in \((R/I)[x]\) cannot be factored into two polynomials of smaller degree, then \(p(x)\) is irreducible in \(R[x]\).
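As a quick illustration of how this criterion is used in practice (a standard example, not from the original statement): take \(R=\mathbb{Z}\), \(I=2\mathbb{Z}\) and \(p(x)=x^2+x+1\). The image \(\bar{p}(x)=x^2+x+1\) in \((\mathbb{Z}/2\mathbb{Z})[x]\) has no root, since $$\bar{p}(0)=1 \quad\text{and}\quad \bar{p}(1)=1+1+1=1 \quad \text{in } \mathbb{Z}/2\mathbb{Z},$$ so it cannot be written as a product of two polynomials of degree \(1\), i.e. of smaller degree. By the theorem, \(p(x)\) is irreducible in \(\mathbb{Z}[x]\).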

To do this, we first need to prove the following theorem.

Theorem: Let $I$ be an ideal of the ring $R$ and let $(I)=I[x]$ denote the ideal of $R[x]$ generated by $I$ (the set of polynomials with coefficients in $I$). Then $$R[x]/(I) \cong (R/I)[x]$$

Proof: There is a natural map $\varphi: R[x] \to (R / I)[x]$ given by reducing each coefficient of a polynomial modulo $I$. The definitions of addition and multiplication in these two rings show that $\varphi$ is a ring homomorphism, and it is clearly surjective. The kernel is precisely the set of polynomials each of whose coefficients lies in $I$, which is to say that $\operatorname{ker} \varphi=I[x]=(I)$. The First Isomorphism Theorem now gives $R[x]/(I) \cong (R/I)[x]$, proving the theorem. $\blacksquare$
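To see the map concretely (an illustrative example, not part of the original proof), take $R=\mathbb{Z}$ and $I=3\mathbb{Z}$. Then $\varphi: \mathbb{Z}[x] \to (\mathbb{Z}/3\mathbb{Z})[x]$ reduces each coefficient modulo $3$; for instance $$\varphi\left(4x^2+7x+9\right)=x^2+x \quad \text{in } (\mathbb{Z}/3\mathbb{Z})[x],$$ and the theorem says $\mathbb{Z}[x]/3\mathbb{Z}[x] \cong (\mathbb{Z}/3\mathbb{Z})[x]$.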

Now we return to the proof of the main theorem.

Proof of the Contrapositive Statement of the Theorem: Suppose the image of $p(x)$ cannot be factored into two polynomials of smaller degree in $(R/I)[x]$, but that $p(x)$ is reducible in $R[x]$. Since $p(x)$ is monic and $R$ is an integral domain, this means there are monic, non-constant polynomials $a(x)$ and $b(x)$ in $R[x]$ such that $p(x)=a(x) b(x)$: in any factorization into non-units the leading coefficients of the two factors multiply to $1$, hence are units, so each factor may be scaled to be monic. By the theorem above, reducing the coefficients modulo $I$ gives a factorization $\bar{p}(x)=\bar{a}(x)\bar{b}(x)$ in $(R/I)[x]$. Because $a(x)$ and $b(x)$ are monic and $I$ is a proper ideal (so $1 \notin I$), their images $\bar{a}(x)$ and $\bar{b}(x)$ have the same degrees as $a(x)$ and $b(x)$; thus $\bar{p}(x)$ factors into two polynomials of smaller degree, a contradiction. Hence $p(x)$ is irreducible in $R[x]$. $\blacksquare$
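A remark on the monic hypothesis (an added example, not from the original post): it cannot be dropped. Take $p(x)=2x$ in $\mathbb{Z}[x]$ and $I=3\mathbb{Z}$. Then $p(x)=2 \cdot x$ is reducible in $\mathbb{Z}[x]$, since both $2$ and $x$ are non-units there, yet its image $2x$ in $(\mathbb{Z}/3\mathbb{Z})[x]$ has degree $1$ and so cannot be factored into two polynomials of smaller degree. The argument above breaks down because $2x$ cannot be written as a product of two monic non-constant polynomials in $\mathbb{Z}[x]$.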
