
If a monic polynomial is reducible in R[x], then it is reducible in (R/I)[x]

Theorem: Let \(I\) be a proper ideal in the integral domain \(R\) and let \(p(x)\) be a non-constant monic polynomial in \(R[x]\). If \(p(x)\) is reducible in \(R[x]\), then the image of \(p(x)\) in \((R/I)[x]\) can be factored into two polynomials of smaller degree. We will instead prove the contrapositive.

Contrapositive Statement of the Theorem: Let \(I\) be a proper ideal in the integral domain \(R\) and let \(p(x)\) be a non-constant monic polynomial in \(R[x]\). If the image of \(p(x)\) in \((R/I)[x]\) cannot be factored into two polynomials of smaller degree, then \(p(x)\) is irreducible in \(R[x]\).

To prove this, we first need the following theorem.

Theorem: Let $I$ be an ideal of the ring $R$ and let $(I)=I[x]$ denote the ideal of $R[x]$ generated by $I$ (the set of polynomials with coefficients in $I$). Then $$R[x]/(I) \cong (R/I)[x]$$

Proof: There is a natural map $\varphi: R[x] \to (R/I)[x]$ given by reducing each of the coefficients of a polynomial modulo $I$. The definitions of addition and multiplication in these two rings show that $\varphi$ is a ring homomorphism. The kernel is precisely the set of polynomials each of whose coefficients is an element of $I$, which is to say that $\operatorname{ker} \varphi=I[x]=(I)$. Since $\varphi$ is clearly surjective, the First Isomorphism Theorem gives $R[x]/(I) \cong (R/I)[x]$, proving the theorem. $\blacksquare$
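The homomorphism property of $\varphi$ can be illustrated concretely. The following sketch (not part of the proof, just a numerical sanity check) takes $R=\mathbb{Z}$ and $I=(n)$, represents polynomials as coefficient lists, and verifies that reducing coefficients commutes with polynomial multiplication:

```python
# Sketch: for R = Z and I = (n), reducing coefficients mod n commutes with
# polynomial multiplication, illustrating that phi: Z[x] -> (Z/nZ)[x] is a
# ring homomorphism.  Polynomials are coefficient lists, lowest degree first.

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def reduce_mod(p, n):
    """Apply phi: reduce every coefficient modulo n."""
    return [c % n for c in p]

n = 5
a = [3, 7, 2]   # 3 + 7x + 2x^2
b = [4, 1]      # 4 + x

# phi(a * b) should equal phi(a) * phi(b) computed in (Z/nZ)[x]
lhs = reduce_mod(poly_mul(a, b), n)
rhs = reduce_mod(poly_mul(reduce_mod(a, n), reduce_mod(b, n)), n)
print(lhs == rhs)  # True
```

The same check works for addition; together they show that reducing coefficients respects the ring operations, which is the content of "$\varphi$ is a ring homomorphism."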

We now return to the proof of the main theorem.

Proof of Contrapositive Statement of the Theorem: Suppose the image of $p(x)$ in $(R/I)[x]$ cannot be factored into two polynomials of smaller degree, but that $p(x)$ is reducible in $R[x]$. Since $p(x)$ is monic and $R$ is an integral domain, neither factor can be a constant, and the leading coefficients of the factors multiply to $1$, so they are units; after scaling by these units we obtain monic, non-constant polynomials $a(x)$ and $b(x)$ in $R[x]$ such that $p(x)=a(x)b(x)$. By the theorem above, reducing the coefficients modulo $I$ gives the factorization $\bar{p}(x)=\bar{a}(x)\bar{b}(x)$ in $(R/I)[x]$. Since $I$ is proper, $1 \notin I$, so $\bar{a}(x)$ and $\bar{b}(x)$ are again monic of the same degrees as $a(x)$ and $b(x)$; in particular, both have degree smaller than that of $p(x)$, a contradiction. Hence $p(x)$ is irreducible in $R[x]$. $\blacksquare$
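As a quick illustration of how the contrapositive is used in practice (an example added here, not from the proof): take $p(x)=x^2+x+1$ in $\mathbb{Z}[x]$ and $I=(2)$. A polynomial of degree $2$ or $3$ over a field factors into smaller-degree polynomials if and only if it has a root, so it suffices to check $p$ has no root mod $2$:

```python
# p(x) = x^2 + x + 1 is monic.  For degree 2 or 3 over the field Z/pZ,
# "factors into smaller-degree polynomials" is equivalent to "has a root",
# so checking all residues suffices.  No root mod 2 means the image of p
# in (Z/2Z)[x] cannot be factored into smaller-degree polynomials, and
# the theorem gives that p is irreducible in Z[x].

def has_root_mod(coeffs, n):
    """coeffs lists the coefficients, lowest degree first."""
    for r in range(n):
        if sum(c * r**k for k, c in enumerate(coeffs)) % n == 0:
            return True
    return False

p = [1, 1, 1]  # 1 + x + x^2
print(has_root_mod(p, 2))  # False: no root mod 2, so p is irreducible in Z[x]
```

Note the one-way nature of the theorem: a root mod some particular prime (e.g. $x^2+x+1$ has the root $1$ mod $3$) tells us nothing about reducibility in $\mathbb{Z}[x]$.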
