
Posts

Hierarchy Theorems (Time, Space, Alternation)

There are many hierarchy theorems, based on time, space, nondeterminism, randomization, etc. Time Hierarchy Theorem (Version 1) : If $f, g$ are time-constructible functions satisfying $f(n) \log f(n)=o(g(n))$, then $$ DTIME(f(n)) \subsetneq DTIME(g(n)) $$ Proof:  Consider the following Turing machine $D$: "On input $x$, run the Universal TM $\mathcal{U}$ for $f(|x|)\log (f(|x|))$ steps to simulate the execution of $M_x$ on $x$. If $\mathcal{U}$ outputs some bit $b \in\{0,1\}$ within this time, then output the opposite answer (i.e., output $1-b$). Else output $0$." Here $M_x$ is the machine represented by the string $x$.      By definition, $D$ halts within $f(n)\log (f(n))$ steps, and since $f(n)\log f(n)=o(g(n))$, the language $L$ decided by $D$ is in $DTIME(g(n))$. Claim:  $L \notin DTIME(f(n))$. Proof:  For contradiction's sake, assume that there is some TM $M$ and constant $c$ such that $M$, given any input $x \in\{0,1\}^*$, halts within $cf(|x|)$ steps and outputs $D(x)$.      The time to si
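The diagonalization step can be sketched in code. This is only a toy illustration, not a real universal simulator: `simulate(code, inp, budget)` is a hypothetical helper standing in for running $\mathcal{U}$ for a bounded number of steps, and `toy_simulate` is an invented stand-in "machine" used just to exercise the answer-flipping.

```python
import math

# Toy sketch of the diagonalizing machine D. `simulate(code, inp, budget)`
# is a hypothetical stand-in for running the universal TM U for at most
# `budget` steps; it returns the simulated machine's output bit, or None
# if the simulation does not finish in time.
def diagonalizer(x, simulate, f):
    n = len(x)
    budget = int(f(n) * math.log2(max(f(n), 2)))  # f(|x|) log f(|x|) steps
    b = simulate(x, x, budget)                    # simulate M_x on x itself
    if b in (0, 1):
        return 1 - b                              # flip: D(x) != M_x(x)
    return 0                                      # simulation ran out of time

# Illustrative "machine": outputs the parity of |inp|, halting in |inp| steps.
def toy_simulate(code, inp, budget):
    return (len(inp) % 2) if len(inp) <= budget else None

f = lambda n: n * n
print(diagonalizer("101", toy_simulate, f))  # flips parity(3) = 1, prints 0
```

The point of the sketch is only the structure of $D$: simulate within the time budget, then disagree with whatever was simulated.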
Recent posts

Gauss-Lucas Theorem

  Theorem : If all zeros of a polynomial $P(z)$ lie in a half-plane, then all zeros of the derivative $P'(z)$ lie in the same half-plane. Proof: Let $P(z)$ be any polynomial of degree $n$. Then $$P(z)=a(z-a_1)(z-a_2)\cdots(z-a_n)$$ where $a_1,a_2,\dots,a_n$ are the zeros of $P(z)$. Hence $$\frac{P'(z)}{P(z)}=\frac1{z-a_1}+\frac1{z-a_2}+\cdots+\frac1{z-a_n}$$ Suppose the half-plane $H$ is defined as the part of the plane where $$Im\frac{z-a}{b}<0$$ If $a_k$ is in $H$ and $z$ is not, then $$Im\frac{z-a_k}{b}=Im\frac{z-a}{b}-Im\frac{a_k-a}{b}>0$$ But the imaginary parts of reciprocal numbers have opposite signs. Therefore, under the same assumption, $Im\ b(z-a_k)^{-1}<0$. Since this is true for all $k$, we conclude that $$Im\frac{bP'(z)}{P(z)}=\sum_{k=1}^{n} Im\frac{b}{z-a_k}<0$$ and consequently $P'(z)\neq 0$. Hence no $z$ outside $H$ is a root of $P'(z)$, so all roots of $P'(z)$ lie in $H$. $\blacksquare$
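The theorem is easy to sanity-check numerically (a spot check with `numpy`, not part of the proof): pick roots in the closed upper half-plane $Im\, z \ge 0$ and verify that the roots of the derivative stay there.

```python
import numpy as np

# Numeric spot check (not a proof): if every root of P has Im(z) >= 0,
# the Gauss-Lucas theorem says every root of P' does too.
roots = [1j, 2j, 1 + 1j, -1 + 0.5j]    # all in the closed upper half-plane
p = np.poly(roots)                      # coefficients of P(z)
crit = np.roots(np.polyder(p))          # roots of P'(z)
assert all(z.imag >= -1e-9 for z in crit)
```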

Universal Turing Machine and Its Simulation

The theorem we will show is  Theorem :  There exists a Turing machine $\mathcal{U}$ such that for every $x,\alpha\in \{0,1\}^*$, $\mathcal{U}(x,\alpha)=M_{\alpha}(x)$, where $M_{\alpha}$ denotes the Turing machine represented by $\alpha$. Moreover, if $M_{\alpha}$ halts on input $x$ within $T$ steps then $\mathcal{U}(x,\alpha)$ halts within $cT\log T$ steps, where $c$ is a constant independent of $|x|$ and depending only on $M_{\alpha}$'s alphabet size, number of tapes and number of states.  For this, we need some equivalences between special Turing machine models. Theorem 1:  For every $f:\{0,1\}^*\to \{0,1\}$ and time-constructible $T:\mathbb{N}\to \mathbb{N}$, if $f$ is computable in time $T(n)$ by a Turing machine $M$ using alphabet $\Gamma$, then it is computable in time $4\log|\Gamma|\,T(n)$ by a Turing machine $\hat{M}$ using the alphabet $\{\triangleright, \square,0,1\}$. Theorem 2:  Define a single-tape Turing machine to be a Turing Machine that has only one read
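The encoding behind Theorem 1 can be sketched concretely (a minimal sketch, with an invented toy alphabet): each symbol of $\Gamma$ becomes a block of $\lceil\log_2|\Gamma|\rceil$ bits, which is why one step of $M$ costs $O(\log|\Gamma|)$ steps of $\hat{M}$ (read a block, decode it, write a block back).

```python
import math

# Encode each symbol of the alphabet `gamma` as a fixed-width binary block
# of k = ceil(log2 |gamma|) bits, as in the alphabet-reduction theorem.
def encode_tape(tape, gamma):
    k = max(1, math.ceil(math.log2(len(gamma))))   # bits per symbol
    index = {s: i for i, s in enumerate(gamma)}
    return "".join(format(index[s], f"0{k}b") for s in tape), k

def decode_tape(bits, gamma, k):
    return [gamma[int(bits[i:i + k], 2)] for i in range(0, len(bits), k)]

gamma = [">", "_", "0", "1", "a", "b"]   # a toy 6-symbol alphabet
tape = [">", "a", "b", "0", "1", "_"]
bits, k = encode_tape(tape, gamma)
assert k == 3 and len(bits) == 3 * len(tape)
assert decode_tape(bits, gamma, k) == tape       # round-trip is lossless
```

The constant 4 in the theorem accounts for the extra passes $\hat{M}$ makes over each block; the sketch only shows the encoding itself.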

A polynomial is reducible in R[x] implies it is reducible in (R/I)[x]

Theorem:  Let \(I\) be a proper ideal in the integral domain \(R\) and let \(p(x)\) be a non-constant polynomial in \(R[x]\). If \(p(x)\) is reducible in \(R[x]\), then the image of \(p(x)\) in \((R/I)[x]\) is also reducible. We will instead show the contrapositive. Contrapositive Statement of the Theorem:  Let \(I\) be a proper ideal in the integral domain \(R\) and let \(p(x)\) be a non-constant polynomial in \(R[x]\). If the image of \(p(x)\) in \((R/I)[x]\) cannot be factored into two polynomials of smaller degree, then \(p(x)\) is irreducible in \(R[x]\). For that, we first need to prove this theorem. Theorem:  Let $I$ be an ideal of the ring $R$ and let $(I)=I[x]$ denote the ideal of $R[x]$ generated by $I$ (the set of polynomials with coefficients in $I$). Then $$R[x] /(I) \cong (R / I)[x]$$ Proof:  There is a natural map $\varphi: R[x] \to (R / I)[x]$ given by reducing each of the coefficients of a polynomial modulo $I$. The definition of addition and multiplication in these t
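The map $\varphi$ is easy to see in the concrete case $R=\mathbb{Z}$, $I=n\mathbb{Z}$ (a minimal sketch; polynomials are coefficient lists, lowest degree first): reducing coefficients mod $n$ respects multiplication, which is the homomorphism property the proof relies on.

```python
# phi: Z[x] -> (Z/nZ)[x], reducing each coefficient mod n.
def reduce_mod(poly, n):
    return [c % n for c in poly]

def poly_mul(p, q, n=None):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return reduce_mod(out, n) if n else out

p = [2, 0, 1]     # x^2 + 2
q = [-1, 1]       # x - 1
# phi is a ring homomorphism: phi(p*q) = phi(p)*phi(q) in (Z/3Z)[x]
assert reduce_mod(poly_mul(p, q), 3) == poly_mul(reduce_mod(p, 3), reduce_mod(q, 3), 3)
```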

Cook - Levin Theorem

Around 1971, Cook and Levin independently discovered the notion of \(NP\)-completeness and gave examples of combinatorial \(NP\)-complete problems whose definition seems to have nothing to do with Turing machines. Soon after, Karp showed that \(NP\)-completeness occurs widely and that many problems of practical interest are \(NP\)-complete. To date, thousands of computational problems in a variety of disciplines have been shown to be \(NP\)-complete. \(SAT\) is the language of all satisfiable \(CNF\) formulae. Cook-Levin Theorem : \(SAT\) is \(NP\)-complete. Before we go into the proof, we first need to define some things. Configuration of a Turing Machine: A configuration of a Turing machine at any point of time \(t\) is basically a snapshot of the Turing machine at that time. It is an ordered triple of the head position, the current state, and the tape contents. Any configuration begins and ends with a special symbol; let \(\#\) mark its ending. Then the contents of the tape at the left of
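As a toy illustration of a configuration snapshot (the string layout and names here are illustrative choices, not the exact encoding in the proof): write out the tape delimited by \(\#\), with the current state inserted just before the head position.

```python
# A configuration as a string: '#' delimiters, tape contents, and the
# current state written immediately before the cell the head scans.
def configuration(tape, state, head):
    cells = list(tape)
    return "#" + "".join(cells[:head]) + "(" + state + ")" + "".join(cells[head:]) + "#"

# initial configuration of a machine in state q0 reading input 0110
assert configuration("0110", "q0", 0) == "#(q0)0110#"
# after the head moves one cell right and the machine enters q1
assert configuration("0110", "q1", 1) == "#0(q1)110#"
```

Encoding configurations as strings is what lets the proof talk about one step of the machine as a local, checkable change between consecutive rows of a tableau.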

Cauchy Riemann Equation

Let \(f:U\to \mathbb{C}\) be a differentiable function, where \(U\) is open in \(\mathbb{C}\). Suppose \(f'(z_0)\) exists, where \(z_0=a+ib\in U\subset \mathbb{C}\). Write \(f(z)=u+iv\), where \(u:U\to \mathbb{R}\) and \(v:U\to \mathbb{R}\). First take \(h=t\in \mathbb{R}\). $$f'(z_0)  = \lim_{t\to 0}\frac{f(a+t+ib)-f(a+ib)}{t}$$ Breaking it down we get $$ \lim_{t\to 0} \frac{u(a+t,b)-u(a,b)}{t}+i\lim_{t\to 0}\frac{v(a+t,b)-v(a,b)}{t}= \left.\frac{\partial u}{\partial x}\right|_{z_0}+i\left. \frac{\partial v}{\partial x}\right|_{z_0}$$ Now take \(h=it\), \(t\in \mathbb{R}\): $$f'(z_0) = \lim_{t\to 0}\frac{f(a+ib+it)-f(a+ib)}{it}$$ Breaking it down we get $$ \lim_{t\to 0} \frac{u(a,b+t)-u(a,b)}{it}+i\lim_{t\to 0}\frac{v(a,b+t)-v(a,b)}{it}= \left. \frac{\partial v}{\partial y}\right|_{z_0} -i\left. \frac{\partial u}{\partial y}\right|_{z_0}$$ Equating the two expressions for \(f'(z_0)\) we get $$\boxed{\left.\frac{\partial u}{\parti
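The equations are easy to verify numerically for a concrete function (a spot check, not the derivation): for \(f(z)=z^2\) we have \(u(x,y)=x^2-y^2\) and \(v(x,y)=2xy\), and central finite differences confirm \(u_x=v_y\) and \(u_y=-v_x\).

```python
# Numeric check of the Cauchy-Riemann equations for f(z) = z^2.
def u(x, y): return x * x - y * y     # real part of z^2
def v(x, y): return 2 * x * y         # imaginary part of z^2

def partial(g, x, y, axis, h=1e-6):
    """Central finite-difference approximation of a partial derivative."""
    if axis == "x":
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 1.3, -0.7
assert abs(partial(u, x0, y0, "x") - partial(v, x0, y0, "y")) < 1e-6   # u_x = v_y
assert abs(partial(u, x0, y0, "y") + partial(v, x0, y0, "x")) < 1e-6   # u_y = -v_x
```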

Function Automata: A Technique to Solve Finite State Automata Problems

Here I will describe a strategy or technique to solve some finite state automata problems. This strategy was first taught to us by my Theory of Computation professor. The name 'Function Automata' is mine, since we use function-like structures in this.     Suppose you are given a language that is a regular set \(L\). So there exists a \(DFA\) for \(L\), \(L_D=(Q,\Sigma,\delta, q_1, F)\), where \(Q\) is the set of states, \(\Sigma\) is the alphabet, \(\delta\) is the transition function, \(q_1\) is the starting state and \(F\) is the set of final states. Now let \(Q=\{q_1,q_2,\dots,q_n\}\) where \(n\in\mathbb{N}\). Now we will do a kind of subset construction, but every new state will be a tuple of \(n\) states of \(Q\), which may repeat. So $$(\underbrace{q_1,q_1,\dots,q_1}_{n\text{ times}})$$ is a state of the new automaton. Now, what is the meaning of these new states? Let \(f=(q_{k_1},q_{k_2},\dots,q_{k_n})\) be a new state where the \(q_{k_i}\)'s are not necessarily diff
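The tuple states can be computed directly (a minimal sketch; the two-state parity DFA below is an invented example, not from the post): a word \(w\) is identified with the function \(q \mapsto \delta^*(q, w)\), i.e. the tuple recording where \(w\) sends *every* state.

```python
# Example DFA over {0,1} accepting strings with an even number of 1s.
delta = {
    ("e", "0"): "e", ("e", "1"): "o",   # e = even number of 1s so far
    ("o", "0"): "o", ("o", "1"): "e",   # o = odd number of 1s so far
}
states = ["e", "o"]

def tuple_state(word):
    """The 'function automaton' state of `word`: where it sends each state."""
    current = list(states)                      # start from (q_1, ..., q_n)
    for ch in word:
        current = [delta[(q, ch)] for q in current]
    return tuple(current)

assert tuple_state("") == ("e", "o")     # the empty word fixes every state
assert tuple_state("1") == ("o", "e")    # a single '1' swaps the two states
assert tuple_state("11") == ("e", "o")   # two '1's act as the identity map
```

Seen this way, reading a word composes functions on \(Q\), which is what makes the technique useful for problems about halves, reversals, and other word surgeries.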

Equicontinuity on a Compact Metric Set implies Uniform Equicontinuity

Equicontinuity : A collection \(\mathcal{F}\) of real-valued functions on a metric space \(X\) is equicontinuous at the point \(x\in X\) provided for every \(\epsilon>0,\ \exists\) \(\delta>0\) such that \(\forall\ f\in\mathcal{F}\) and \(y\in X\), $$d(x,y)<\delta \implies |f(x)-f(y)|<\epsilon$$ The collection \(\mathcal{F}\) is said to be equicontinuous on \(X\) provided it is equicontinuous at every point of \(X\). Uniform Equicontinuity :  An equicontinuous collection \(\mathcal{F}\) of real-valued functions on a metric space \(X\) is uniformly equicontinuous if for every \(\epsilon>0,\ \exists\) \(\delta>0\) such that \(\forall\ f\in\mathcal{F}\) and \(x,y\in X\), $$d(x,y)<\delta \implies |f(x)-f(y)|<\epsilon$$ Just as a continuous function on a compact set is uniformly continuous, here we show that for a collection of functions the same \(\epsilon-\delta\) pair works for every function and every point. Now coming to the proof: since \(\mathcal{F}\) is equicontinuous, for ever
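A quick sanity example (mine, not from the post): the family \(\mathcal{F}=\{f_t : t\in[0,1]\}\) with \(f_t(x)=tx\) on \(\mathbb{R}\) is uniformly equicontinuous, because a single \(\delta\) works for every member of the family and every pair of points.

```latex
% For f_t(x) = t x with t in [0,1], the choice delta = epsilon is uniform:
\[
  |f_t(x) - f_t(y)| = t\,|x - y| \le |x - y| < \delta = \epsilon .
\]
```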

Cesaro Summability implies Abel Summability

 Let \((a_n)\) be a sequence of real numbers and let \(\sum\limits_{n=0}^{\infty}a_n\) be the corresponding series. Let $$s_n=\sum_{k=0}^{n}a_k \quad\text{and}\quad \sigma_n=\sum_{k=0}^{n}s_k$$ Then the series is Cesaro Summable  with Cesaro Sum \(s\in \mathbb{R}\) if $$\lim_{n\to\infty}\frac{\sigma_n}{n+1}=\lim_{n\to\infty}\frac{1}{n+1}\sum_{k=0}^n s_k=s$$ Let $$f(x)=\sum\limits_{n=0}^{\infty}a_nx^n$$ be the associated power series. Then the series is Abel Summable  with Abel sum \(s\) if \(f(x)\) converges for \(|x|<1\) and \(\lim_{x\to 1^-}f(x)=s\).  You can see that if \(s_n\to s\) as \(n\to \infty\) then \(\frac{\sigma_n}{n+1}\to s\) as \(n\to \infty\), i.e. convergence of the series implies Cesaro Summability. We will prove that Abel Summability is much stronger than Cesaro Summability, i.e. if a series is Cesaro Summable then it is Abel Summable.  So assume \(\sum a_n\) is Cesaro summable. Hence $$\lim_{n\to\infty}\frac{\sigma_n}{n+1}=\lim_{n\to\infty}\frac{1}{n+1}\sum_{k=0}^n s_k=s$$ Hence the sequence \(\left( \frac{\sigma_n}{n+1}\right)\) converges. Therefore $$f(x)=\sum_
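The classic example makes the definitions concrete (a numeric illustration, not part of the proof): \(\sum (-1)^n\) diverges, its partial sums oscillate between \(1\) and \(0\), its Cesaro means tend to \(\tfrac12\), and its Abel means \(f(x)=\sum(-1)^n x^n=\tfrac{1}{1+x}\) also tend to \(\tfrac12\) as \(x\to 1^-\).

```python
# Cesaro and Abel means of the divergent series sum (-1)^n.
N = 100000
a = [(-1) ** n for n in range(N)]

s, total = [], 0
for term in a:
    total += term
    s.append(total)                   # partial sums s_n: 1, 0, 1, 0, ...

cesaro = sum(s) / N                   # sigma_n / (n+1), here exactly 1/2
abel = sum(term * 0.999 ** n for n, term in enumerate(a))  # f(0.999)

assert abs(cesaro - 0.5) < 1e-9
assert abs(abel - 1 / 1.999) < 1e-6   # close to f(0.999) = 1/(1+0.999)
```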