

Theory of Computation

Deterministic Finite Automata

Having a simple construction and an easy-to-grasp definition, deterministic finite automata (DFA) are among the simplest models of computation. When interconnected, discrete finite state automata can exhibit interesting dynamical behavior. These characteristics make the DFA an ideal unit with which to build a system suitable for studying the connection between dynamical behavior and computational ability.

Informal Description

A deterministic finite automaton can be viewed as a simple machine consisting of three components: sensor, memory and instruction book.

The automaton sensor is the device by which data is input. For example, the automaton could have a thermometer reading temperature, a magnet reading a hard drive, an antenna reading radio waves, or anything else that would import information. We restrict the accuracy of the imported information by allowing the automaton to recognize only a finite number of different input values or symbols. For example, if a discrete finite state automaton were taking values from a thermometer as input, we could restrict it to recognizing temperatures between $ 0^{\circ } $ and $ 100^{\circ } $ in steps of $ 0.1^{\circ } $.

The automaton memory has the limited capacity to store only a single symbol. The symbols that can be stored need not be of the same type as those returned by the automaton sensor. We refer to this stored symbol as the automaton's state and allow the automaton to exist in only a finite number of different states.

The automaton instruction book is the set of rules by which the automaton changes its state. The rulebook dictates what state the automaton should be in based upon the automaton's current state and input symbol. The automaton consults this rulebook every time an input symbol is read.

An automaton begins operation in a predefined state known as its initial state (or start state). The automaton then enters the following loop:



read input symbol $ \longrightarrow $ consult instruction book $ \longrightarrow $ transition to new state



Because operation continues until the input source has been exhausted, it is often practical to place a finite limit on the amount of information that may be made available to an automaton for input.

That an automaton may operate on a finite set of input data and end in a discernible final state motivates us to view the automaton as a mapping from its input data to its final state. This view is further enriched by naming some of the automaton's possible states accepting states. This allows us to say that an automaton either accepts or rejects its input data based upon its final state.

Corollary 1   Let $ A $ be an alphabet and $ s^{n}\in A^{n} $ an $ n $-letter string over $ A $. Then a DFA is a binary-valued function $ M $ taking a finite string of input symbols to $ \{0,1\} $:

$\displaystyle M:A^{n}\longrightarrow \{0,1\}$

Viewing automata as binary-valued mappings highlights their canonical use: to determine whether or not a string of symbols meets some particular criterion. The set of strings that a finite automaton accepts is called the automaton's language1. This language is limited by the automaton's single-symbol memory.

Formal Definition

We use J. C. Martin's notation [9] as the basis for the following definition.

Definition 1   A finite automaton is a 5-tuple $ (Q,\Sigma ,q_{0},\delta ,A) $ where

$ Q $ is a finite set of states.
$ \Sigma $ is a finite set of input symbols.
$ q_{0}\in Q $ is the initial state.
$ A\subseteq Q $ is the set of accepting states.
$ \delta :Q\times \Sigma \rightarrow Q $ is the state transition function.
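To make the definition concrete, the following is a minimal sketch of a DFA in Python (an illustrative choice of language; the function name accepts and the example machine, which accepts binary strings containing an even number of 1s, are our own and not part of the formal definition):

# A DFA as the 5-tuple (Q, Sigma, q0, delta, A), written as plain Python data.
# The example machine accepts binary strings containing an even number of 1s.

def accepts(Q, Sigma, q0, delta, A, string):
    """Run the DFA on `string` and report whether it ends in an accepting state."""
    state = q0
    for symbol in string:
        if symbol not in Sigma:
            raise ValueError("symbol not in the input alphabet: " + repr(symbol))
        state = delta[(state, symbol)]   # consult the "instruction book"
    return state in A                    # accept iff the final state is accepting

Q = {"even", "odd"}
Sigma = {"0", "1"}
q0 = "even"
A = {"even"}
delta = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(accepts(Q, Sigma, q0, delta, A, "1101"))  # False (three 1s)
print(accepts(Q, Sigma, q0, delta, A, "1001"))  # True  (two 1s)

Representing $ \delta $ as a dictionary keyed on (state, symbol) pairs mirrors the formal definition $ \delta :Q\times \Sigma \rightarrow Q $ directly.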

Counting Transitions

The transition function acts as the automaton's behavioral rulebook. Based upon the automaton's current state and input symbol, the transition function dictates what state the automaton should next be in. Thus, a useful question to ask when studying the behavior of any finite automaton (FA) is: how many unique transition functions can be defined for a particular FA?

Above, we defined the transition function for an FA as $ \delta :Q\times \Sigma \rightarrow Q $. $ Q $ is the set of states the FA can be in. $ \Sigma $ is the set of symbols the FA can accept as input. By dissecting this definition, we shall count how many different ways there are to define the function $ \delta $.

The domain of the function says that our input is in $ Q\times \Sigma $. Since $ Q $ and $ \Sigma $ are finite, the number of possible inputs is $ \left\vert Q\times \Sigma \right\vert =\left\vert Q\right\vert \cdot \left\vert \Sigma \right\vert $. To count the number of unique ways $ \delta $ can be defined, we need to count the number of different ways we can specify what output any particular input could have. The range of $ \delta $ tells us that any input can have $ \left\vert Q\right\vert $ possible outputs. In other words, for some $ q_{i}\in Q $ and $ \sigma _{j}\in \Sigma $:

$\displaystyle \delta (q_{i},\sigma _{j})=\left\{ \begin{array}{c} q_{1}\\ q_{2}\\ \vdots \\ q_{\left\vert Q\right\vert }\end{array}\right. $

Thus, we have a choice of $ \left\vert Q\right\vert $ different transitions that can occur for each of the $ \left\vert Q\right\vert \cdot \left\vert \Sigma \right\vert $ inputs, or, $ \left\vert Q\right\vert ^{\left\vert Q\right\vert \left\vert \Sigma \right\vert } $ possible transition functions. We record this result as

Proposition 1   For any given FA, $ M=(Q,\Sigma ,q_{0},\delta ,A) $, the number of uniquely definable transition functions is

$\displaystyle \tau =\left\vert output\right\vert ^{\left\vert input\right\vert }=\left\vert Q\right\vert ^{\left\vert Q\right\vert \left\vert \Sigma \right\vert }.$

This counting problem can be reformulated as: for a given alphabet, how many words of a particular length are there? In the previous case, our word length was $ \left\vert Q\right\vert \cdot \left\vert \Sigma \right\vert $ (the number of possible inputs) and our alphabet was the set of output symbols (having $ \left\vert Q\right\vert $ ``letters''). Thus, we would be choosing one of $ \left\vert Q\right\vert $ different letters for each of the $ \left\vert Q\right\vert \cdot \left\vert \Sigma \right\vert $ positions in our word. Therefore, there are $ \left\vert Q\right\vert ^{\left\vert Q\right\vert \left\vert \Sigma \right\vert } $ different words that can be written down.

Example   Using our Roman alphabet, we can make $ 26^{4} $ 4-letter words.

This analogy constructs a bijective mapping from the set of possible transition functions to a set of strings.

Proposition 2   For any given FA, $ M=(Q,\Sigma ,q_{0},\delta ,A) $, there exists a bijective mapping, $ \psi $, from the set of all possible transition functions $ \delta \in \Delta $ to the set of all strings of length $ \left\vert Q\right\vert \cdot \left\vert \Sigma \right\vert $ over the alphabet $ Q $.

$\displaystyle \psi :\Delta \longrightarrow Q^{\left\vert Q\right\vert \left\vert \Sigma \right\vert }$
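As a small illustration, the bijection $ \psi $ can be realized in Python by fixing an ordering on $ Q\times \Sigma $ and reading off the outputs of $ \delta $ in that order; the function name psi and the dictionary representation of $ \delta $ are our own illustrative choices.

# Serialize a transition function delta (a dict from (state, symbol) pairs to states)
# as a word of length |Q|*|Sigma| over the "alphabet" Q, by fixing an order on Q x Sigma.

def psi(Q, Sigma, delta):
    ordered_inputs = [(q, s) for q in sorted(Q) for s in sorted(Sigma)]
    return tuple(delta[pair] for pair in ordered_inputs)

Q = {"even", "odd"}
Sigma = {"0", "1"}
delta = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
print(psi(Q, Sigma, delta))   # ('even', 'odd', 'odd', 'even'): a length-4 word over Q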

Two Counting Examples

To get a better understanding of the size of $ \Delta $, we shall calculate the number of possible transition functions for a couple of example automata. The important point to remember is that $ \tau $ grows exponentially in $ \left\vert Q\right\vert $ and $ \left\vert \Sigma \right\vert $, which places a strong bound on the complexity of the automata one can consider for practical, computational exploration.

Example   A binary state binary reader.

For this automaton, $ Q=\{0,1\} $ and $ \Sigma =\{0,1\} $. Thus, the number of unique transition functions we could define for this FA would be $ \tau =2^{(2)(2)}=2^{4}=16 $.
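This count can be verified by brute force. The sketch below (an illustrative Python enumeration, using the bijection of Proposition 2) lists every transition function for the binary state binary reader as a length-4 word over $ Q $:

from itertools import product

# Enumerate every transition function for the binary state binary reader:
# one choice of next state for each of the |Q|*|Sigma| = 4 possible inputs.
Q = [0, 1]
Sigma = [0, 1]
inputs = [(q, s) for q in Q for s in Sigma]

transition_functions = [dict(zip(inputs, outputs))
                        for outputs in product(Q, repeat=len(inputs))]

print(len(transition_functions))                                      # 16
print(len(transition_functions) == len(Q) ** (len(Q) * len(Sigma)))   # True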

Example   A decimal state Roman alphabet reader.

For this automaton, $ Q=\{0,1,2,\ldots ,7,8,9\} $ and $ \Sigma =\{A,B,C,\ldots ,X,Y,Z\} $. Thus, the number of unique transition functions we could define for this FA would be $ \tau =10^{(10)(26)}=10^{260} $.

Turing Machines and Universal Computation

What follows is a brief description of Turing machines and universality. The goal of this section is to describe what is meant by the phrase ``universal computation''. We shall avoid an in-depth discussion of Turing machine construction and operation as well as most of the interesting theories that surround and grow out of them. For a proper treatment of this most fascinating material, J. C. Martin's book [9] is recommended.

Brief Overview

Like a finite automaton, a Turing machine is a simple computing device consisting of a finite set of states, a finite set of input symbols, an initial state, and a transition function. We choose at least one of the states to be special, calling it the halt state, and define our transition function such that there are no more possible transitions once the machine reaches the halt state. Unlike a finite automaton, a Turing machine's input data is given on a strip of tape that is unbounded in length, although only a finite portion of it is ever used. A tape head connected to the Turing machine is placed at the beginning of this input data and has the ability to move forwards and backwards along the tape. In addition, Turing machines have a finite set of tape symbols which they can both read from and write to the tape via the tape head.
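To make these components concrete, here is a minimal Turing machine simulator sketched in Python. The representation (a sparse dictionary for the tape, a transition table keyed on the current state and read symbol) and the example machine, which flips the bits of a binary string and then halts, are our own illustrative choices rather than a standard construction.

# Minimal Turing machine sketch: a transition table maps (state, read symbol)
# to (write symbol, head move, next state); the machine stops in the halt state.
BLANK = " "

def run(transitions, tape_input, start, halt):
    tape = dict(enumerate(tape_input))   # the unbounded tape as a sparse dict
    head, state = 0, start
    while state != halt:
        symbol = tape.get(head, BLANK)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, BLANK) for i in cells).strip()

# Example machine: flip every bit of the input, halting at the first blank cell.
flip = {
    ("q0", "0"):   ("1", "R", "q0"),
    ("q0", "1"):   ("0", "R", "q0"),
    ("q0", BLANK): (BLANK, "R", "halt"),
}
print(run(flip, "10110", "q0", "halt"))   # prints "01001"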

Dr. Alan Turing put forward the thesis that Turing machines are as powerful as the most powerful computing device possible. In other words, no matter how fast or complex our computing devices become, they will have no more computational power than a Turing machine.

A special form of Turing machine is the universal Turing machine. Universal Turing machines have the ability to simulate any other Turing machine when given a description and set of input data for that other Turing machine.

One view of the relationship between Turing machines and universal Turing machines is that a Turing machine is like a single computer program and a universal Turing machine is like an operating system. In this view, a Turing machine is a defined algorithm acting on input data. A word processor, taking text input and outputting a formatted document, is a good example of a Turing machine. On the other hand, a universal Turing machine has the ability to ``run'' any other Turing machine along with its associated input. This is just as our home computers have the ability to simulate any other computing device that has been encoded as software. A prime example is running software that simulates a pocket calculator or word processor.

When we say a computational device is universal, we typically mean that the device has the capability to simulate any other computational device. A common method for proving a device is universal is to show that it is capable of simulating another device already known to be universal. For example, if you were to show a system capable of simulating the components necessary and sufficient to build a computer (gates, timers, etc.) then you would have shown that system to be a universal computer.


Decidability and the Halting Problem

One very important result obtained by Dr. Turing was a proof that there exist completely deterministic processes for which it is impossible to decide whether or not they will complete. In terms of Turing machines, his proof implies that, given a Turing machine and some input data, it is impossible in general to decide whether or not that Turing machine will halt on that input. This result should not seem so surprising given the existence of problems like the Three-Body Problem, wherein you know all the details of the mathematical algorithms used to compute the system's evolution, yet are unable to give a general solution for how the system will evolve. One could say that there is a certain sensitivity to the system's initial configuration. This notion will be further explored in section [*].

In the course of the following Halting Problem proof and discussion there may be unfamiliar notation. We take the time now to describe what is meant:

Notation 1   If $ T $ is a Turing machine, then $ e(T) $ is that Turing machine encoded in a format readable by the Turing machine receiving it as input. This is how we pass the description of one Turing machine to another.

Notation 2   The symbol $ \Delta $ indicates a blank position on a Turing machine tape.

Notation 3   An underscore underneath another symbol, as in $ \underline{\Delta }10110 $, indicates the position of a Turing machine tape head. We use this symbol because it is customary for a Turing machine to end its processing with tape head positioned on the blank symbol before the beginning of its output data.

Notation 4   We say a Turing machine halts if, in the course of processing its input data, the Turing machine enters the halt state and has no further transitions.

We prove the undecidability of the Halting Problem by assuming there exists a machine capable of solving it and then showing that such a machine would generate a paradox. This proof is very similar in flavor to Cantor's diagonalization arguments.

Theorem   The Halting Problem is undecidable.

Proof. Assume the Halting Problem is decidable. Then we can construct a Turing machine, $ T_{H} $, such that given an input $ (e(T),x) $, where $ e(T) $ is the description of another Turing machine $ T $ and $ x $ is the input string for $ T $: running $ T_{H} $ on $ (e(T),x) $ produces output `` $ \underline{\Delta }1 $'' if $ T $ halts on $ x $, or `` $ \underline{\Delta }0 $'' if $ T $ does not halt on $ x $.

We construct a universal Turing machine, $ T_{U} $, which takes as input a string $ e(T) $ and subsequently runs $ T_{H} $ with input $ (e(T),e(T)) $. We define $ T_{U} $ such that if $ T_{H} $ outputs `` $ \underline{\Delta }1 $'' then $ T_{U} $ enters an infinite loop, and if $ T_{H} $ outputs `` $ \underline{\Delta }0 $'' then $ T_{U} $ transitions to the halt state.

What happens if we run $ T_{U} $ with input string $ e(T_{U}) $? This will run $ T_{H} $ on $ (e(T_{U}),e(T_{U})) $. There are two well-defined possibilities for what $ T_{U} $ will do: either it will enter an infinite loop or it will halt. We look at these outcomes:

$ T_{U} $ loops on $ e(T_{U}) $ $ \Rightarrow $ $ T_{H} $ output `` $ \underline{\Delta }1 $'' $ \Rightarrow $ $ T_{U} $ halts on $ e(T_{U}) $ $ \Rightarrow $ $ T_{H} $ output `` $ \underline{\Delta }0 $'' $ \Rightarrow $ $ T_{U} $ loops on $ e(T_{U}) $. A paradoxical cycle.

Since both cases end in a contradiction, our initial assumption that the Halting Problem is decidable (and hence that $ T_{H} $ exists) must be incorrect. The Halting Problem is undecidable. $ \qedsymbol$
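The self-referential construction in this proof can be mirrored in code. The sketch below assumes a hypothetical oracle halts(program, argument) playing the role of $ T_{H} $; no such function can actually be written, which is exactly the point, so it is left as a stub.

# Hypothetical sketch of the diagonal construction.  `halts` plays the role of T_H;
# it cannot be implemented and is stubbed out only to show the shape of the argument.

def halts(program, argument):
    """Supposed oracle: return True iff program(argument) eventually halts."""
    raise NotImplementedError("no such oracle can exist")

def t_u(program):
    """Plays the role of T_U: do the opposite of whatever the oracle predicts."""
    if halts(program, program):
        while True:          # loop forever if the oracle says we halt
            pass
    return                   # halt if the oracle says we loop

# Feeding t_u its own description reproduces the paradox above:
# halts(t_u, t_u) can be neither True nor False without contradiction.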

In this next example, we show that if the Halting Problem were solvable, then we could give a simple solution to the unsolved Goldbach conjecture. The Goldbach conjecture states that any given even number greater than 2 can be written as the sum of two primes. For example, $ 4=2+2 $, $ 6=3+3 $, $ 8=5+3 $, $ 42=19+23 $, $ 140=37+103 $.

Corollary   If the Halting Problem were decidable then we could construct a machine to solve the Goldbach conjecture.

Proof. Assume the Halting Problem is decidable, so that we can construct the Turing machine $ T_{H} $ described in the previous proof: given an input $ (e(T),x) $, running $ T_{H} $ produces output `` $ \underline{\Delta }1 $'' if $ T $ halts on $ x $, or `` $ \underline{\Delta }0 $'' if $ T $ does not halt on $ x $.

We construct a Turing machine, $ T_{G} $, which determines whether or not an input even number greater than 2 can be written as the sum of two primes. This is a simple computable process which can be described as follows:

  1. Copy the input number to a new place, call it $ x $.
  2. Create a second number $ y=0 $.
  3. In a loop do the following:

    1. Subtract one from $ x $ and add one to $ y $.
    2. If $ x $ is less than half of the input number then we have exhausted all possible prime number candidates and the input number fails the Goldbach conjecture: output `` $ \underline{\Delta }0 $'' and halt.
    3. If $ x $ and $ y $ are both prime then the input number is the sum $ x+y $ and it passes the Goldbach conjecture: output `` $ \underline{\Delta }1 $'' and halt.
The above process could easily be implemented in a variety of programming languages on a home computer, so we need not worry whether such a Turing machine could exist.
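As a sanity check, here is one way the above process might be sketched in Python, following the three steps literally; the helper is_prime and the function name t_g are our own illustrative choices.

# Sketch of T_G: decide whether an even number n > 2 is the sum of two primes,
# following the loop described above.  is_prime is a simple trial-division helper.

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def t_g(n):
    """Return 1 if n passes the Goldbach check, 0 if it fails."""
    x, y = n, 0
    while True:
        x, y = x - 1, y + 1                 # step 3.1
        if x < n / 2:                       # step 3.2: candidates exhausted
            return 0
        if is_prime(x) and is_prime(y):     # step 3.3: n = x + y with x, y prime
            return 1

print(t_g(4), t_g(6), t_g(140))   # prints: 1 1 1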

We construct another Turing machine, $ T_{test} $, which runs $ T_{G} $. $ T_{test} $ first gives $ T_{G} $ the number 4. $ T_{test} $ then enters the following loop:

  1. If $ T_{G} $ should output `` $ \underline{\Delta }0 $'' then halt.
  2. Run $ T_{G} $ with the next higher even number.
Notice that if $ T_{G} $ never finds an even number that fails the Goldbach conjecture, then $ T_{test} $ will run forever. A prime target for the Halting Problem.
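Reusing the t_g sketch above, $ T_{test} $ becomes an unbounded search; handing a description of this loop to $ T_{H} $ is what settles the conjecture.

# Sketch of T_test: run t_g on 4, 6, 8, ... and halt only if a counterexample appears.
# (Assumes the t_g function from the previous sketch.)

def t_test():
    n = 4
    while True:
        if t_g(n) == 0:      # a counterexample to the Goldbach conjecture
            return           # halt
        n += 2               # otherwise try the next even number

# t_test() halts if and only if the Goldbach conjecture is false, so an oracle
# deciding whether t_test halts would decide the conjecture.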

Run $ T_{H} $ on $ T_{test} $ (that is, on the input $ (e(T_{test}),\Delta ) $, since $ T_{test} $ needs no input of its own). If $ T_{H} $ produces `` $ \underline{\Delta }1 $'' then we know that $ T_{test} $ halts, so the Goldbach conjecture fails for some even number; the conjecture must be false. If $ T_{H} $ produces `` $ \underline{\Delta }0 $'' then we know that $ T_{test} $ runs forever; the Goldbach conjecture must be true.

Since, under our assumption, all of the machines used exist, we can build the above composite machine and thereby settle the Goldbach conjecture. $ \qedsymbol$

Arguments of this sort give an intuitive sense of why the Halting Problem is undecidable: a machine that decided halting would settle long-standing open problems such as the Goldbach conjecture outright.



Footnotes

1. The language recognized by a finite state automaton is known as a regular language. Regular languages form the simplest class of languages in Chomsky's language hierarchy. For more information see J. C. Martin's book [9].
