[time 40] The Computational Complexity of LSs

Stephen P. King (stephenk1@home.com)
Sat, 20 Mar 1999 13:47:40 -0500

Dear Hitoshi and Friends,

Hitoshi Kitada wrote:
> Dear Stephen,
> A Turing machine is a local system that evolves along its local time producing
> a tape. Each square cell of a tape you write is a state of the LS at each
> corresponding time. The total history of the LS is the infinite length tape,
> and hence cannot be read by any observer due to the "infinity" of the length
> (note that it is said that Turing machine has a finite length tape, i.e. the
> effective number of holes on the tape is finite. But there is no finite method
> of judging at which point of the tape the holes end). This supports your
> statement
> >"a infinite length tape divided into square cells..." can
> >neither be read or written to by a "head which can be in in a finite
> >number of internal states"
> without any additional condition. I.e. "iff part" in your post is unnecessary.

        Interesting! This takes us into the arena of quantum computational
complexity! I have a very good paper that Peter references in one of his
papers: "Quantum Complexity Theory" by Ethan Bernstein & Umesh Vazirani.

(Please indulge my lengthy quotation; there is a subtle point to be
made! :)) [pp. 11-13, ibid.]

        It mentions: "This model may be thought of as a quantum physical
analogue of a probabilistic TM - it has an infinite tape and a finite
state control, and the actions of the machine are local and completely
specified by this finite state control. In full generality, on any given
input a quantum TM produces a random sample from a probability
distribution."

        How the 'size' of the sample is defined seems to be given by: "In full
generality, on any given input a quantum TM produces a random sample
from a probability distribution. We say that quantum TM T' simulates T
with accuracy e if, on every input x, T' outputs a sample from a
distribution which is within total variation distance e of the
corresponding distribution for T. We prove that there is a universal
quantum TM which takes as input the description of a quantum TM T, a
time t, and an input x, and simulates T(x) for time t with accuracy e. The
slowdown is polynomial in t and 1/e."
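        The accuracy measure "e" here is just the total variation distance
between two output distributions. A quick Python sketch (mine, not
Bernstein & Vazirani's), with distributions as plain dicts from outcomes
to probabilities:

```python
def total_variation(p, q):
    """Half the L1 distance between two probability distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# T' simulates T with accuracy e iff, on every input,
# total_variation(dist_T, dist_Tprime) <= e. Toy numbers:
exact = {"0": 0.5, "1": 0.5}      # T's output distribution
approx = {"0": 0.52, "1": 0.48}   # a simulator's slightly-off distribution
# total_variation(exact, approx) -> 0.02
```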


"...[it was shown] that there is an oracle relative to which there is a
problem that can be solved in polynomial time on a quantum TM, whereas
any classical probabilistic TM, which is restricted to *always* give the
correct answer, requires exponential time. However, the crucial
assumption in all the above results is that the classical TM is required
to be error-free. For all the above problems, quantum TMs exhibit *no*
advantage in running time over classical probabilistic TMs that are
allowed a small probability of outputting the wrong answer on any input.
It is well known that the error probability of such a probabilistic TM
can be reduced to a negligible value (for example, much smaller than
the chance of hardware failure) at a very modest increase of running
time."
"We prove that there is a oracle relative to which there is a language
that can be accepted in polynomial time by a quantum TM but cannot be
accepted in n^0(log n) time by a bounded-error classical probabilistic

        [I am very wary of these "oracles"; they tend to assume a priori
information that is not constructible in finite space/time, much like
classical physics' use of Cauchy hypersurfaces to fix ab initio the
initial data.]
"Before we can formally define a QTM, we must introduce some new
terminology... The new terminology is necessary because quantum
mechanical systems cannot be observed without without being altered. Let
us consider how can describe the computation of a PTM if we are not
allowed to observe it while it is working. Therefore, we imagine that
the PTM, T, is placed in a black box. The rules of the game are that we
know the finite state control of T, as well as its initial
configuration, and the interval of time between successive steps. What
is we must not observe are the outcomes of any coin-flips that T makes.
If we mentally keep track of T's configuration, then we would know when
it flipped its first coin, but we would not know which of the two
possible successive configurations it entered. Instead we will say that
the machine is in a linear superposition of the two configurations, and
assuming that the coin is fair, the coefficient for each configuration
is 1/2."
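        This black-box bookkeeping is easy to sketch in code (my sketch, not
the authors'): a dict carries the coefficient of each configuration, with
"H"/"T" histories standing in for the machine's actual configurations.

```python
def flip(superposition):
    """One fair coin flip: each configuration branches into two
    successors, each inheriting half of its parent's coefficient."""
    new = {}
    for config, amp in superposition.items():
        for outcome in ("H", "T"):
            succ = config + outcome  # successor configuration
            new[succ] = new.get(succ, 0.0) + 0.5 * amp
    return new

state = {"": 1.0}    # the known initial configuration, coefficient 1
for _ in range(3):   # three unobserved coin flips
    state = flip(state)
# eight equally weighted configurations, coefficients summing to one
```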

[this constructs a binary Farey tree of possible configurations...?!]

"We shall refer to the coefficient of a configuration as its
*amplitude*. As the machine continues to compute and flip coins, it
will, in general, be in some linear superposition of its configurations.
        Given the linear superposition at one time step, how do we compute the
amplitude of a configuration in the superposition at the next time step?
First we apply the finite state control's rules separately to each
configuration in the superposition, and then we take the sum of these
contributions weighted according to the superposition's amplitudes.
Notice that this means the machine's finite state control is really
specifying a linear mapping in the space of superpositions of
configurations. This will remain true even if we give up the assumption
that the machine must toss a fair two-sided coin, and instead allow the
machine several arbitrary many-sided coins [dice! :)]. Suppose we think
of the transition function of a randomized TM on state set Q and an
alphabet \Sigma as

\delta : Q x \Sigma x \Sigma x Q x {L, R} -> [0, 1]

where \delta(p, \sigma, \tau, q, d) gives the probability that if the machine
is in state p reading a \sigma it will write a \tau, enter state q, and
move in the direction d. Then the machine's transition function is a
linear map, given by the matrix where the entry in the
configuration c_1 row and the configuration c_2 column is \delta evaluated at
the tuple which transforms c_2 into c_1 in a single step, or 0 if no
such tuple exists. Since the number of possible configurations is
unbounded, this matrix has infinite dimension. Nevertheless, it's
finitely specified and satisfies a locality constraint. We shall refer
to this matrix as the *time evolution operator* of the PTM. Depending on
the particular machine, as the machine runs, its superposition may
easily include an exponential number of distinct configurations. But,
regardless of the machine, the linear superpositions will always
contain only non-negative amplitudes and the amplitudes will always sum
to one.
Now, let's think about what happens when we open the box and observe the
machine. We will find the machine in a specific configuration - not a
linear superposition. The act of opening the box and observing the
machine forces it to choose a configuration out of the linear
superposition - the rule is that each configuration is chosen to be
observed with probability equal to its amplitude in the linear
superposition. Next, consider the case where we are only interested in
one particular bit of the configuration, and we take a quick peek inside
the box to examine just that bit. Then we will see a one with
probability equal to the sum of the amplitudes of configurations with a
one in that position, and a zero with probability equal to the sum for
configurations with a zero."

[I must say that the act of 'locating' the coordinate of that "one bit"
must be taken into account. The Maxwell Daemon stuff is relevant.]
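        The "time evolution operator" itself is easy to exhibit on a toy
machine (the three configurations and probabilities below are entirely my
invention, not the paper's): a stochastic matrix whose columns each sum to
one, so that any distribution over configurations stays a distribution.

```python
from fractions import Fraction

# M[c_target][c_source] is the probability of moving from c_source to
# c_target in one step - the matrix entry described in the quotation.
M = {
    "c1": {"c1": Fraction(0), "c2": Fraction(1, 2), "c3": Fraction(0)},
    "c2": {"c1": Fraction(1), "c2": Fraction(1, 2), "c3": Fraction(0)},
    "c3": {"c1": Fraction(0), "c2": Fraction(0),    "c3": Fraction(1)},
}

def step(dist):
    """Apply the time evolution operator to a superposition
    (a distribution over configurations)."""
    return {t: sum(row[s] * dist.get(s, Fraction(0)) for s in row)
            for t, row in M.items()}

dist = {"c1": Fraction(1)}  # start with certainty in configuration c1
dist = step(dist)           # all mass moves to c2
dist = step(dist)           # mass splits 1/2 to c1, 1/2 stays on c2
```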

"We will then think of the machine as being in a linear superposition
corresponding to the distribution of configurations corresponding to the
actual bit we saw. This superposition could be computed by first erasing
the amplitudes of all configurations inconsistent with the observed
value, and then scaling up the remaining amplitudes to maintain unit
sum. [artificially preserving unitarity!] Therefore, we think of the act
of observing a bit as operating on the machine to effect this
restriction and renormalization.
Finally, note that although the superposition (distribution) of the
machine after t steps may take exponential space to write, we can sample
from it online in the obvious way. Each time our simulation's
configuration splits, we can choose a single next configuration randomly
according to the split's weights. This on-line simulation may
alternatively be achieved by observing the PTM after each step. It is
easy to see that these multiple observations leave the final output
distribution undisturbed."
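        The restriction-and-renormalization step reads, in code (again my
sketch, with configurations as plain bit-strings - a toy assumption):

```python
from fractions import Fraction

def observe_bit(superposition, position, value):
    """Erase the amplitudes of configurations inconsistent with the
    observed bit, then rescale the survivors to unit sum."""
    kept = {c: p for c, p in superposition.items() if c[position] == value}
    total = sum(kept.values())
    return {c: p / total for c, p in kept.items()}

sup = {"00": Fraction(1, 4), "01": Fraction(1, 4),
       "10": Fraction(1, 4), "11": Fraction(1, 4)}
after = observe_bit(sup, 0, "1")  # we peeked and saw a 1 in position 0
# -> mass 1/2 each on "10" and "11"
```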

        I must comment on this last statement! From what I have read of the
Quantum Zeno effect, the machine's evolution would be 'frozen' by these
on-line observations. How the authors propose to get around this I don't
know. :(
cf. etc.

        We also have Karl Svozil's paper
(http://tph.tuwien.ac.at/~svozil/publ/kraft.ps) discussing the Kraft
inequality, which "restricts the number of legal [consistent?] programs due to
the condition that no legal program can be the prefix of another
legal program. The Kraft inequality then states that no more than
maximally q=n^k programs can be coded by a successive array of k
identical beam splitters with n slots, corresponding to l_1 = l_2 = ...
= l_q." and "the price of speedups of computations originating in
quantum parallelism is a corresponding attenuation of the computational
probability. In order to compensate for an exponential
decrease of execution probability, one would have to *exponentially
increase* the number of (bosonic) quanta in the beam paths. This,
however, is equivalent to the trivial solution of an arbitrarily complex
problem by the introduction of an arbitrary number of classical parallel
processors."
        Another method to attack this question is the subject of Peter Wegner's
work on "Interaction Machines." And, I must mention that he conjectures
that quantum nondeterminism (such as that discussed by Hitoshi with
respect to the EPR problem) would be well modeled by a 'secondary' on-line
observer interacting with the 'same' LS via a different interface. Peter
and I have discussed this at length and hope that some progress can be
made in the future.


        I have more ideas about this subject but I'll reserve them for later...

> Best wishes,
> Hitoshi
> -----Original Message-----
> From: Stephen P. King <stephenk1@home.com>
> To: Time List <time@kitada.com>
> Date: Thursday, March 18, 1999 1:19 AM
> Subject: [time 27] Turing machines under difeomorphisms?
> >Hi all,
> >
> > A quick note: Can we think of Turing Machines as existing under the
> >diffeomorphism invariance of GR, as Hitoshi explains it? It seems to me
> >that the same conditions that prohibit clocks from being definable also
> >imply that "a infinite length tape divided into square cells..." can
> >neither be read or written to by a "head which can be in in a finite
> >number of internal states" iff those cells are infinitesimal invariant
> >under diffeomorphism transformations!
> >
> >cf. "Returning to the origin of the problem, i.e. to the idea of
> >relativity theories, a cause of the problem of time seems to lie in associating time to each point
> >which has no positive size. No clocks can reside in a sizeless point. At the stage of
> >special theory of relativity, this difficulty does not appear: Time is associated to each
> >inertial frame which can accommodate actual clocks. At the stage of general theory of
> >relativity, the field equation with the invariance postulate with respect to
> >diffeomorphisms requires one to eliminate the size of the frames in which clocks reside."
> >
> >http://www.kitada.com/

Onward to the Unknown!,


This archive was generated by hypermail 2.0b3 on Sat Oct 16 1999 - 00:29:45 JST