[time 292] Re: [time 291] Re: Expressiveness of interactive computing

Stephen P. King (stephenk1@home.com)
Sun, 09 May 1999 11:04:39 -0400

Dear Peter,

        This result goes toward my claim that Local Systems interactively
construct their space-times! :) I have some clarification questions and
comments interspersed below.

Peter Wegner wrote:
> Here is a discussion of expressiveness of interactive computing which you may wish to comment on.
> The idea of defining expressiveness of finite computing agents relative to their observers (environments) rather than in terms of an absolute notion of transformation power captures the idea that expressiveness is meaningful only if it can be exercised in their environments.
> peter
> Expressiveness of Interactive Computing Agents
> The expressiveness of finite computing agents is a relative notion, defined relative to an environment. It extends the algorithmic notion of expressiveness, defined in terms of transformation power, to a notion that measures agents' ability to distinguish their environments. The extended notion reduces to the algorithmic notion for environments that are TMs, but differs for computing agents whose environments are SIMs or the physical world W. In particular, the extended notion is applicable to computing agents that aim to model the physical world.
> Given two finite agents M1, M2, their behavior BX(M1), BX(M2) is distinguishable relative to an environment (observer, tester) X if the exclusive-or BX(M1) XOR BX(M2) is nonempty. Members of this set are called distinguishability certificates. Finite agents are equivalent for X if their behavior cannot be distinguished by a distinguishability certificate. Agents equivalent relative to a restricted environment X1 may be distinguishable relative to a more complex environment X2, just as for people.
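The XOR construction above can be made concrete. A minimal Python sketch, assuming behaviors BX(M) are finite sets of observations (the text leaves them abstract, and all function names here are illustrative):

```python
# Sketch: behaviors encoded as finite sets of observations (an assumption;
# the text treats BX(M) abstractly).

def distinguishability_certificates(bx_m1, bx_m2):
    """BX(M1) XOR BX(M2): observations made on exactly one of the machines."""
    return bx_m1 ^ bx_m2  # symmetric difference of sets

def equivalent_for(bx_m1, bx_m2):
    """M1, M2 are equivalent for observer X iff no certificate exists."""
    return not distinguishability_certificates(bx_m1, bx_m2)

# A restricted observer X1 elicits fewer observations than a richer X2:
bx1_m1, bx1_m2 = {"a", "b"}, {"a", "b"}            # equivalent under X1
bx2_m1, bx2_m2 = {"a", "b", "c"}, {"a", "b", "d"}  # distinguishable under X2

assert equivalent_for(bx1_m1, bx1_m2)
assert distinguishability_certificates(bx2_m1, bx2_m2) == {"c", "d"}
```

The symmetric difference is empty exactly when the two behaviors coincide for X, matching the definition of equivalence.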

        What is the measure of the information contained in the
distinguishability certificates (DC), e.g. how do we quantitatively
represent the information content of DCs?
> Expressiveness of finite computing agents, formally defined by observation equivalence, measures their ability to make observational distinctions about their environment or equivalently the ability of external observers to distinguish agent behavior [WG].
> Observation equivalence: A class CS of systems and a class CE of environments determines an observation-equivalence relation E(CS, CE) such that systems S, S' in CS are equivalent iff they are indistinguishable for all e in CE. Environments e, e' in CE are equivalent if they are indistinguishable for all S in CS.
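The definition of E(CS, CE) can be sketched directly, assuming observed behavior is given by a finite lookup table; `observe`, the toy systems S1-S3, and environments e1-e2 are illustrative, not from the text:

```python
# Sketch of the observation-equivalence relation E(CS, CE).

def systems_equivalent(s1, s2, CE, observe):
    """S, S' in CS are equivalent iff indistinguishable for all e in CE."""
    return all(observe(s1, e) == observe(s2, e) for e in CE)

def equivalence_classes(CS, CE, observe):
    """Partition CS by the observation-equivalence relation E(CS, CE)."""
    classes = []
    for s in CS:
        for cls in classes:
            if systems_equivalent(s, cls[0], CE, observe):
                cls.append(s)
                break
        else:
            classes.append([s])
    return classes

# S1 and S2 react identically in every environment; S3 differs on e2:
table = {("S1", "e1"): 0, ("S1", "e2"): 1,
         ("S2", "e1"): 0, ("S2", "e2"): 1,
         ("S3", "e1"): 0, ("S3", "e2"): 0}
classes = equivalence_classes(["S1", "S2", "S3"], ["e1", "e2"],
                              lambda s, e: table[(s, e)])
assert classes == [["S1", "S2"], ["S3"]]
```

The dual partition on environments (e equivalent to e' iff indistinguishable for all S in CS) is obtained by swapping the roles of the two arguments.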

	Is there a limit on the cardinality of the CS and CE classes?
> Machines induce an equivalence relation on an environment class CE such that machine M1 is more expressive than M2 if the equivalence relation E(M1, CE) induced by M1 on CE is strictly finer than E(M2, CE). Greater expressiveness of machines implies greater distinguishability among environments, which in turn means that more expressive machines can solve larger classes of computational problems.
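The refinement ordering on induced equivalence relations can be sketched as partitions of CE; the `react` interface and all names below are assumptions:

```python
# Sketch: a machine induces a partition on CE by grouping environments it
# reacts to identically; M1 is more expressive than M2 if M1's partition
# strictly refines M2's.

def induced_partition(react, CE):
    """Group environments the machine cannot tell apart."""
    groups = {}
    for e in CE:
        groups.setdefault(react(e), []).append(e)
    return [frozenset(g) for g in groups.values()]

def strictly_finer(p1, p2):
    """Every block of p1 fits inside a block of p2, and p1 != p2."""
    refines = all(any(b1 <= b2 for b2 in p2) for b1 in p1)
    return refines and set(p1) != set(p2)

CE = ["e1", "e2", "e3"]
m1 = lambda e: e   # distinguishes every environment
m2 = lambda e: 0   # distinguishes none
assert strictly_finer(induced_partition(m1, CE), induced_partition(m2, CE))
```

A strictly finer partition means more environments can be told apart, which is the sense in which the more expressive machine solves a larger class of problems.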

        Could we extend this into discussions of P=NP, in terms of quantifying
the ratio of expressiveness of a given pair of IMs?
> Environments of computing agents can be machines or the physical world. The following environment classes for finite computing agents are progressively more demanding, specifying progressively larger classes of interactions that agents can perform:
> the class of Turing machines TM (interaction specified by I/O pairs)
> the class of SIMs (interaction specified by I/O streams)
> the class of MIMs (interaction specified by multiple I/O streams)
> the physical world W (interaction at least as demanding as MIMs)
> Behavior of agents in a TM environment is expressed by input-output pairs, while behavior in a SIM environment is expressed by sequences of observations (I/O streams) that allow more finely grained distinctions among behaviors than single observations. Nonequivalence of two SIMs can be detected by a finite distinguishability certificate (observation sequence), but equivalence cannot be finitely detected.
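This asymmetry (nonequivalence finitely detectable, equivalence not) amounts to a semi-decision procedure. A hedged sketch, assuming SIMs are encoded as functions from input streams to output streams (an illustrative interface, not the text's formalism):

```python
# Sketch: search for a finite distinguishability certificate by enumerating
# input streams of growing length.
from itertools import product

def find_certificate(sim1, sim2, alphabet, max_len):
    """Return a finite observation sequence on which the SIMs differ, if one
    exists up to max_len. Returning None says nothing about equivalence --
    the search can only be cut off, never completed."""
    for n in range(1, max_len + 1):
        for stream in product(alphabet, repeat=n):
            if sim1(list(stream)) != sim2(list(stream)):
                return list(stream)
    return None

# Two machines that agree on short streams but diverge at length 3:
echo = lambda xs: xs
sticky = lambda xs: xs if len(xs) < 3 else xs[:2] + [xs[2] or 1]
cert = find_certificate(echo, sticky, [0, 1], 5)
assert len(cert) == 3
```

A certificate, once found, is a finite proof of nonequivalence; absence of one within any bound proves nothing, which is the falsification asymmetry noted below.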

        I see this last as an expression of how a given theory can only be
falsified, never proven. ;)
> Actually observed behavior BX(S) of a system S and an environment (observer) X is the intersection of system and observer behavior. Sequential interaction can be modeled as a producer/consumer relation, where X is a producer of behavior, S is a consumer of behavior, and the observed behavior BX(S) is the smaller of produced and consumed behavior. If observers or testers of an environment are TMs, then observed behaviors of any class of systems, such as interactive computing agents or the real world, will be limited to TMs.
> When the environment with which a finite agent interacts is a TM, then behavior richer than a TM cannot be detected, just as the intelligence of a human agent cannot be detected if the agent is never given tasks that require intelligence.

        This seems to speak to the limitations on observables given an
experimental apparatus. We see in QM that "what can be seen depends on
how one looks".
> Since computational models aim to express properties of real-world environments, W is the most demanding of the environments considered here. Interaction machines that interact with the real world W have the property that E(SIM, W) is a finer equivalence relation on the real world than E(TM, W). SIMs permit a finer-granularity equivalence relation for real-world environments than TMs, and MIMs permit a finer-granularity relation than SIMs. Agents that model behavior of real-world environments W impose an expressiveness hierarchy such that TMs, SIMs, and MIMs are progressively more expressive finite agents, able to make progressively finer distinctions about the world and solve larger classes of problems.
> BW(TMs) < BW(SIMs) < BW(MIMs)
> However, for environments that are TMs, the equivalence relation induced by SIMs is no finer than that induced by TMs, while for SIM environments, MIMs are no more expressive than SIMs. To observe a given level of expressiveness of a finite agent it is necessary to have a testing environment with at least that expressiveness. Failure to observe that computers have greater expressive power than TMs was due in large measure to the fact that observation environments (testing environments) were restricted to be TMs.

        We see this occurring in physics in the assumptions of unique
initiality and/or finality of time for all E(CS, CE) since they assume
that the *complete* class of CS has as a *unique* support a *unique*
class of CE and that such is a priori!
> Let SIMk be the class of SIMs restricted to no more than k interactions. We can show that:
> For all k>0, BW(SIMk) < BW(SIMk+1); and BW(TM) = BW(SIM1)
> The distinctions that can be made for real world environments, or environments with at least the expressive power of SIMs, are greater for SIMk+1 than for SIMk, and TMs are at the bottom of this hierarchy. The behaviors BW(SIMk) of SIMs that can make k observations determine an infinite expressiveness hierarchy with TMs at the bottom (corresponding to behavior BW(SIM1)).
> This may be proved by example. Consider an environment of SIMs with buffers of length k that buffer the output of their first k-1 interactions and disgorge their first output only on the kth interaction. For this environment, machines in the class SIMk can distinguish machines with buffers of length k by distinguishability certificates of length k, but cannot distinguish among machines with buffers of length k+1. As a second example, consider an environment of machines Mk with binary inputs that print a 1 whenever the last k interactive inputs are 1 and a 0 otherwise. The shortest distinguishability certificate between Mk+1 and Mk has length k+1. Note that SIMk must have a memory of at least size log(k) to track distinguishability certificates of length k, so that the state for the classes SIMk is not bounded.
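The second example can be sketched concretely. The encoding below is an assumption (the text fixes no interface for Mk); it shows one stream on which M2 and M3 differ while shorter streams of all-1 inputs up to length 1 do not separate them:

```python
# Sketch of the machines Mk: binary inputs, output 1 exactly when the last
# k interactive inputs were all 1.

def make_Mk(k):
    def run(inputs):
        out, last = [], []
        for x in inputs:
            last = (last + [x])[-k:]   # a window of size k ~ log2(k) bits of state per symbol tracked
            out.append(1 if len(last) == k and all(last) else 0)
        return out
    return run

M2, M3 = make_Mk(2), make_Mk(3)
assert M2([1]) == M3([1])              # a single interaction cannot separate them
assert M2([1, 1, 1]) != M3([1, 1, 1])  # a finite certificate separating M2 and M3
```

The certificate is finite, as the text requires, while confirming equivalence would demand checking unboundedly many streams.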

        This last gets to my question of the information content of DCs. :) How
would we generically model the "memory" of a SIMk and a MIMk (if there
is such a thing ;) )? Can we speak to thermodynamic and Shannon entropy
in terms of memory "size"?
> SIMs model interactive question answering. The expressiveness hierarchy for SIMs shows that on-line questioning that makes use of follow-up questioning has greater expressiveness than off-line questioning. Ken Starr can obtain more information about questioned subjects (Clinton) by interactive than noninteractive questioning because later questions can use information not available to noninteractive questioners. Strategies that permit k+1 interactive questions can elicit more information (about Clinton) than k questions, assuming that Clinton has at least the expressive power of a SIM. The strategy used by the Senate of asking a list of 81 predetermined questions was predictably ineffective because it is an off-line strategy that corresponds to asking a single complex question with many parts.

        Is it clear that interactive question answering is such that it can't
be assumed to be an a priori observable? This, IMHO, speaks to how a
branching time formalism allows us to model a flowing time.
> Interaction during processes like driving, where looking out of the window can elicit information not available prior to the start of the journey, not only increases driving efficiency but allows a larger class of problems to be solved. When local computation of the next step of an interactive process is globally the correct thing to do, as is usually the case in driving, then NP-complete algorithmic tasks can be reduced to interactive tasks of polynomial (usually linear) complexity [We2].
> However, interaction not only dramatically reduces the complexity for this class of problems but increases the class of problems that can be solved to include temporal domains where complete information is not available prior to the computation. For domains determined by a static map, the problem of driving from A to B can be algorithmically specified but is more simply specified interactively, while dynamic domains with traffic conditions not specified in advance cannot be algorithmically specified, even in principle, and require interactive specification.
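The driving example can be sketched as an online process whose next step depends on information observed only at decision time. The road, lanes, and obstacle encoding below are all illustrative assumptions:

```python
# Sketch: interactive driving on a 1-D two-lane road. An offline planner
# would need the obstacle set in advance; the interactive driver queries
# the "window" (observe_blocked) one step at a time.

def interactive_drive(start, goal, observe_blocked):
    """One locally-chosen step per interaction; when the local step is
    globally correct, the whole task is linear in the road length."""
    pos, lane, steps = start, 0, []
    while pos < goal:
        if observe_blocked(pos + 1, lane):  # information unavailable offline
            lane = 1 - lane                 # swerve to the other lane
        pos += 1
        steps.append((pos, lane))
    return steps

# Obstacles decided by "traffic", unknown before the journey starts:
blocked = {(3, 0), (5, 1)}
route = interactive_drive(0, 6, lambda p, l: (p, l) in blocked)
assert len(route) == 6                          # linear, not combinatorial
assert all((p, l) not in blocked for p, l in route)
```

The obstacle set is consulted only through the observation function, modeling a dynamic domain that cannot be specified in advance.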
> Problems like protein folding are NP-complete as combinatorial optimization problems, but nature finds a way of naturally folding DNA strands so they always fold in the same way [Pa2]. We hypothesize that nature does not solve such problems algorithmically, but that it folds DNA strands by a robust interactive process having the property that local small interactive steps yield the same folded configuration for a large (robust) collection of initial configurations. Dynamic programming is an example of a process that yields global optima by sequences of local optimization steps. Its use as a practical model for learning reflects the fact that nature "prefers" processes, like evolution, whose local steps yield stable local optima.

	This is remarkably similar to what I am trying to communicate
with regard to LSs: that the interactions/communications among LSs
construct space-times. These are the mutually perceivable environments
within which observers (LSs) frame their perceptions. Al Martel and I
spoke yesterday afternoon about how the increase in knowledge content of
an LS may be related to the perception of an expanding space-time... And
perhaps cross-talk noise is the cause for the background black body
radiation.
	Thank you for this result! :)

Kindest regards,


This archive was generated by hypermail 2.0b3 on Sun Oct 17 1999 - 22:10:31 JST