[time 402] Re: [time 400] On the Problem of Information Flow between LSs

Stephen P. King (stephenk1@home.com)
Fri, 11 Jun 1999 15:00:08 -0400

Dear Matti,

Matti Pitkanen wrote:
> On Wed, 9 Jun 1999, Stephen P. King wrote:
> > Dear Matti and Friends,
> >
> > In [time 395] Constructing spacetimes, Matti wrote:
> >
> > "There is also problem about information flow between different
> > LS:s. How can one define information current between LS:s if
> > these systems correspond to 'different spacetimes'?"
> >
> > There is much to be discussed here!
> >
> > If I am correct, "current" is defined as some quantity of change
> > occurring through a boundary of some sort.
> > (http://www.whatis.com/current.htm) It is usually assumed that some
> > particle or fluid is being transferred from one location to another and
> > the term "density" is associated with "current per unit cross-sectional
> > area". So we are thinking of the concepts: "flow", "boundary",
> > "information", "different space-times", and "particle".
> Yes. This definition works also in infinite dimensional case.

        Ok, we should try to use this definition, but we should consider the
tacit assumptions that it brings.
> > We need definitions that are mutually consistent. I am proposing to
> > use graph-theoretic concepts since we can easily generalize them to
> > continua:
> >
> > http://hissa.nist.gov/~black/CRCDict/termsArea.html#search
> >
> > Flow: "A measure of the maximum weight along paths in a weighted,
> > directed graph" We could consider the "weight" as the degree to which a
> > given edge connects a pair of vertices, e.g. if a pair of vertices are
> > identical relative to their possible labelings the weight is 1, the
> > weight is 0 if their respective sets of labels are disjoint. (When
> > considering spinors as labels of the vertices we use alternative
> > notions.)
> >
> > http://hissa.nist.gov/~black/CRCDict/HTML/flow.html
> >
> > Boundary: I can not find a concise definition so I will propose a
> > tentative one: the boundary of a graph B{G} is the minimum set of
> > vertices |V_G| that have incident edges connecting a pair of
> > points, one of which is an element of ~{G} and the other of which is
> > an element of {G}; where {G} and ~{G} are a graph and its complement.
> > I am not sure that this notion is appropriate. :( I am thinking of the
> > way which traditional set theory defines a boundary of a set: "a point
> > is in the boundary of a set iff every neighborhood of the point
> > intersects both the set and its complement". So the boundary of a set
> > is the set of these points. It looks like the only element involved
> > would be the empty set {0} in the usual way of thinking of sets in the
> > binary logical sense; this relates to my discussion of the Hausdorff
> > property...
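
	The quoted flow and boundary definitions can be sketched concretely.
A minimal Python illustration, where the example graph, the label sets,
and the particular Jaccard-style interpolation between weight 1
(identical labelings) and weight 0 (disjoint labelings) are my own
invented assumptions, not anything fixed in the discussion:

```python
# Sketch of the proposed edge "weight" (1 for identical label sets, 0 for
# disjoint ones -- interpolated here by the Jaccard index, my own choice)
# and of the boundary B{G}: vertices of a subset with at least one edge
# crossing into the complement. The example graph is invented.

def label_weight(a, b):
    """1.0 when the label sets coincide, 0.0 when disjoint (Jaccard index)."""
    return len(a & b) / len(a | b)

def boundary(edges, subset):
    """Vertices of `subset` incident to an edge whose other end is outside."""
    subset = set(subset)
    b = set()
    for u, v in edges:
        if u in subset and v not in subset:
            b.add(u)
        elif v in subset and u not in subset:
            b.add(v)
    return b

print(label_weight({"x", "y"}, {"x", "y"}))  # -> 1.0
print(label_weight({"x"}, {"y"}))            # -> 0.0
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
print(sorted(boundary(edges, {1, 2, 3})))    # -> [3]
```

The boundary here is the graph analogue of "every neighborhood meets
both the set and its complement": only vertex 3 touches both sides.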
> >
> Does this approach generalize simplicial cohomology? Simplicial complex
> defines homology groups. It has simplices up to dimension D if simplicial
> complex is D-dimensional. One can consider functions on the set of
> simplices of given dimension. One can define co-exactness and
> co-closedness and cohomology groups. Info current would be a function in
> the set of D-1 dimensional simplices. Info current would in general
> correspond to an element of cohomology which is not coclosed. This makes
> sense only in ordinary topology defined by norm but you are talking about
> non-Hausdorff property.

	I think of it as a crude method to think about simplicial cohomology! :)
I do believe that cohomology theory is the best place to look for the
tools we need to build a model of interactions! :)
	The problem is that simplexes (or more generally, complexes) are
"static" objects, e.g. "relational structures", so for the modeling of
dynamics or any "updating" of the local structures I think that there
are several options. I have been looking into "periodic gossiping" as a
basic notion to build upon, but I need assistance with the mathematics
involved. :)
	One aspect that I particularly like about what you are saying is that
the fundamental dualities that were needed to construct Chu transforms
among posets of observations ("observers") are already built into
simplicial cohomology, as manifested by the "co-exactness" and
"co-closedness" properties. I highly recommend that you read over the
papers that Pratt has on his site, starting with the ones linked from the
heading! :)
	We need to start putting some of the jigsaw puzzle pieces together
before we lose track of the big picture. This will help illustrate how
the non-Hausdorffness of posets of observations occurs.
	The basic idea there is that given n >= 2 observers having an
observable that they can communicate effectively about (the idea of
information flow), there will be at least one singleton subset of the
posets of the n observers that is not disjoint. Thus the definition of
Hausdorffness is weakened... We do need to discuss this further as I am
not sure that my words are proper. :)
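
	A toy version of that overlap condition, where the observers and
their observation sets are entirely invented for illustration:

```python
# Toy version of the overlap condition: two observers who can communicate
# effectively about an observable have observation sets that are not
# disjoint, so at least one singleton element is shared. All names here
# are invented examples, not part of the actual construction.
observers = {
    "A": {"spin_up", "clock_tick"},
    "B": {"spin_up", "meter_reading"},
}
shared = set.intersection(*observers.values())
print(shared)  # -> {'spin_up'}
```

The shared singleton is what prevents the two observers' "points" from
being separated into disjoint neighborhoods, which is how Hausdorffness
gets weakened.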
> > Information: Now here is the key problem: How to define "information"!
> > What is Information? Is it "meaning" as in "the semantic content of a
> > pattern of matter/energy"? Is it the bits that are recovered when a
> > string of bits is encoded or compressed by some scheme and then decoded
> > or decompressed by the scheme's inverse? Is it the value of a quantity
> > present at some arbitrary point?

> Very stimulating questions! While visiting at your homepage I
> realized for the first time how many times 'information' appeared
> there. For some mysterious reason I have managed to circumvent the
> challenge of defining this concept until now. Perhaps my
> strong opinions about computationalism explain this(;-).

        I understand how that can happen. :) I would like to better understand
what your ideas about computationalism are so that I can explain my
thinking better. :)
> a) I think that it is meaningful to talk about 'meaning'
> only if one talks about *conscious* information. OK?

        Yes. :) But, I see consciousness from a generic point of view and not
restricted to people. This sounds a bit like panpsychism, I know, but if
there is to be a scientific instead of mystical explanation of
measurement (see discussions of "Wigner's Friend" and Schroedinger's Cat
http://www.npl.washington.edu/tiqm/TI_40.html#4.3), we need to think of
measurement as "objective". I find Penrose's work to be very inspiring
toward this end! :)
> b) It is certainly impossible to characterize conscious experience
> by a bit sequence. This was one of the reasons I have been very
> skeptical about 'information content of cs experience'.

	Yes! This impossibility is part of Penrose's program and is wonderfully
fixed by Peter Wegner and Vaughan Pratt's work! Thus I urge all to study
their papers!
> One could however circumvent this problem! Conscious information could be
> defined as *difference* of informations associated with initial and
> final states of quantum jump. This would reduce problem to that of
> associating information measure to quantum states! Since quantum states
> correspond to a well defined geometric objects there are hopes of
> associating information measures with them! This looks a clever
> trick to me at least!(;-)

	Yes! "Conscious information could be defined as *difference* of
informations associated with initial and final states of quantum jump."
But notice that this involves some very subtle situations. There is the
matter of the compressibility of the information, such that there is a
relationship between the "reducibility" of the number of bits needed
(assuming a binary message for simplicity) to communicate a given
message and the ability to predict what the message says before it is
"read". This implies a "strong duality" (sic) between data compression
and "gambling"!
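
	That duality can be seen numerically: the ideal codeword length for a
symbol of probability p is -log2(p) bits, so the expected length equals
the Shannon entropy, and a message that is easy to predict is also cheap
to encode. A small sketch, where the two example distributions are my
own invention:

```python
import math

# Expected ideal code length (= Shannon entropy, bits per symbol): a
# predictable source compresses to fewer bits than an unpredictable one,
# which is the compression/"gambling" link discussed in Cover & Thomas.

def entropy(dist):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

predictable = {"a": 0.9, "b": 0.1}    # easy to guess before it is "read"
unpredictable = {"a": 0.5, "b": 0.5}  # a fair coin: one full bit/symbol

print(round(entropy(predictable), 3))  # -> 0.469
print(entropy(unpredictable))          # -> 1.0
```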
        Please read "Elements of Information Theory" by Thomas M. Cover and Joy
A. Thomas, Wiley-Interscience Pub. 1991, pg.136-143. The entire book is
very good!
	I also recommend the paper by Brody and Hughston "Geometric models for
Quantum Statistical Inference" pg. 265-276, in The Geometric Universe,
edited by Huggett, Mason, Tod, Tsou and Woodhouse. It arrives at
conclusions very similar to those of Frieden and, apparently, is
independent! The authors derive geometry instead of Lagrangians... But
it is easy to see that they are dual!
        These should help us realize this "clever trick"! :)
> c) One can probably define several types of informations associated
> with configuration space spinor field and assign
> information measures to them. Perhaps one must give up the idea
> about single information measure. Perhaps the essential question
> is 'About what the information is about' and each question gives
> different measure of information.

	Silly question: Do you mean "spinor field configuration space" when you
say "configuration space spinor field"? There is a difference... The
former is the space of configurations of a spinor field and the latter
is something I do not understand. I know that you are using complex
projective (hyper)planes as part of the geometry of p-adic TGD, so maybe
the latter involves mapping or identifying the configurations of
particles to a spinor field?
> d) I realized that the information associated with configuration space
> spinor field, about which I talked in previous postings,
> is essentially *information about position in configuration space*
> plus information about spin degrees of freedom relative to the
> ground state which corresponds to Fock vacuum and contains no information.

        No "information in it-self", yes. :) "There can be no observations of
self without mirrors." But, this "Fock vacuum", what is it?

> Information is defined as the information gain involved in total
> localization of the configuration space
> spinor field to single point in Fock vacuum state. Single 3-surface
> in configuration space in Fock vacuum state is selected and Shannon
> formula defines the information gain. Same works for Schrodinger
> amplitude in nonrelativistic situation.

	This is a maximization of negentropy or minimization of entropy, but
what kind of entropy? Mutual or "cross" entropy? Entropy, most
basically, is a measure of the equilibrium or "equivalence" between the
"parts" of a system, so there are subtleties ... If all subsets of a
system's configuration (or phase space?) are identical, the system is at
equilibrium. I think that Hitoshi's bound states are quantum mechanical
versions of this. One idea that I have is that the notion of space-time,
in the sense of distance or duration, is undefinable in such conditions!
Thus the Totality level Universe U^T has no associated "space" or "time".
	"Improper" or finite (and thus distinguishable) subsets of U^T can have
space-time associated because they are in a state of "disequilibrium"
with at least one other improper subset of U^T.
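
	One candidate for making "disequilibrium between parts" quantitative
is relative entropy (KL divergence), which vanishes exactly when two
distributions coincide. This is only my suggestion for the "what kind of
entropy?" question, and the example distributions are invented:

```python
import math

# Relative entropy D(p||q) in bits: zero iff the two distributions
# coincide (equilibrium), positive otherwise -- one way to quantify the
# "disequilibrium" between two parts of a system.

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

equilibrium = [0.5, 0.5]
skewed = [0.9, 0.1]

print(kl(equilibrium, equilibrium))       # -> 0.0 (identical "parts")
print(round(kl(skewed, equilibrium), 3))  # -> 0.531
```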
> Critical Question: Does the information of the configuration space
> spinor field provided about the position in configuration space
> of 3-surfaces provide information about configuration space
> and spacetime geometry? There are hopes since spin degrees of freedom
> (which correspond to fermionic degrees of freedom in infinite dimensional
> context) are involved and entanglement is associated with these degrees of
> freedom. Recall that fermions describe 'reflective level of cs' in TGD
> approach to cs (Fock state basis has interpretation as Boolean algebra).

        Look at the role that Boolean algebras play in Chu spaces! :)
> e) You mentioned bit counting as a possible manner to define information.
> Interesting possibility is that real to p-adic correspondence
> could provide measure for the information content of the configuration
> space spinor field based on counting of bits, or actually pinary digits.

	Very interesting! :) Think about this:

"Let p(x) ... be a probability density function on the real number line
[R^1], thus satisfying 0 <= p(x) <= 1 and \int p(x) dx = 1. If we take
the square-root density \eta(x) \equiv sqrt(p(x)), then \int \eta^2
dx = 1 and we can regard \eta as a point on the unit sphere S in a real
Hilbert space H. If \rho(x) is another such square-root density
function, then we can define a 'distance' function D(\eta, \rho) in H
for the two distributions corresponding to \eta(x) and \rho(x) by

	D^2(\eta, \rho) = 1/2 \int [\eta(x) - \rho(x)]^2 dx . (2.1)

In this case the function D(\eta, \rho), known as the 'Hellinger
distance', is evidently just the sine of the 'angle' made between the two
Hilbert space vectors \eta and \rho." pg. 266, ibid., Geometric Models
for Quantum Statistical Inference ...
        What would be the p-adic version of this?! :)
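
	Before asking for the p-adic version, the real formula is easy to
check numerically. A sketch on a finite grid, where the two Gaussian
densities and the grid itself are my own invented illustration:

```python
import math

# Discretized check of D^2(eta, rho) = 1/2 * integral of (eta - rho)^2 dx
# for square-root densities eta = sqrt(p). The Gaussians and the grid
# are invented examples, not from the Brody-Hughston paper.

def sqrt_density(mu, sigma, xs):
    """eta(x) = sqrt(p(x)) for a normal density with mean mu, std sigma."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return [math.sqrt(norm * math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)))
            for x in xs]

def hellinger(eta, rho, dx):
    """Hellinger distance via the quoted formula on a uniform grid."""
    return math.sqrt(0.5 * sum((e - r) ** 2 for e, r in zip(eta, rho)) * dx)

dx = 0.01
xs = [i * dx for i in range(-1000, 1001)]  # grid covering [-10, 10]
eta = sqrt_density(0.0, 1.0, xs)
rho = sqrt_density(1.0, 1.0, xs)

print(hellinger(eta, eta, dx))            # -> 0.0 (identical densities)
print(round(hellinger(eta, rho, dx), 3))  # close to sqrt(1 - exp(-1/8))
```

For two unit Gaussians a mean apart, the exact value is
sqrt(1 - exp(-1/8)), so the grid sum can be checked against it.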

> i) Pinary cutoffs of configuration space spinor field provide a
> sequence of more and more accurate discretization of configuration
> space spinor field.

        Umm, so this would be like saying that the 'mesh' size (graining) of
the resolution of observations is given by the pinary cutoffs. Look at
how Hitoshi defines the uncertainty principle in his papers!
> ii) The mapping of real configuration space spinor field to its
> p-adic counterpart involves *minimal* pinary cutoff for which
> continuation to smooth p-adic configuration space spinor
> field is possible. Minimal pinary cutoff comes from
> the requirement that the canonical image of the pinary cutoff allows
> continuation to a *smooth* p-adic configuration space spinor field.
> If pinary cutoff of the canonical image is too
> detailed, completion is not possible.
> iii) There would be thus some number N of pinary digits and
> I(X^3) =N(X^3)
> would serve as a measure for the information contained by
> the value of the configuration space spinor field at given point of
> configuration space.

        So are you saying that the measure of information (entropy) would
depend on the value of the pinary cutoff?
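
	A minimal sketch of what a "pinary cutoff" could look like for a
single real value: truncating the base-p expansion after n digits. The
prime p and the example numbers are my own invented illustration of
base-p truncation, not TGD's actual construction:

```python
# Truncating the base-p ("pinary") expansion of x in [0, 1) after n
# digits gives a finite-resolution discretization: each extra digit
# refines the mesh by a factor of p. p = 3 and x = 0.5 are invented.

def pinary_digits(x, p, n):
    """First n base-p digits of x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= p
        d = int(x)
        digits.append(d)
        x -= d
    return digits

print(pinary_digits(0.5, 3, 4))   # -> [1, 1, 1, 1]  (0.5 = 0.111..._3)
print(pinary_digits(0.25, 2, 3))  # -> [0, 1, 0]     (0.25 = 0.01_2)
```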
> iv) One could define the total information contained by configuration
> space spinor field as sum of informations associated with
> discretized configuration space.
> N= SUM_i N(X^3_i).
> This number is infinite as real integer but *finite as p-adic number*!
> Real information is obtained as the canonical image of I
> and would be finite. Higher pinary digits would
> not be given such importance as low pinary digits in this
> information measure. This is indeed very reasonable: lowest pinary
> digits contain the essential and the rest is just details.

        The Hausdorff dimension used to define the non-integer dimensionality
of a fractal looks like this! :)
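
	The weighting Matti describes (low pinary digits essential, higher
digits mere detail) is what a canonical identification map
sum_n x_n p^n -> sum_n x_n p^(-n) produces: a huge ordinary integer
gets a small, finite real image. A sketch, with the prime and the digit
string invented for illustration:

```python
# Canonical identification: the integer sum_n x_n * p^n is mapped to the
# real number sum_n x_n * p^(-n), so higher pinary digits are suppressed
# and even a very large integer has a finite real image.
# The prime p = 3 and the example integer are invented.

def canonical_image(n, p):
    """Real image of a non-negative integer n under canonical identification."""
    image, weight = 0.0, 1.0
    while n:
        n, digit = divmod(n, p)   # peel off the lowest base-p digit
        image += digit * weight   # lowest digits carry the largest weight
        weight /= p
    return image

# digits [1, 2, 1] in base 3, i.e. 1 + 2*3 + 1*9 = 16:
print(canonical_image(16, 3))  # -> 1 + 2/3 + 1/9 = 1.777...
```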
> Note: the value of p-adic prime associated with entire universe is
> very probably infinite so that N is probably infinite as
> ordinary integer still. Note that infinities can cancel
> in info content of cs experience defined as difference.

	The value of the p-adic prime associated with U^T is infinite with a
probability of 1! But, the cardinality of this infinity is, I believe,
related to Chaitin's \Omega!
> v) This information is obviously information about the construction of
> the p-adic counterpart of configuration space spinor field from
> its real counterpart by canonical identification mapping. Is this
> information given by conscious experience? Perhaps! Conscious
> experience always involves coarse roughening: higher pinary digits
> do not have same importance as lower pinary digits. Conscious experience
> forms abstractions. So, perhaps the contents of conscious experience
> involve essentially the coarse roughening involved with reals to p-adics
> map?

        I do think so! The finiteness of conscious experiences seems to
indicate this! :)
> f) All geometric structures of real quantum TGD
> are mapped to their p-adic counterparts using phase preserving canonical
> identification map with minimal pinary cutoff.

	But is this "phase" restricted to being on an S^2 disk? Could it be the
"phase" on a higher dimensional sphere S^n? We might get a situation in
which there is more than one unitary transformation of phases, much
like there is more than one S^2 slice of S^n?
> i) This approach might work also at spacetime level
> for spinor fields defined on spacetime surface. To each
> time=constant section of spacetime surface one could associate
> information I in similar manner and pinary cutoff
> would provide the discretization of 3-surface making it possible
> to define total information as sum over informations associated
> with the points of X^3. Canonical image would define real
> information which would be finite.

        This idea makes sense. We normally associate a space-like hypersurface
with the set of entangled states of a quantum mechanical system [I think
:) ] and can think of it as a moment of consciousness...
> ii) The mapping of real spacetime surface to its p-adic
> counterpart involves this map and one can assign the real counterpart of
> p-adic integer N of pinary digits to each point of real spacetime
> surface as its information content. Again also total
> information content could be defined as sum for the
> minimal pinary cutoff of spacetime surface.
> I think I must stop here.
> Best,
> Matti Pitkanen

        Do you have comments on the rest below?
> > Different space-times: This statement implies a plurality, a multitude
> > of configurations of distinguishable particles such that a basis of
> > three orthogonal directions is definable in conjunction with a dynamic
> > that alters the configurations in a uniform way.
> >
> > Particle: An entity that in a given reference frame or framing is
> > indivisible. It should not be assumed that an entity that is indivisible
> > in one framing need be so in another framing. I am thinking of a framing
> > as a finite context or environment that acts as a "contrast" for the
> > entity in question.
> >
> > The problem I see right away is that information is not a
> > substance in
> > the normal sense, since it has the properties of compressibility and,
> > according to Bart Kosko, irrotability, which are in contrast with those
> > properties of matter, which is usually incompressible and rotatable....
> >
> > But, I think that Peter's notions are the most relevant to this
> > conversation of "information flows" between LS, so we need a way of
> > bridging between the formalism of graph theory and the formalisms used
> > in Peter's papers.
> >
> > We'll take that up after some discussion. :)
> >
> > Later,
> >
> > Stephen



This archive was generated by hypermail 2.0b3 on Sat Oct 16 1999 - 00:36:05 JST