[time 648] Re: [time 647] Worlds as Lexicon -> Possible Worlds of Kripke ...

Stephen P. King (stephenk1@home.com)
Tue, 31 Aug 1999 12:45:10 -0400

Dear Bill,

        Maybe this will help you with the Kripke thing:


        Personally, I don't get into the minutiae of modal logic; I just look at
the overall picture...

WDEshleman@aol.com wrote:
> In a message dated 8/30/99 9:08:43 AM Eastern Daylight Time,
> stephenk1@home.com writes:
> > Dear Bill,
> >
> > I have been thinking hard about your ideas! It occurred to me that you
> > may be really on to something. I am sorry to hear of Paul's plight. :-(
> > I do not earn a living from my philosophizing so I can risk being a
> > heretic, but we must remember Bruno!
> If given the chance, would you not recant? I suspect that if Bruno had
> recanted, he would also have relinquished his right to use printing
> presses. Perhaps, if you would give up your internet connection, all
> would be forgiven, and you would not have to be incinerated. ;-(
        Umm, :-)... My family is very religious, so I am a "black sheep"...
Recant, never!

> > WDEshleman@aol.com wrote:
> > >
> > > In a message dated 8/25/99 8:01:08 PM Eastern Daylight Time,
> > > stephenk1@home.com writes:
> > >
> > > > Are "close copies" a minority or majority? Well, the measure-
> > > > theoretic property of real-valued labeled worlds is "measure one"; in
> > > > other words, it is certain that if we picked a pair of worlds out of an
> > > > urn, they would look almost alike! The article by F.W. Meyerstein
> > > > (http://www.cs.auckland.ac.nz/CDMTCS//researchreports/089walter.pdf) is
> > > > a good discussion of this!
> > > >
> > > Stephen,
> > > I have read the paper now; I sure like short papers. At first, one may
> > > think that infinite sums are Lexicons, then it becomes apparent that
> > > infinite sums of infinite sums are really the Lexicons; that is if each
> > > inner sum represents the same information as the outer sum.
> > I like them little papers too, short and to the point...
> >
> > We could discover if this were true if we could compare the two sums to
> > each other, but simple isomorphism would not work since it is blind to
> > ordering. Umm, this looks like it requires cohomology testing, which
> > tests the topology to see if it is simply or multiply connected. The
> > former shows that the space of points (= sums) is commutative and has
> > "no holes" and the latter is non-commutative, as if the space has knots
> > or wormholes in it. (Oh, BTW, Pratt has something about that in his
> > papers!)
> Let's keep it simple, Stephen... The way to compare two sums (of positive
> numbers) is to normalize (average) each sum by dividing each term by the
> appropriate total sum. If the averaged terms are equal, or if a permutation of
> the averaged terms is equal, then the two sums are equivalent. E.g., 1 + 2 + 3 = 6
> is equivalent to 30 + 20 + 10 = 60, because the normalized terms are 1/6, 2/6,
> and 3/6 for both. It is their distributions that are equivalent. And the
> entropy (S) of each is S = -(1/6)*log(1/6) - (2/6)*log(2/6) - (3/6)*log(3/6);
> we may also say that the two sums are equivalent because their entropies are
> equal.

        Umm, this looks like saying the equivalence holds under multiplication by
a scalar? Like the face on a balloon: it remains a face under arbitrary
size changes that do not rupture the balloon...
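	Bill's normalization test is easy to check numerically. Here is a minimal Python sketch (my own illustration, not code from the thread):

```python
import math

def normalize(terms):
    # divide each term by the total sum; sort so permuted terms compare equal
    total = sum(terms)
    return sorted(t / total for t in terms)

def entropy(terms):
    # S = -sum(p * log p) over the normalized terms
    return -sum(p * math.log(p) for p in normalize(terms))

a = [1, 2, 3]     # sums to 6
b = [30, 20, 10]  # sums to 60
print(normalize(a) == normalize(b))          # same distribution
print(math.isclose(entropy(a), entropy(b)))  # same entropy
```

Both checks come out true: the two sums have the same distribution {1/6, 2/6, 3/6}, hence the same entropy.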
> Now, in the case of products (of positive numbers), equivalence is suggested
> by identical distributions of their normalized (averaged) logarithms. E.g.,
> 16 * 25 * 36 = 14400 is equivalent to 4 * 5 * 6 = 120. Note that I am not
> referring to bit strings possessing the same message; it is more analogous
> to the realization that a message of, say, 100 bits may be disguised as
> a 50 bit code that needs squaring to reveal the 100 bit message. And
> herein lies the source of "acceptable" differences (errors) between the
> original 100 bit message and the square of the 50 bit "equivalent" code... we
> must accept cases where the square of the 50 bit code is a bit or so in
> error. Like the worlds that differ by one photon as revealed by slit
> interferometers. Kind of like compression codes that don't unzip exactly.

        Umm, one key difference between fermions and bosons is compressibility.
Since compressibility is a key property of information, I do see a

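	The same comparison works for Bill's product example if we normalize logarithms instead of terms; a small Python sketch of my own:

```python
import math

def normalized_logs(factors):
    # normalize the logs of the factors by their total (the log of the product)
    logs = [math.log(f) for f in factors]
    total = sum(logs)
    return sorted(l / total for l in logs)

a = [16, 25, 36]  # product 14400
b = [4, 5, 6]     # product 120 = sqrt(14400)
same = all(math.isclose(x, y)
           for x, y in zip(normalized_logs(a), normalized_logs(b)))
print(same)  # the two products have identical log-distributions
```

Since log 16 = 2 log 4, log 25 = 2 log 5, and log 36 = 2 log 6, the factor of 2 cancels in the normalization, which is exactly the "squaring disguise" Bill describes.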
> > Since I am trying to advance the idea that our individual realities
> > (read space-times) are constructions, it follows that the topology of
> > such follows from some aspect of the process. More on this later. For a
> > preview read:
> >
> > http://math.ucr.edu/home/baez/week40.html
> > http://boole.stanford.edu/pub/ph94.ps.gz
> > http://boole.stanford.edu/pub/tppp.ps.gz
> >
> > > This is where infinite products surface (sums of sums are products).
> > > I had not thought of a product as a Lexicon, but it makes sense if
> > > a Lexicon must have near copies of itself coded into its parts.
> > > The infinite products of my study have parts (factors) that differ
> > > from the total product only by very small amounts of information.
> Bits, photons, what's the difference?

        Really, photons are defined as null rays connecting "source and sink".
I have a thought, a visual. Can we think of the "lateral" distance
between the null rays as the distance between the points on a surface
that the null rays "penetrate"? We can think of a light-cone as defined
by a balloon-like surface that "blows up" from a single point such that
the points on the surface maintain their relations. But there are
several properties that puzzle me: (a) There exists an infinite number of
points that can act as sources. (b) The relationships that occur on the
surface would exist in some way "on the point". (c) Surfaces can
interpenetrate, e.g. superpose, without interaction.
        I am thinking of information as being defined by the relations on the
surfaces, so by (a) there exist an infinite number of different
information structures. Also, by (b), I think of the "meaning" of the
information in terms of a graph (nodes and edges connecting them). This
is illustrated by thinking of a dictionary and considering word entries
as nodes and the "definition" as the adjoined nodes, e.g. the nodes that
that word has edges to. By (c), I see that the graphs can "share" nodes,
but not edges.
        Is this making sense? What I am really interested in is what happens
when we consider the dual of the graphs, e.g. change nodes to edges and
edges to nodes! Note that nodes do not have "directedness" but edges can!
This looks like what happens in a supersymmetry transform of Bosons <->
Fermions! Photons are "self-dual" in their spin direction but fermions
are not...
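	The dictionary-as-graph picture, and the node/edge dual, can be sketched in a few lines of Python. Reading the "dual" as the line graph (each edge becomes a node, two such nodes adjacent when the original edges share an endpoint) is my own interpretation, and the toy dictionary is invented for illustration:

```python
from itertools import combinations

# Toy dictionary: each word entry has edges to the words in its "definition".
dictionary = {
    "light": ["wave", "photon"],
    "photon": ["light", "quantum"],
    "wave": ["motion"],
}

edges = [(word, ref) for word, refs in dictionary.items() for ref in refs]

def line_graph(edges):
    # Dual graph: the original edges become nodes; two are adjacent
    # when they share an endpoint in the original graph.
    dual = {e: set() for e in edges}
    for e1, e2 in combinations(edges, 2):
        if set(e1) & set(e2):
            dual[e1].add(e2)
            dual[e2].add(e1)
    return dual

dual = line_graph(edges)
print(("light", "photon") in dual[("light", "wave")])  # share the node "light"
```

Note that the original edges can carry direction (word -> definition) while the nodes cannot, which is the asymmetry remarked on above.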

> > Have you thought about how the concept of neighborhood could be defined
> > using infinite products? Since I see everything visually, this
> > immediately popped up. :-) I noticed that the notion that "infinite
> > products of my study have parts (factors) that differ from the total
> > product only by very small amounts of information" would imply that
> > there is some variational principle derivable from the way that the
> > products differ in their information content. (I am seeing your ideas in
> > light of Pratt's!) Can we think of a metric (or ultrametric!) to measure
> > how far apart a pair of factors are?
> Ideally, the groups of factors are exactly equivalent or possess exactly
> the same distributions of logarithms, but finite evaluations by means of
> powers may have one bit errors. It is not so much how far apart the
> factors are, it is how far apart the distributions are.

        Yes, I agree! I see the discussion of Fisher Information by Frieden as
an attempt to get into the details of this! :-)

> > > I am interested to read some of your ideas on this Lexicon topic.
> > > To understand my approach to infinite products, all you need to
> > > know is some algebra and the simple rules of logarithms. What may
> > > appear complex at first is really quite simple in concept,
> > > accessible (probably) to philosophers and high school kids too. :-)
> > I am thinking of how the lexicons relate to states of Mind qua
> > information structures and how causality is defined in a "branching
> > time" or "lazy binding" way. This is important as it allows for
> > observers to "chance their mind", which I will argue is necessary and
> > sufficient to allow free will.
> I think of consciousness much the same as I do QM. Both mind and QM
> seem to have this mysterious ability to fill in gaps of knowledge of things
> with multiple "what ifs?" to arrive at opinions (truth?), or in the case of QM
> to reconstitute almost perfectly even though parts are lost or missing.

        I see a more direct link between mind and QM! Let us look at how the
"truth" value of proposition can be defined for QM. Pratt says that in
QM truth values have complex numerical values! What does that mean? Each
mind is defined in terms of "what-ifs" (to use your excellent term) that
connect a given set of stimuli and a given set of behaviors. The usual
computational formalism for this is in terms of "input-output" pairs.
(BTW, there is something weird about how to order complex numbers...)
        To relate this to the earlier notion of a ballooning surface: what acts
to connect the points on the surface to each other?
> > Pratt says in ratmech.ps pg.9 paragraph 8: "...we find that two events
> > or two states ... communicate with each other by interrogating all
> > entities [read worlds!] of the opposite type. Thus event a deduces that
> > it precedes event b not by broaching the matter with b directly, but
> > instead by consulting the record of every state to see if there is any
> > state volunteering a counterexample [a counterfactual of "has this been
> > experienced before"?]. When none is found, the precedence is
> > established. Conversely, when a Chu space is in state x and desires to
> > pass to state y, it inquires as to whether this would undo any event
> > that has already occurred. If not then the transition is allowed."
> Here are some of my thoughts about change pulled from my abstract.
> This proposal begins with the argument that the linear operators (x), that
> dictate the evolution of the state of an object, are themselves measured in
> present states (NOW), not past states (PAST). That is, if NOW = PAST + x *
> PAST, then NOW/PAST = 1 + x, a trivial result allowing all values of x. On
> the other hand, if it is realized that it is more logical and consistent that,
> NOW = PAST + x * NOW, then NOW/PAST = 1/(1 - x), a most interesting result
> that prevents x from achieving unity.

        Very neat! So is unity only approached asymptotically, in the limit
x -> 1? I am having a hard time with the math. :-(
(dyslexia sucks!)
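	Bill's recursion can be unwound numerically: substituting NOW = PAST + x * NOW into itself gives the geometric series PAST * (1 + x + x^2 + ...), which converges to PAST / (1 - x) only for |x| < 1, and this is why x cannot reach unity. A quick check (my own sketch):

```python
def now_over_past(x, terms=200):
    # partial sum of the geometric series 1 + x + x^2 + ...
    return sum(x**k for k in range(terms))

x = 0.5
print(now_over_past(x))   # approaches 2.0
print(1 / (1 - x))        # closed form: 1/(1 - x) = 2.0
```

As x approaches 1, the series diverges, so the ratio NOW/PAST blows up rather than ever letting x equal unity.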
> The above result is supported by the overwhelming evidence that the evolution
> of squared states according to Special Relativity require, NOW^2 = PAST^2 +
> (v^2/c^2) * NOW^2. That is, NOW/PAST = [ 1/(1 - v^2/c^2) ]^(1/2), the Lorentz
> factor.
        Ok, but what does that imply for distributions of velocities? E.g., can
we think of worlds in terms of different NOW/PAST pairs?
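	Bill's squared-state form can be verified directly: with beta = v/c, NOW^2 = PAST^2 + beta^2 * NOW^2 rearranges to the Lorentz factor. A quick numerical check (my own sketch):

```python
import math

def lorentz(beta):
    # NOW/PAST = 1 / sqrt(1 - v^2/c^2), with beta = v/c
    return 1 / math.sqrt(1 - beta**2)

beta = 0.6
now = lorentz(beta)  # with PAST = 1, this is ~1.25
print(math.isclose(now**2, 1 + beta**2 * now**2))  # True
```

Different velocities then give different NOW/PAST ratios, one per value of beta.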

> > Note the phrase "communicate with each other by interrogating all
> > entities of the opposite type" I added the "[other worlds]" since I am
> > trying to point out how causality is an active process and it is
> > necessary to consider all possible variations when considering even the
> > simple process of motion! In your thinking, Bill, what do you see as the
> > role or purpose of the infinite products?
> > You make a good point about "keep it simple!" :-)
> For now, I feel that "opposite types" are all possible reciprocal conjugates;
> ie. if we see (1 + x), the result is actually 1/(1 - x) ~ (1 + x) for small x.

        Umm, this is very interesting. I would like to understand your thinking
better! :-)
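	Bill's approximation 1/(1 - x) ~ (1 + x) is just the first-order truncation of the geometric series, so the discrepancy only appears at order x^2. A quick check (my own sketch):

```python
x = 0.01
exact = 1 / (1 - x)    # 1/(1 - x) = 1 + x + x^2 + ...
approx = 1 + x         # first-order "reciprocal conjugate"
print(exact - approx)  # ~ x**2 = 1e-4
```

This is the same "one bit or so" of acceptable error as in the earlier 50-bit/100-bit example: the two forms agree to first order and diverge only in the higher terms.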

> I guess I'll find out who Kripke is when I read the above papers?

        Saul A. Kripke is a philosopher who worked on modal logic... See
Naming and Necessity, Harvard Univ Pr; ISBN: 0674598466, March 1982.



This archive was generated by hypermail 2.0b3 on Sat Oct 16 1999 - 00:36:38 JST