[time 647] Re: [time 644] Worlds as Lexicon -> Possible Worlds of Kripke ...

Tue, 31 Aug 1999 06:47:31 EDT

In a message dated 8/30/99 9:08:43 AM Eastern Daylight Time,
stephenk1@home.com writes:

> Dear Bill,
> I have been thinking hard about your ideas! It occurred to me that you
> may be really on to something. I am sorry to hear of Paul's plight. :-(
> I do not earn a living from my philosophizing so I can risk being a
> heretic, but we must remember Bruno!

If given the chance, would you not recant? I suspect that if Bruno had
recanted, he would also have relinquished his right to use printing
presses. Perhaps, if you would give up your internet connection, all
would be forgiven, and you would not have to be incinerated. ;-(

> WDEshleman@aol.com wrote:
> >
> > In a message dated 8/25/99 8:01:08 PM Eastern Daylight Time,
> > stephenk1@home.com writes:
> >
> > > Are "close copies" a minority or majority? Well, their measure-
> > > theoretic properties of real-valued labeled worlds is "measure one";
> > > in other words, it is certain that if we picked a pair of worlds out of an
> > > urn, they would look almost alike! The article by F.W. Meyerstein
> > > (http://www.cs.auckland.ac.nz/CDMTCS//researchreports/089walter.pdf) gives
> > > a good discussion of this!
> > >
> > Stephen,
> > I have read the paper now; I sure like short papers. At first, one may
> > think that infinite sums are Lexicons; then it becomes apparent that
> > infinite sums of infinite sums are really the Lexicons; that is, if each
> > inner sum represents the same information as the outer sum.
> I like them little papers too, short and to the point...
> We could discover if this were true if we could compare the two sums to
> each other, but simple isomorphism would not work since it is blind to
> ordering. Umm, this looks like it requires cohomorphism testing, which
> tests the topology to see if it is simply or multiply connected. The
> former shows that the space of points (= sums) is commutative and has
> "no holes" and the latter is non-commutative, as if the space has knots
> or wormholes in it. (Oh, BTW, Pratt has something about that in his
> papers!)

Let's keep it simple, Stephen... The way to compare two sums (of positive
terms) is to normalize (average) each sum by dividing each term by the
appropriate sum. If the averaged terms are equal, or if a permutation of the
averaged terms is equal, then the two sums are equivalent. E.g., 1 + 2 + 3 = 6
is equivalent to 30 + 20 + 10 = 60, because 1/6 + 2/6 + 3/6 = 1 for both. It
is their distributions that are equivalent. And the entropy (S) of each is
S = -(1/6)*log(1/6) - (2/6)*log(2/6) - (3/6)*log(3/6). We may also say that
the two sums are equivalent because their entropies are equal.
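A minimal sketch of this normalization test in Python (the function names are
my own, not anything standard):

```python
import math

def distribution(terms):
    """Normalize positive terms by their sum; sorted so ordering is ignored."""
    total = sum(terms)
    return sorted(t / total for t in terms)

def entropy(terms):
    """Shannon entropy of the normalized distribution."""
    return -sum(p * math.log(p) for p in distribution(terms))

a = [1, 2, 3]      # sums to 6
b = [30, 20, 10]   # sums to 60

print(distribution(a) == distribution(b))    # True: both are {1/6, 2/6, 3/6}
print(abs(entropy(a) - entropy(b)) < 1e-12)  # True: equal entropies
```

Sorting the averaged terms makes the comparison permutation-blind, which is
exactly the "permutation of averaged terms" rule above.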

Now, in the case of products (of positive numbers), equivalence is suggested
by identical distributions of their normalized (averaged) logarithms. E.g.,
16 * 25 * 36 = 14400 is equivalent to 4 * 5 * 6 = 120. Note that I am not
referring to bit strings possessing the same message; it is more analogous
to the realization that a message of, say, 100 bits may be disguised as
a 50-bit code that needs squaring to reveal the 100-bit message. And
herein lies the source of "acceptable" differences (errors) between the
100-bit message and the square of the 50-bit "equivalent" code... we must
accept cases where the square of the 50-bit code is within one bit
or so of error. Like the worlds that differ by one photon as revealed by slit
interferometers. Kind of like compression codes that don't unzip exactly.
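The product version of the test can be sketched the same way, comparing
normalized logarithms (again, illustrative code of my own):

```python
import math

def log_distribution(factors):
    """Normalize the logs of positive factors by the log of the product."""
    logs = [math.log(f) for f in factors]
    total = sum(logs)              # log of the whole product
    return sorted(x / total for x in logs)

a = [16, 25, 36]   # product 14400 = 120**2
b = [4, 5, 6]      # product 120

# Each log in a is exactly twice the matching log in b, so the
# normalized distributions agree (up to floating-point rounding).
close = all(abs(x - y) < 1e-12
            for x, y in zip(log_distribution(a), log_distribution(b)))
print(close)
```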

> Since I am trying to advance the idea that our individual realities
> (read space-times) are constructions, it follows that the topology of
> such follows from some aspect of the process. More on this later. For a
> preview read:
> http://math.ucr.edu/home/baez/week40.html
> http://boole.stanford.edu/pub/ph94.ps.gz
> http://boole.stanford.edu/pub/tppp.ps.gz
> > This is where infinite products surface (sums of sums are products).
> > I had not thought of a product as a Lexicon, but it makes sense if
> > a Lexicon must have near copies of itself coded into its parts.
> > The infinite products of my study have parts (factors) that differ
> > from the total product only by very small amounts of information.
Bits, photons, what's the difference?
> Have you thought about how the concept of neighborhood could be defined
> using infinite products? Since I see everything visually, this
> immediately popped up. :-) I noticed that the notion that "infinite
> products of my study have parts (factors) that differ from the total
> product only by very small amounts of information" would imply that
> there is some variational principle derivable from the way that the
> products differ in their information content. (I am seeing your ideas in
> light of Pratt's!) Can we think of a metric (or ultrametric!) to measure
> how far apart a pair of factors are?

Ideally, the groups of factors are exactly equivalent or possess exactly
the same distributions of logarithms, but finite evaluations by means of
powers may have one-bit errors. It is not so much how far apart the
factors are; it is how far apart the distributions are.
> > I am interested to read some of your ideas on this Lexicon topic.
> > To understand my approach to infinite products, all you need to
> > know is some algebra, and simple rules of logarithms. What may
> > appear as complex at first, is really quite simple in concept
> > accessible (probably) to philosophers and high school kids too. :-)
> I am thinking of how the lexicons relate to states of Mind qua
> information structures and how causality is defined in a "branching
> time" or "lazy binding" way. This is important as it allows for
> observers to "change their mind", which I will argue is necessary and
> sufficient to allow free will.

I think of consciousness much the same as I do QM. Both mind and QM
seem to have this mysterious ability to fill in gaps of knowledge of things
with multiple "what ifs?" to arrive at opinions (truth?), or, in the case of
QM, to reconstitute almost perfectly even though parts are lost or missing.

> Pratt says in ratmech.ps pg.9 paragraph 8: "...we find that two events
> or two states ... communicate with each other by interrogating all
> entities [read worlds!] of the opposite type. Thus event a deduces that
> it precedes event b not by broaching the matter with b directly, but
> instead by consulting the record of every state to see if there is any
> state volunteering a counterexample [a contrafactual of "has this been
> experienced before"?]. When none is found, the precedence is
> established. Conversely, when a Chu space is in state x and desires to
> pass to state y, it inquires as to whether this would undo any event
> that has already occurred. If not then the transition is allowed."

Here are some of my thoughts about change pulled from my abstract.

This proposal begins with the argument that the linear operators (x) that
dictate the evolution of the state of an object are themselves measured in
present states (NOW), not past states (PAST). That is, if NOW = PAST + x *
PAST, then NOW/PAST = 1 + x, a trivial result allowing all values of x. On
the other hand, if it is realized that it is more logical and consistent that
NOW = PAST + x * NOW, then NOW/PAST = 1/(1 - x), a most interesting result
that prevents x from achieving unity.
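A quick numerical check (my own illustration, not part of the abstract):
unrolling NOW = PAST + x * NOW by repeated substitution gives the geometric
series 1 + x + x^2 + ..., which converges to 1/(1 - x) only for |x| < 1, one
way to see why x cannot reach unity.

```python
def now_over_past(x, terms=200):
    """Partial sum of 1 + x + x^2 + ..., the unrolled form of NOW = PAST + x*NOW."""
    total, power = 0.0, 1.0
    for _ in range(terms):
        total += power
        power *= x
    return total

x = 0.5
# The partial sums converge to 1/(1 - x) = 2.0 here
print(abs(now_over_past(x) - 1.0 / (1.0 - x)) < 1e-12)  # True
```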

The above result is supported by the overwhelming evidence that the evolution
of squared states according to Special Relativity requires NOW^2 = PAST^2 +
(v^2/c^2) * NOW^2. That is, NOW/PAST = [ 1/(1 - v^2/c^2) ]^(1/2), the Lorentz
factor.

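Solving the squared-state relation for NOW/PAST does reproduce the standard
Lorentz factor; a small sanity check (illustrative code, with beta standing
for v/c):

```python
import math

def ratio_from_squares(beta):
    """Solve NOW^2 = PAST^2 + beta^2 * NOW^2 for NOW/PAST."""
    # NOW^2 * (1 - beta^2) = PAST^2  =>  NOW/PAST = sqrt(1 / (1 - beta^2))
    return math.sqrt(1.0 / (1.0 - beta**2))

def lorentz_gamma(beta):
    """Textbook Lorentz factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - beta**2)

beta = 0.6   # v/c; gamma should be 1/sqrt(1 - 0.36) = 1.25
print(abs(ratio_from_squares(beta) - lorentz_gamma(beta)) < 1e-12)  # True
```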
> Note the phrase "communicate with each other by interrogating all
> entities of the opposite type" I added the "[other worlds]" since I am
> trying to point out how causality is an active process and it is
> necessary to consider all possible variations when considering even the
> simple process of motion! In your thinking, Bill, what do you see as the
> role or purpose of the infinite products?
> You make a good point about "keep it simple!" :-)

For now, I feel that "opposite types" are all possible reciprocal conjugates;
i.e., if we see (1 + x), the result is actually 1/(1 - x) ~ (1 + x) for small x.
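The approximation is easy to check numerically; the gap between 1/(1 - x) and
1 + x is exactly x^2/(1 - x), so it shrinks quadratically as x gets small
(illustrative code):

```python
for x in (0.1, 0.01, 0.001):
    exact = 1.0 / (1.0 - x)
    approx = 1.0 + x
    # The gap exact - approx equals x**2 / (1 - x), roughly x**2 for small x.
    print(x, exact - approx)
```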

> Onward,
> Stephen

I guess I'll find out who Kripke is when I read the above papers?



This archive was generated by hypermail 2.0b3 on Sat Oct 16 1999 - 00:36:31 JST