To: huw@mail.usyd.edu.au 10/15/2002
From: "Paul J. Werbos" <pwerbos@nsf.gov>
Subject: follow-on
Cc: luda.werbos@verizon.net,Huw.Price@ed.ac.uk
Bcc:
X-Eudora-Signature: <<No Default>>

Hi, Huw!

 

Thanks very much for the positive reply.

 

I have just now emailed Yanhua Shih, to see if he would conditionally be interested as well.

 

Attached are the two papers I mentioned, again:

 

(1) In Int'l J. Bifurcation and Chaos, October 2002, Vol. 12, No. 10, p. 1.

 

(2) A condensed DISCUSSION piece, which I hope would be a starting point for a joint

paper to Nature. Just a starting point -- I would guess you have a better idea

than I do of what kind of outline would be acceptable to them.

 

---------------------------------

 

Now for some further details. Two immediate issues: experimental ways to image the universe in forward time, and how to get the measurement formalisms straight.

(And part of that is trying to re-explain what I wrote in rather condensed language for Yanhua.

In writing to Yanhua and Luda, I take for granted our prior discussions and so on.)

 

---------------------------------------

 

Regarding the forwards-time imaging -- I have discussed the details a lot further with Yanhua, but I should

try to be brief and understandable for openers.

 

Temperature is not, of course, the most cost-effective way to do astronomy. I do remember, as a child, doing ordinary-time temperature-based astronomy... using a magnifying glass to focus an image of the sun onto paper, burning a hole in the paper. But the number of photons required to burn a hole is rather large. The cost of achieving adequate resolution will be a major issue if anyone is to seriously try doing a real experiment.

 

I can think of two plausible ways to try to achieve decent efficiency in a reverse-time imaging system.

 

The ordinary way to get high-efficiency imaging is to couple a complex system of lenses and mirrors to adaptive optics and a photomultiplier detector at the end -- lately they say that avalanche photodiodes are the best, though they are based on principles quite similar to the ancient photomultiplier tubes described in any good encyclopedia.

 

Ordinary lenses should be just as good at concentrating light in reverse time as in forwards time.

Adaptive optics may require some adjustment for reverse time operation, but (1) they are ultimately based on

algorithms I came up with decades ago, and I know those are invertible; (2) adaptive optics might not be

SO essential to initial tests, depending on how visible the images are... IF they exist at all...;

(3) tests in space probably would not require so much from adaptive optics in any case.

 

The real challenge lies in the raw photodetectors. Ordinary photomultipliers are "time forwards light source detectors,"

not symmetric objects. The reason is that they inject free energy to amplify the initial weak signal of an electron being freed.

If you look at the design, it just won't work in reverse time. HOWEVER: a reverse-time photomultiplier

really ought to be possible. If we set it up to make the INITIAL WEAK signal a signal of "photon emission"

(absorption in reverse time) rather than "photon absorption" (in the usual forwards time direction)... then we can still use

ordinary avalanche methods to amplify that weak signal in forwards time, so that we can observe it.

At least, I think we should be able to.

 

How can we generate an electron transition, a weak initial detection, for this case? The obvious way would be to

use a semiconductor surface (like the present photodiodes) but with large numbers of electrons in excited, metastable "dark states." (There has been considerable work in quantum computing to generate materials in such states, in a controllable, regular way.) The reverse-time photon source should be able to stimulate a transition to a lower energy state if focused

on such materials. Can we detect/amplify such transitions? Again, I would hope the work on quantum computing would

say something about such detectors/amplifiers. (There are some specific sources we might fall back on.

IRONICALLY: among the world leaders in such technology are people like Milburn, at University of Queensland, and Kane,

whose lab is a short walk from the house where I used to live in this area...)

 

(By the way, I would have no objection to widening the scope of co-authors as necessary to enhance credibility... if you

think we need more definition at an early stage. Again, you have a better understanding by far than I do of what it would take

to reach Nature.)

 

Another possible approach would be to use the type-II SPDC phenomenon, for which Yanhua Shih is the world's leading practitioner. It is the same core device he used in the Bell's Theorem experiments.

 

Shih worked in the past with Klyshko, a Russian physicist who -- inspired, alas, by DeBeauregard rather than by either of us -- developed a kind of pragmatic backwards-time optics design approach. (Even more unfortunate -- Klyshko is dead now.) Using that approach, he developed something which could be called a "high-accuracy positional entanglement generator (PEG)." In effect, the PEG lets you define two image planes, a "left plane" and a "right plane";

it emits two entangled photons, such that the image on the left plane exactly matches the image on the right plane.

This kind of thing allows quantum imaging and lithography applications which are far beyond the kind of technology

produced by the usual limited types of entanglement.

 

It should be possible to set up a PEG "in front of" a telescope, such that the left channel (focused through the telescope) has a left image plane set to infinity. On the right image plane, one could locate "photographic film" (an array of high-resolution photomultiplier diodes... whatever is best in ordinary astronomy). Then the

reverse-time astronomical source would tend to "pull photons in its direction", thereby creating entangled

photons on the right image plane.

 

**

As a practical matter, either approach would be subject to all the many glitches and design requirements

which add to the expense of ORDINARY instrumentation -- plus some questions about how to actually test them out while the system is being developed... especially since we do not know for sure whether there really ARE any reverse-time images out there anyway!! (If there are big dark clouds anywhere near

where we see antimatter, however, that gives us one idea for where we might look... in www.google.com,

"antimatter galaxy" pulls up lots of good stuff.)

 

It is interesting to me... that the only really good excuse for NOT seeing a reverse-time region may be

a steady-state theory... so a rational mainstream person, able to appreciate the logic of the physics, yet firmly

committed to the Big Bang (like Hawking?)... really should want this...

 

==================================================

 

And then... more on the logic.

 

I haven't studied Aharonov's original paper, but Unruh's summary in that book where you have a chapter seems plain enough.

 

The problem with Aharonov's development is NOT that he works purely with wave functions.

 

I myself started out working with PDEs (and still mostly do)... but, as the 2-pager suggests, I have learned to work more with wave functions. They do have some tactical advantages. It is hard to persuade people to

agree to "seven impossible things before breakfast." People want to learn things one step at a time.

 

In fact:

 

A big question for a joint paper to Nature is... "How much should we give away about the simple PDE picture at this time? How much do we confuse them by shifting between two pictures? How heavy a price do we pay to 'keep it simple'?"

 

(By the way, last I checked Y. Aharonov was still alive. I have even had a few small encounters with his niece

Dorit at Berkeley, who works in quantum computing algorithms, and is generally quite pleasant. But I have

never discussed these issues with him or her.)

 

Assuming Unruh got it right... Aharonov simply gives us a way to calculate probabilities of things we do not observe,

WHEN WE ASSUME a string of measurements, EACH governed by the traditional time-forwards Copenhagen

measurement formalism.

 

His way of calculating probabilities is correct... and it allows one to calculate the implications of two-time boundary conditions, FOR a system governed by the time-forwards Copenhagen dynamics. But the underlying stochastic dynamics are still time-forwards. His calculations actually LOOK a lot like Markov Random Field calculations... for the special case where the local joint distributions are basically just time-forwards conditional probabilities.
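
In case it helps to have the rule itself on the table: assuming Unruh's summary is the standard Aharonov-Bergmann-Lebowitz (ABL) formula, here is a little numerical toy of my own (in Python; the example and names are mine, not anything from the attached papers) for the probability of an intermediate, unobserved result given both the pre- and post-selected states:

    # A minimal sketch of the ABL rule -- my own toy example, not from the papers.
    import numpy as np

    def abl_probs(psi_i, psi_f, basis):
        """P(k) = |<psi_f|a_k><a_k|psi_i>|^2, normalized over k."""
        amps = np.array([np.vdot(psi_f, a) * np.vdot(a, psi_i) for a in basis])
        w = np.abs(amps) ** 2
        return w / w.sum()

    # Toy case: spin-1/2 pre-selected along +z, post-selected along +x,
    # with an intermediate measurement in the z basis.
    up_z   = np.array([1.0, 0.0], dtype=complex)
    down_z = np.array([0.0, 1.0], dtype=complex)
    plus_x = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

    print(abl_probs(psi_i=up_z, psi_f=plus_x, basis=[up_z, down_z]))
    # prints [1. 0.] -- with this pre/post-selection the intermediate z result is forced

That is the whole content of the formalism as I read Unruh's summary: each intermediate outcome is weighted by the product of its amplitude from the pre-selected state and its amplitude into the post-selected state, and then everything is renormalized.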

 

But true symmetrization requires that p(in, out) = p(out*, in*) (where I put in "*" to represent CPT invariance, and to remind us to be careful about phase effects in time reversal), for objects like polarizers at room temperature used on optical-frequency light -- that is, for objects where free energy is not being injected.

 

This would require two changes in the Aharonov approach: (1) the approach needs to be expressed more explicitly in the more general MRF format, which ALLOWS for a more general CHOICE of measurement joint probabilities; (2) actual models of specific measurement objects need to be devised which do more justice to the situation.
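
To make point (1) a little more concrete, here is a toy sketch of my own (again Python, again nothing from the attached papers) of what the MRF format amounts to for a simple chain: each measurement node contributes a local factor, the two ends are clamped as the two-time boundary condition, and the interior statistics come from the normalized product of the factors. When the factors are chosen to be ordinary forward conditionals p(out|in), this reduces to the Aharonov-style picture; letting them be more general joints p(in,out) is exactly the extra freedom I am arguing for:

    # My own toy illustration of the "MRF format": a chain of local factors
    # phi_t(x_t, x_{t+1}) with both ends clamped (two-time boundary conditions).
    import itertools
    import numpy as np

    def clamped_marginals(factors, x_first, x_last, n_states):
        """Marginals of the interior variables, by brute-force enumeration."""
        T = len(factors)              # chain has T+1 variables x_0 .. x_T
        interior = T - 1
        marg = np.zeros((interior, n_states))
        Z = 0.0
        for xs in itertools.product(range(n_states), repeat=interior):
            path = (x_first,) + xs + (x_last,)
            w = np.prod([factors[t][path[t], path[t + 1]] for t in range(T)])
            Z += w
            for i, x in enumerate(xs):
                marg[i, x] += w
        return marg / Z

    # Special case: each factor is a time-forwards conditional p(out | in).
    # Replacing these with symmetric joints p(in, out) is the generalization.
    p_forward = np.array([[0.9, 0.1],
                          [0.2, 0.8]])
    print(clamped_marginals([p_forward, p_forward], x_first=0, x_last=1, n_states=2))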

 

---

 

This modified approach can be applied in two ways, in principle. The most general approach is in fact to work

with wave functions (or density matrices). The most modern approach, they tell me, sounds a lot like the oldest

approach -- a "quantum trajectory simulation approach" (known very well to Milburn!!!) in which wave functions move smoothly, until they hit a "measurement node," at which time there is a probability distribution for what

wave function they jump to. Like Aharonov's picture. We need only replace the "jump" with

a local joint p(in,out).
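
For what it is worth, the standard version of that picture is easy to write down; here is a bare-bones sketch of my own (textbook quantum trajectories, NOT the modified formalism) in which the state evolves unitarily between nodes and then jumps with Born-rule probabilities. The proposal above is simply to replace the jump distribution at each node with a local joint p(in,out):

    # My own minimal sketch of "smooth evolution plus a jump at a measurement node."
    import numpy as np

    rng = np.random.default_rng(0)

    def evolve(psi, H, dt):
        """Smooth (unitary) evolution between measurement nodes."""
        eigval, eigvec = np.linalg.eigh(H)
        U = eigvec @ np.diag(np.exp(-1j * eigval * dt)) @ eigvec.conj().T
        return U @ psi

    def measurement_node(psi, basis):
        """Projective jump: pick a basis state with Born-rule probability."""
        probs = np.array([abs(np.vdot(a, psi)) ** 2 for a in basis])
        k = rng.choice(len(basis), p=probs / probs.sum())
        return basis[k], k

    # Toy run: a spin-1/2 precessing under H = sigma_x, measured in the z basis.
    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
    basis_z = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
    psi = basis_z[0]
    for step in range(5):
        psi = evolve(psi, sigma_x, dt=0.3)
        psi, outcome = measurement_node(psi, basis_z)
        print("node", step, "outcome", outcome)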

 

But in some cases... a complex situation resolves itself into multiple particles, which can be treated either as independent or

as completely entangled, in a way which allows a simpler analysis. (A classical-looking definite picture, in some cases.)

 

With Bell's Theorem...

 

to reconstruct the experiment in realistic terms, it is NECESSARY but NOT SUFFICIENT to assume

backwards-time effects, which lead to MRF statistics.

 

In fact... I have found only one way to do the job, so far, and many arguments that other choices may

be near-impossible. (But could minor variations work? "Smoother," "nicer" probability distributions tend

to lead to convolution integrals which don't work...)

 

One must assume that the INCOMING probability dependence of a polarizer is of the form a0*delta(theta - thetaP) + a1*cos**2(theta - thetaP), where a0 >>> a1, thetaP is the angle of the polarizer, and theta is the angle of the incoming light. (One can symmetrize over time by

inserting theta-out in a couple of possible ways...).

 

If one assumes that form on both channels, and one assumes that the PEG has a similar

angular-deviation cos**2 term... one CAN reproduce the cos**2 functional result of the Bell

experiments. (The goal is to reproduce the entire joint-counting-rate distribution, and not merely one

weird inequality.) GHZ also works out, assuming the cos**2 kind of GHZ distribution they describe

in reporting the actual experiment. We cannot yet measure a1 for the polarizers and the PEG,

but the result does not depend on that detail, so long as one of the a1 > 0.

 

In this picture, the light between the PEG and the polarizers sometimes matches the angle of the left polarizer, sometimes matches

the right polarizer, and sometimes is different between the channels (each matching its own polarizer) -- with

the relative frequency of this unmeasured effect depending on the "a1" constants.
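
If it helps to see that the functional form comes out right, here is a rough Monte Carlo of that picture as I have just described it -- my own simplification, in Python: I keep only the dominant cases, and I treat the cos**2 term as a relative passage probability, absorbing the a1 scale into the overall rate:

    # A rough Monte Carlo of the delta-plus-cos**2 picture -- my own toy reading.
    import numpy as np

    rng = np.random.default_rng(1)

    def coincidence_rate(delta, eps=0.0, trials=200_000):
        """Relative joint counting rate vs. polarizer angle difference `delta`.

        With probability 1 - eps the shared angle matches one polarizer: that
        channel passes via the dominant delta term, and the other channel passes
        with relative probability cos**2(delta).  With probability eps each
        channel matches its own polarizer and both pass -- a flat background
        whose size is set by the a1 constants.
        """
        shared = rng.random(trials) >= eps
        pass_other = rng.random(trials) < np.cos(delta) ** 2
        return np.mean(np.where(shared, pass_other, True))

    for deg in (0, 22.5, 45, 67.5, 90):
        d = np.radians(deg)
        print(f"{deg:5.1f} deg: simulated {coincidence_rate(d):.3f}   cos**2 {np.cos(d)**2:.3f}")

With eps = 0 the simulated rate tracks cos**2(delta); a nonzero eps only adds the flat background, so the shape of the joint-counting curve survives.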

 

A problem, however, lies in how to explain some simpler, naive things... like... if we insert a polarizer

between a flashlight and a counter, why does the count decrease? Why doesn't the polarizer "elicit"

photons of its distribution, if the flashlight is to be represented as a stochastic object? Do we

need to do something awful like trying to use a density matrix and distinguish two types of probability

distribution again? Or is there another option? This question is so basic that I hesitated even to send

this email... but after all... "collaboration" means sharing questions as well as answers, even at an

early stage...

 

So I will be thinking more about this...

 

-----

 

By the way, the "delta plus cos**2" does have SOME nice features. A bit like Rayleigh scattering.

The first-order scattering of light through the atmosphere is perpendicular to the usual direction of

propagation... it really acts like a different term, versus straight-through propagation. (And thus the

sky is blue.)

 

=====================

 

We have also had some thoughts about other experiments we could do with the PEG, but I would want to

check with Yanhua before sharing a lot of that. There are some possibilities which seem really

wild at first... but probably don't work. But there may yet be FTL and backwards-time tests which could be done WITHOUT assuming astronomical sources... though they would require much more intense analysis of the new mathematical framework. (More than what I can easily get away with here at NSF... in fact, I may already be in trouble for not doing several things I was supposed to do this morning...)

 

===

 

Best regards,

 

    Paul W.

 

P.S. I do not have electronic copies of the 1973, 1988, 1993 or 1994 papers, so far as I know. Certainly not 1973.

At xxx.lanl.gov, there are 1998, 2000, and 2002. Those are the core papers really dedicated to these issues.

 

In 1999, Kunio Yasue led the big international conference on Consciousness, hosted by the United Nations University (UNU) in Tokyo. He invited me to submit a chapter to the resulting book, No Matter Never Mind, which came out from Benjamin Books just this year (March 2002, I believe).

I do have **THAT** paper/talk in electronic form. I was very disappointed that Chalmers and half a dozen

other people who also gave excellent talks did not have chapters there.

 

When I first heard Chalmers' talk, I was a bit disappointed -- this was all obvious and old stuff, I felt.

And also, what he called "hard" is no longer hard, in view of some mathematical work we have done in the interim,

which HAS been disseminated much more than the backwards-time stuff (albeit still not as much as we need). But then, when I heard what OTHER people had to say -- there is a lot of need for greater dissemination even

of the obvious! Also, what looks obvious after decades of hard work is not obvious to everyone, not even to everyone

of equal intelligence.

 

Later that year, I did also have a chance to visit a smaller Arizona conference on consciousness, and that was essentially the last one I got to. I might possibly get to the next one. My views on that subject differ from those of

BOTH of the most vociferous camps, so no one has been pushing me into the limelight... but I like to believe my

basic story is more complete.

 

Actually, a few years ago Oxford even offered to publish a trade paperback jointly authored by me and someone who writes better, as a kind of "sequel to the Penrose books." I feel very, very guilty that I never found time to pursue this in a timely way. (By now, they have presumably given up.) But then again -- if I hadn't taken the time to work out the math in the paper attached, who would have? Conflicting responsibilities here...