AN INTRODUCTION TO SILOSOPHY

by Lyle Lofgren
March 2010 - February 2012

[Note: The footnotes are references in case you want to learn more about some specific aspect of the discussion. Most of the links are to Wikipedia articles, which are at least a convenient place to start learning about something.]


CONTENTS:

PART I: THE NEED TO MATTER
PART II: THE OBSERVER
PART III: TANGLED UP IN QUANTA
PART IV: WEAKNESSES IN THE SILOSOPHY ARGUMENTS
APPENDIX: A BRIEF HISTORY OF QUANTUM THEORY


I've recently been reading too many books by authors I call Silosophers. The word could be interpreted to mean "science-inspired philosophers," but I was thinking more in terms of silliness. Silosophers do not study the philosophy of science, which covers the ideals, methods and limitations of the systematic search for limited objective truths. Instead, Silosophers use some aspects of scientific theories, usually Quantum Mechanics, to make grandiose claims about the nature of reality.


PART I: THE NEED TO MATTER

A major theme in Rebecca Goldstein's novel, The Mind-Body Problem1, is an urgent Need to Matter. It's pretty easy to hypothesize the origin as an evolutionary advantage: the need to matter to others close to you (e.g., family, neighbors, and co-workers) leads to co-operation and enhanced survivability for all. Reciprocation is a major factor in personal connection -- all those songs and novels testify to how hard it is to want to relate to someone who doesn't care for you. Depression, whether clinical or situational (unemployment, for instance), is mainly the feeling that you don't matter to anyone.

It's a lot harder to imagine why most of us also feel the need to matter to more abstract entities: society, the earth, the Universe. There's little evidence that any of them care about us as individuals.

And then there's God. The major western religions (Judaism, Christianity, Islam) revolve around the idea that we matter to God. The first example that comes to mind is Jahveh selecting humans as his special creatures and, later, the Jews as his chosen people. Even in the Book of Job, where God behaves like a child pulling wings off a fly to see what happens, God at least cares enough about Job to torture him. Later, Zoroastrianism (circa 500 BCE) advanced the idea that good and evil are exactly evenly matched, and the actions of each individual human can tilt the balance in either direction. Some theologians claim that God was the center of the universe back then, but that's obviously not true. We live on earth, which is a location that can be the center of something. God, at the time, lived in heaven, above the sky. But the sky is everywhere above us, not a specific place. I'm told that my Swedish ancestors believed that the stars were holes made by God's walking-stick when he went for strolls. The lights are glimpses of heaven. So God lived in a place that surrounded us, but he was not at the center. We were. The idea that we have strong leverage for good or evil meant that we were crucially important to the universe. We mattered to all of existence, not just to our friends.

The first glimpse that humans might not be the center came in the 16th century, when Copernicus published his model placing the sun rather than the earth at the center of the solar system. Galileo showed that the Copernican system was correct, an action that condemned him to perdition for some 360 years until he was resurrected by the Pope in 1992. In the 19th century, Darwin alleged that humans aren't even next to angels in the Great Chain of Being — we're merely anthropoids, related to chimpanzees and gorillas, although we're a much more aggressively invasive species. If you equate mattering with destruction, then we certainly matter. But we want more, and Darwin's ideas are even more unsettling than those of Copernicus. The response was either denial (Literalistic Fundamentalism) or despair (Existential Nihilism). Neither response is appropriate for natives of a mediocre planet circling an average star in the suburbs of a nondescript galaxy.

God seemed to have lost interest in us, but we made great progress on His command to gain dominion over all the earth. We did this with Science, by trying to figure out how the universe worked, with the aim of controlling our destiny. And we were successful. Too successful. Science displaced God, but it displaced us right along with Him. Humanism, which could have taken His place, proved to be powerless against the human failings of the Seven Deadly Sins, particularly Greed. We built a global society around the exploitation of non-renewable energy sources. Assets and Power are the measure of how much a person matters, and the popular interpretation of evolution theory stresses competition rather than co-operation. If there are no atheists in foxholes, there are also no existential nihilists in executive suites. The existentialists were depressed because all they had were abstract ideas, and, in modern society, abstract ideas do not matter, unless they encourage greed.

You'd think that the intellectuals could hold despair at bay with a counter-theory: everything and everyone matters, as long as you define "mattering" in the most general and abstract formulation. Then there is no Mattering hierarchy: we all matter equally. Einstein's Theory of Relativity bolstered this position by showing that there is no privileged location or direction in the universe -- all observers are equally valid, even though their observations disagree with each other.

That theory should be comforting, but it isn't, for a couple of reasons. First, although it takes away any reason to envy people who, in society's view, matter more than you, Envy is as tenacious a Deadly Sin as Greed. Second, each of us has an inborn need to be the center of the universe. Freud embodied this need as the Id, the inner child that's never grown up. W.H. Auden (1907 - 1973) described it in his poem, September 1, 1939:

...the error bred in the bone
Of each woman and each man
Craves what it cannot have:
Not universal love,
But to be loved alone.

There's a rule in chemistry, called Le Chatelier's Principle, that can be generalized to most systems, in the form:

Any change prompts an opposing reaction in the responding system, resulting in a new equilibrium.

The removal of humans from the center of the universe prompted an opposing reaction from the Id that's still going on. There are a lot of people with an urgent cosmic (as opposed to personal) need to matter. You can accept Jesus Christ as your personal God and Savior, but if you're going to talk to someone else about it, you're literally preaching to the choir. Several recent authors, Amit Goswami2, Mary Conrow Coelho3, and Robert Lanza4, Silosophers all, take a different tack: return humans to the center of the universe by invoking Quantum Theory.

Typically, Silosophers stress two confusing aspects of Quantum Theory. One is the Observer Problem: the claim that reality depends on the existence of a conscious observer.

The second characteristic they stress is Quantum Entanglement, which implies to them that we are interconnected with the universe, and, since the existence of reality depends on us and our observations, the existence of the universe depends on us. A heavy burden, indeed.


REFERENCES FOR PART I:

1. Goldstein, Rebecca. The Mind-Body Problem: A Novel. Random House 1983, Penguin Books 1993.
2. Goswami, Amit. The Self-Aware Universe: How Consciousness Creates the Material World. Tarcher-Putnam, 1993.
3. Coelho, Mary Conrow. Awakening Universe, Emerging Personhood: The Power of Contemplation in an Evolving Universe. Wyndham Hall, 2002.
4. Lanza, Robert. Biocentrism: How Life and Consciousness Are the Keys to Understanding the True Nature of the Universe. Benbella Books, 2009.


PART II: THE OBSERVER

We suffer from the delusion that the entire universe is held in order by the categories of human thought, fearing that if we do not hold to them with the utmost tenacity, everything will vanish into chaos. -- Alan Watts, The Wisdom of Insecurity.

There's no question that consciousness is related to the personal reality that each of us experiences. But science is concerned with trying to describe a more general reality, objective rather than subjective. When we try to describe phenomena at the extreme microscopic level, the difference becomes more problematic.

The three Silosophers mentioned in Part I (Goswami, Coelho and Lanza) all agree that quantum theory requires that no phenomenon can exist without a conscious observer. They draw heavily on the parable of Schrödinger's Cat: a cat is enclosed in a box, along with a radioactive source, a radiation detector, and a source of poison gas. If the radiation source, which emits radioactive particles at random, triggers the detector, it opens a gas valve that kills the cat. Per the Silosophers, the cat is both alive and dead until the observer opens the box and looks inside, at which time it becomes either alive or dead. They draw conclusions about the grand meaning, using ponderous phrases such as, "Reality consists of a superposition of all possible conditions until the observer opens the box and collapses the wave function." Thus, the observer actually creates reality by observing. The indispensable conscious observer, of course, turns out to be human.

Following the same logic: if you draw a card from a deck but don't look at it, you only know that the probability that it's (say) the queen of spades is 1/52. If you then look at it, you know what card it is. If you now state that:
    1. before you looked at it, the card consisted of 52 superimposed probability waves;
    2. after you looked, the wave function collapsed to produce the actual card; and
    3. the face of the card didn't exist until you turned it over,
you're a Silosopher.
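
To make the card analogy concrete, here is a minimal simulation sketch in Python (my illustration; the Silosophers propose no such program). It shows that looking at the card only updates the observer's knowledge: the card's identity is fixed the moment it is drawn, and the long-run frequency matches the prior probability of 1/52 whether or not anyone imagines a wave function collapsing.

    import random

    # Build a standard 52-card deck.
    ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
    suits = ["spades", "hearts", "diamonds", "clubs"]
    deck = [(rank, suit) for suit in suits for rank in ranks]

    trials = 100_000
    hits = 0
    for _ in range(trials):
        card = random.choice(deck)   # the card is settled here, looked at or not
        if card == ("Q", "spades"):  # "looking" merely reveals what it already is
            hits += 1

    # Prints a frequency near 1/52, about 0.019 -- no collapse required.
    print(hits / trials)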

Such an observer is not necessary for Relativity. Einstein's observer is imaginary, a convenience for understanding his Thought Experiments. It (I'm depersonifying the observer in order to avoid English's he/she gender problem) exists in a space-time location where no real person has ever been. It is passive, and does not need consciousness: all it must do is make some measurements and communicate them to another, perhaps also inanimate, observer. A properly-programmed computer could do all the calculations, examine the result, and perform the same action as Einstein's observer, namely answer which event happened first. It takes a conscious being to assign meaning to those conclusions, but the observer itself is passive. And the cosmos exists whether or not there are any observers to marvel at, for example, the peculiarities of the interaction of gravity with light.

But if I believe the Silosophers when they say that the conscious observer is at the very center of reality, it's important to carefully define the characteristics of a proper one as distinguished from a false observer. So I went looking for a genuine, tangible one.

[Photo: Aunt Minnie]

What about my Aunt Minnie Londquist? Aunt Minnie (1889-1987) lived in California, so I only met her a couple of times. She seemed as normal as anyone else in the family, if a bit more irascible. But she had one outstanding characteristic: she didn't believe the earth was spherical. It's not that she was a member of the Flat Earth Society -- that would have involved sociability. It's just that she was a strong believer in her own capability as an observer. She said, "Look around you. Any fool can see that the earth is mostly flat." Her skepticism about information that she didn't or couldn't observe for herself meant she never fell for goofy investment schemes. But the fact that she never saw the world the way the rest of us did, and certainly had no use for either Relativity or Quantum or any other theory, never affected her ability to live happily (i.e., irascibly) to an old age.

Maybe Aunt Minnie was too skeptical, too unwilling to see past her nose and draw conclusions from broader observations, to be a proper observer. For another candidate, we present Henry Adams (1838-1918), writing in the third person (The Education of Henry Adams), fresh from seeing a dynamo at the Paris Exhibition of 1900:


He knew not in what new direction to turn, and sat at his desk, idly pulling threads out of the tangled skein of science, to see whether or why they aligned themselves. The commonest and oldest toy he knew was the child's magnet, with which he had played since babyhood, the most familiar of puzzles. He covered his desk with magnets, and mapped out their lines of force by compass. Then he read all the books he could find, and tried in vain to make his lines of force agree with theirs. The books confounded him. He could not credit his own understanding. ... He dared not venture into the complexities of chemistry or microbes, so long as this child's toy offered complexities that befogged his mind beyond X-rays, and turned the atom into an endless variety of pumps endlessly pumping an endless variety of ethers.

Henry Adams had a lot less self-confidence than Aunt Minnie. If his magnetic field maps didn't agree with the books, it must be because he didn't understand the principles involved. A proper observer needs enough confidence to trust that its observations are correct.


Well, for confidence, we can nominate Robert A. Millikan (1868-1953), who first measured the charge of an electron. It's one of the fundamental physical constants, since its value can't be derived from theory. Millikan won the 1923 Nobel Physics Prize for this measurement. The technique involved observing tiny oil drops falling in an electric field. I can testify, from repeating this experiment in a college physics lab, that it's difficult, and it definitely involves observation in the form of squinting for hours through a microscope. My lab partner and I got eyestrain headaches, but won no prizes. We had to discard a lot of bad observations to come up with a credible answer. But Millikan, in his 1913 paper, wrote:

This is not a selected group of drops, but represents all the drops experimented upon during 60 consecutive days.

That was not true. He didn't fabricate data (that's a different kind of observer), but he did discard much of it (just like we did) in order to come up with an accuracy estimate that bolstered his claim. His result was within about 0.6% of the currently-accepted number (versus the 0.2% accuracy that he claimed), so that's not too bad. Still, it, like any other scientific measurement, was in error. There are statistical rules for deciding whether an observation is likely to be valid or is so far away from the rest of the data that it can be safely discarded as an outlier, but Millikan used intuition instead. Can an experimenter who selects data without valid statistical reasons, and then denies he did it, be an authentic observer?
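
For the record, one such statistical rule is Chauvenet's criterion: discard a reading if, given the sample size, you would expect fewer than half a reading that far from the mean. Here is a minimal sketch in Python (my illustration; the sample readings are invented, not Millikan's data):

    import math

    def chauvenet_keep(data):
        # Keep only the readings that pass Chauvenet's criterion.
        n = len(data)
        mean = sum(data) / n
        std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
        kept = []
        for x in data:
            z = abs(x - mean) / std
            # Probability of a single reading landing at least this far out:
            p = math.erfc(z / math.sqrt(2))
            # Reject if fewer than half such a reading is expected in n trials.
            if n * p >= 0.5:
                kept.append(x)
        return kept

    # Invented oil-drop-style readings, one of them wild:
    readings = [1.59, 1.60, 1.61, 1.58, 1.62, 1.60, 2.10]
    print(chauvenet_keep(readings))  # the 2.10 is rejected; the rest survive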

Certainly, self-confidence can lead to self-deception, as evidenced by the recent arguments about the existence or non-existence of Cold Fusion. The scientific community does not accept an experimental result unless it's been confirmed by independent observers using different measurement equipment. In an earlier time, Galileo was able to establish that acceleration due to gravity was independent of mass by conducting experiments with balls rolling down ramps and also (perhaps) by the public demonstration of dropping two balls from the leaning tower at Pisa. Those two experiments would meet the different equipment criterion, but would fail the independent observer one. But if we don't know truth unless it's verified by at least two observers, we should be talking about observers rather than observer. And, in fact, even the simplest quantum experiment never involves just one observer, even at the same laboratory. Some published technical papers describing particle accelerator experiments list dozens of authors.

So evidently a proper observer must not only be conscious, but also highly intelligent and well educated, far beyond Hamlet's ability to tell a hawk from a handsaw. But if only scientists, with large grants to study quantum entanglement, can be observers, where does that leave the rest of us? Can we have no part in the existence of the universe? If we can't observe the universe directly, but must read a book about it, doesn't that mean that we don't exist, either?

Actually, the Silosopher's Observer doesn't directly observe. It is trying to sense events that happen too quickly or too slowly, or are too large or too small, to be observed directly. It receives information through its normal five senses, but that information has been pre-filtered by lots of equipment without consciousness. You cannot consciously observe either relativistic or quantum effects without, at the very least, telescopes, X-ray microscopes, super-accurate clocks, and sophisticated detectors with lots of blinking lights. All of these devices convert events we cannot observe into those that we can. Those devices use operations that cannot be directly observed, but are designed by trusted engineers who in turn used trusted design principles that cannot be directly observed, like the magnetic fields from Adams's magnets. We trust those principles because they've worked in the past. But today's a new day. We have faith that the rules didn't change overnight. And all those devices have random measurement errors: an observer can never observe perfectly, implying, if observation creates reality, that an improvement in equipment accuracy causes a similar improvement in the properties of reality.

As a simple example of the observer's difficulties, we have no way of understanding the color of ultra-violet light. Bees can see ultra-violet, so they know what color it is, but we can't. The best we can do is create a false-color (emphasis on false) image that shows us shapes that we'd see if we could sense the light, but it can't reproduce the experience directly. We don't observe ultra-violet, except through pain from sunburn. So how can even the smartest of us be a conscious observer of an event involving ultra-violet light or X-rays?

Further, the only way an observer can make rational sense out of an experiment is to hold as many variables as possible constant, varying only one at a time to see what the response will be. An experiment that tries to vary all the hypothesized causes at once, while measuring multiple effects, is obviously doomed to failure. Yet God didn't promise us that there would be only one cause per effect; one effect per cause; that multiple causes would be independent of each other; or even that causality, as we understand it, exists. There is no way for an observer (other than a superhuman one) to know whether some unhypothesized, and therefore uncontrolled, factor has affected the experiment. The observer has to decide which of a large number of possible causes (or, more accurately, influences) and effects to observe and which to ignore. So the observer is discarding data even before observing it. There have been instances where amateurs such as Henry Adams with his toy magnets discovered effects that the experts had overlooked. Even if the observer is making a direct visual observation and takes all possible precautions to avoid contamination of the results, the information has been pre-filtered through the observer's retina, which passes on only about 1% of the information it receives. It is then further processed by millions of subliminal internal mental processes even before the observer becomes aware of the stimulus.


Scientists use formal principles of logic to make sense of observed phenomena. As Herbert Westren Turnbull (1885 - 1961) put it in an essay5:

Mathematics transfigures the fortuitous concourse of atoms into the tracery of the finger of God.

Kurt Gödel (1906 - 1978) showed, though, that mathematics and logic are not powerful enough to prove their own consistency. So there's always the possibility that the tracery we observe is a reflection of our own longing for orderliness.

When it comes to "creating the material world," it seems to me that a proper creator should be ahead of the curve, i.e., that some sort of active creative process should occur in advance of the event. But observers can only observe events that have already happened, even if it was only a few millionths of a second ago. Scientists know this when they're not trying to look for deep philosophical meaning in the universe. If particle detectors indicate that a shower of charged particles has just arrived from overhead, any physicist knows that the usual cause is highly-energetic cosmic rays from outer space. We've established this from balloon, satellite, and underground measurements. The results are consistent enough that we believe in the existence of cosmic rays. So we say that we observe cosmic rays, but we don't know for sure where they came from. We have credible evidence that they started their journey many light-years away, many years before any conscious observer on earth even imagined they could exist, so how could our observation create them? In other words, most of us believe in some form, however complicated, of cause-and-effect, and, further, that the causes occur earlier in time than the effects.

So far, the characteristics of a proper Conscious Observer include:

   (1) It is no one being, but an idealized average of the observations of many independent observers. But how can an idealized observer, which doesn't actually exist, create something that exists?
   (2) It makes its observations second-hand, after the signals have passed through a number of electronic or mechanical transformations that render the effect observable. It might observe only non-conscious computer calculations based on signals from non-conscious sensors.
   (3) It interprets these observations on the basis of a theory, preferably one that is believed by a significant number of cohorts. This interpretation involves discarding observations that the observer deems to be erroneous or irrelevant.
   (4) It observes only events that have already happened, sometimes months after the data was actually recorded.

This last characteristic means that the only thing a conscious observer can do is try to make sense of things that have already occurred. Even the exact present, not to mention the future, is unobservable. As Vladimir Nabokov (1899 - 1977) put it6:

Perhaps if the future existed, concretely and individually, as something that could be discerned by a better brain, the past would not be so seductive: its demands would be balanced by those of the future. Persons might then straddle the middle stretch of the seesaw when considering this or that object. It might be fun. ...But the future has no such reality...; the future is but a figure of speech, a specter of thought.

The Silosophers' position that nothing can exist without an observer means that the distant past couldn't have existed, either, particularly if you insist on using humans. The issue is related to the old question about whether a tree falling in an unoccupied forest makes a sound. I spent some miserable portion of my youth cutting down trees, and can testify that every tree I've ever helped fell has made a sound when it came down. All other observations ever made agree that a tree makes a noise when it falls. If I come across a fallen tree, I can reasonably assume that it made a sound (which I define as a disturbance of air molecules) when it fell. By observing the stump, I can tell whether it was sawn, chewed by beavers, or blown over. Thus, I can confidently believe in (generic, to be sure, not specific) woodsmen, beavers or storms. I can count the rings and confidently believe that it had been growing in that spot for (say) 53 years, that it had not moved there recently, and had not appeared, already fallen, just as I came upon it. I can confidently say that it made a sound when it fell, even though no conscious observer might have heard it. That belief is a basic part of my consciousness, and if my ancestors hadn't possessed it, also, I don't think they'd have survived to produce me.

Similarly, two interpretations of an obviously ocean-deposited rock layer on a mountaintop are that it's a remnant of Noah's flood, or that Satan put it there to fool mankind. Another is that that particular layer didn't exist until the geologist walked up to it. A fourth possible interpretation is that it was deposited in a sea where no conscious observer, at least of the Silosophy variety, existed.


REFERENCES FOR PART II:

5. Turnbull, Herbert Westren. "The Great Mathematicians." The World of Mathematics, Vol. 1, p. 168. James R. Newman, ed. Simon & Schuster, 1956.
6. Nabokov, Vladimir. Transparent Things. p. 1. McGraw-Hill, 1972.


PART III: TANGLED UP IN QUANTA

There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact. -- Mark Twain, Life on the Mississippi.

I caught only a little bit of the March 7, 2010, Speaking of Faith radio program, with Krista Tippett interviewing Robert Wright about his book, The Evolution of God. I did, though, note an analogy she made during the interview about a hypothetical scientist who painstakingly analyzes all the paint chips of a painting, ruining the painting but learning nothing about it -- in other words, it's the pattern that counts, not the materials.

This, in turn, got me to thinking about the Silosophy books I've read that draw grand conclusions about the nature of reality from the theory of quantum entanglement. The usual example of quantum entanglement is an experiment where two electrons are generated together at one location. Electrons have a characteristic called spin, and, under these conditions, if one electron has spin in a given direction, the other's spin must be in the opposite direction. One electron is then sent off somewhere else, and, some time later, the experimentalist measures the spin of his electron, which then immediately determines the opposite spin for the other electron, an effect that seems to occur faster than Einstein's speed-of-light limit for transmitting information.
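
To see how stark the ideal prediction is, here is a toy bookkeeping model in Python (my sketch, not a description of any real apparatus): when both electrons of a singlet pair are measured along the same axis, each individual outcome is a 50/50 coin flip, yet the two outcomes always come out opposite.

    import random

    def measure_singlet_pair():
        # Both electrons measured along the same axis: each result is
        # individually random, but the pair is always anti-correlated.
        spin_a = random.choice([+1, -1])
        spin_b = -spin_a
        return spin_a, spin_b

    pairs = [measure_singlet_pair() for _ in range(10)]
    print(pairs)
    assert all(a + b == 0 for a, b in pairs)  # every pair sums to zero

The catch, as Bell later showed (see the Appendix), is that bookkeeping this simple only reproduces the quantum predictions when both sides happen to measure along the same axis.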

That's the ideal situation. Now consider three scenarios for an actual experiment:

Scenario 1: Physicist A generates the electrons and sends one down the hall to the other end of the Physics building. But just around the corner, his arch-enemy, Physicist B, intercepts the electron and measures its spin. Later, Physicist A measures the spin of his electron and claims to have achieved action at a distance by determining the spin of the second electron. But during the peer review process for the pivotal paper Physicist A submitted, Physicist B objects that he deserves the credit, because he (Physicist B) determined the spin of both electrons first.

Scenario 2: Physicist A's equipment was designed by a dyslexic, so if an electron really has Spin Up, the equipment reads "Spin Down" and vice-versa. Physicist A makes the measurement, and, being a Conscious Observer, is, per Goswami's The Self-Aware Universe, the one who determines the spin of both electrons. Therefore, the other electron down the hall must then also assume the erroneous spin state, which goes against another old principle, that two wrongs (unlike three lefts) don't make a right.


Scenario 3: Down the hall, after turning the corner, the traveling electron accidentally runs into a positron from Physicist B's experiment. The two annihilate each other to produce a photon that travels until it's absorbed by a phosphorescent glow-in-the-dark inspirational picture of Jesus guiding a flock of sheep. Later, the energy from this photon is re-emitted to enter the retina of a child who's supposed to be saying bedtime prayers. When Physicist A measures his electron, he immediately determines the spin of an electron that no longer exists, and in fact has undergone at least four important transformations since it started its voyage.

These are the sorts of scenarios that occur during real, vs. imagined, experiments, and none of these, with the exception of the picture of Jesus with his flock, have any spiritual significance.

But the Silosophers are not experimental physicists, so they can look at the big picture. Probably the most extravagant view on entanglement is by Lanza (p. 125):

Cosmologists say that everything was in contact, and born together, at the Big Bang. So even employing conventional imagery, it may even make sense that everything is in some sense an entangled relative of every other, and in direct contact with everything else, despite the seeming emptiness between them.

This is a pretty bold statement, considering that any disturbance in the experimental apparatus causes the entanglement to decohere, i.e., the particles become disentangled. The current longevity record is held by two cesium atoms that stayed entangled for 0.015 seconds7.

[Note added Apr. 11, 2012: In breaking news today, Science magazine announced that German researchers have achieved a breakthrough: they managed to entangle atoms that were across the street from each other. For more on that, see "Physicists Create First Long-Distance Quantum Link", by Jim Heirbaut.]

A somewhat more sensible statement of the need for an observer was made by Murray Gell-Mann, who won the 1969 Nobel Prize in Physics for his theoretical work on elementary particles:

...quantum physics can accommodate an entire universe with no reference to an outside observer -- consistent histories decohere from within8.


REFERENCES FOR PART III:

7. Sanders, Laura. "Everyday Entanglement." Science News 178, #11, p. 26. (Nov. 20, 2010)
8. Siegfried, Tom. "Clash of the Quantum Titans." Ibid., p. 19.


PART IV: WEAKNESSES IN THE SILOSOPHY ARGUMENTS

The Silosophers overlooked several not-so-subtle points in their arguments:

1. Silosophers misinterpret a probabilistic result in terms of strict causality. The concept that the observer creates reality, rather than merely observing what's already there, rests on an obsolete definition of causality: that a single cause always yields a single effect. This concept is similar to the mathematical idea of a Necessary and Sufficient Condition. When applied to reality, it leads to a rigidly deterministic interpretation, a linkage of cause and effect that allows no flexibility in the workings of the world. But quantum effects are statistical, so no single observation has any meaning whatever. The theory only predicts the average result of a lot of observations.

Descriptions of atomic and subatomic behavior in terms of probability clash with our intuitive belief in causality, that every effect is caused by something. Probability takes the opposite approach: all occurrences are random, and strict causality is an illusion due to the fact that some results are overwhelmingly more probable than others. This should not have been a surprise, because Ludwig Boltzmann's Statistical Mechanics (1870s) had shown that the supposedly causal laws of thermodynamics could be derived by assuming a lot of tiny particles bouncing around at random. The macroscopic causal laws are examples of what are now called Emergent Phenomena, whereby microscopic actions combine to produce a macroscopic result very different from the actions that make up the phenomenon. Emergent Phenomena are typically derivable in one direction only. For example, thermodynamic laws can be derived from statistical mechanics, but statistical mechanics cannot be derived from the thermodynamic laws.
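
Boltzmann's point can be shown in a few lines of Python (a sketch of the general idea, not of his actual derivation): each micro-event is individually unpredictable, yet the macroscopic average settles down to a value that looks, for all practical purposes, deterministic.

    import random

    # Each "molecule" independently goes left or right at random.
    # No single molecule is predictable, but the fraction going right
    # converges toward 0.5 as the population grows.
    for n in [10, 1_000, 100_000]:
        rightward = sum(random.choice([0, 1]) for _ in range(n))
        print(n, rightward / n)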

But if one can draw no conclusions from a single observation, then, per the Silosophy argument, we'd have to conclude that the observer, at the moment of observation, has not observed anything real, and therefore cannot create reality.

2. Quantum theory does not really require an observer, conscious or otherwise. All that's required to establish the average location of a particle (which is as close to reality as you can get at the microscopic level) is an interaction, such as between the nucleus of an atom and an electron orbiting it. That interpretation solves the problem of how anything existed before there were any conscious observers. Requiring interaction rather than observation also resolves the paradox of Schrödinger's cat:

In principle, the quantum description of the cat comprises both life and death, just as a rock could, in principle, simultaneously occupy different locations. But air molecules and dust particles and light beams bounce off of rocks. After a fraction of a second, only one location for the rock will be consistent with the paths of the deflected particles — a coherent wave describing multiple possibilities has thus "decohered" into just one outcome. Something similar would happen to the cat: Environmental interactions guarantee the cat to be either dead or alive before anybody looks in the box9.

3. As was stated in Part II, we can only observe a small portion of our surroundings. No human can directly sense radio waves, ultraviolet light, or ultrasound. To go further, reality itself is scale-dependent long before you get to the level where quantum effects occur, because the only way we can interpret phenomena is through the experience of our senses. An example: you can easily run your hand through water, but water is very viscous if you're the size of a paramecium; it's been compared with a human swimming through molasses. There is no way to imagine what the world would seem like if you were a dog, much less a microbe or an atom. We can observe some regularities and imagine what these other worlds would be like, but we can't really make sense of them, because we have no sensory input that would give us a real feel for the environment. A tiny particle being pummeled back and forth by Brownian Motion would have no way to tell if it is making any net motion with regard to a larger reference frame.
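
That last point can be illustrated with a one-dimensional random walk in Python (my sketch; the step counts are arbitrary). The walker's average displacement stays near zero, while its typical excursion grows only as the square root of the number of steps, so no single step tells it whether it is getting anywhere.

    import math
    import random

    def random_walk(steps):
        # One unit left or right per step, chosen at random.
        return sum(random.choice([-1, +1]) for _ in range(steps))

    walks = [random_walk(10_000) for _ in range(500)]
    mean = sum(walks) / len(walks)
    rms = math.sqrt(sum(x * x for x in walks) / len(walks))
    print(mean)  # near 0: no net drift on average
    print(rms)   # near math.sqrt(10_000) = 100: excursion grows as sqrt(steps)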

4. The concept of an Observer is based on a dichotomy between Observer and Observed, which in turn is related to the scientific idea of Objective vs. Subjective observations. But part of the result of Quantum Theory is the realization that the two cannot be separated, but are part of the same system. Going one step further and considering both Observer and Observed as part of the same experiment gets you nowhere, because now there's another, bigger, observer watching the whole thing — on and on to infinity, related to the proverbial (and maybe apocryphal) Native American concept that the world is supported by layer after layer of turtles, all the way to the bottom (but, of course, there is no bottom).

One alternative theory that I find interesting (although unprovable), originated by Kuo Hsiang (died 312 AD), says that the universe is created, not by the observer, but by the universe itself:

I venture to ask whether the creator is or is not. If he is not, how can he create things? If he is, then (being one of these things), he is incapable of creating the mass of bodily forms ... The creating of things has no lord; everything creates itself and does not depend on anything else. This is the normal way of the universe10.

It seems obvious to me that the universe runs on transformations between energy and matter: matter is a noun and energy is a verb. To indulge in some forbidden anthropomorphic thinking, Matter says, "I like things just the way they are," and Energy says, "You've got to change." These processes have been going on forever, since long before life existed. So if we insist that reality only exists due to validation by a conscious observer, we have to conclude that every transformation that ever took place in the universe is a conscious observation made by the universe itself (Goswami), which might be true, but which also somehow seems trivial, and not the sort of idea that leads to any sort of enlightenment. I firmly believe that the universe is a mystery rather than a puzzle, the difference being that a puzzle has a solution.

Come to think of it, what's so important about the source of the material world? Atoms come and go as the building blocks move around continuously. Our cells die off, to be replaced by new ones. Nobody is physically the same person they were last year. What stays constant is the pattern, not the material. And why the pattern is the way it is remains a profound mystery.

In the end, neither I nor the Silosophers know any more about the subject than does another observer, Wallace Stevens's Snow Man:

One must have a mind of winter
To regard the frost and boughs
Of the pine-trees crusted with snow;

And have been cold a long time
To behold the junipers shagged with ice,
The spruces rough in the distant glitter

Of the January sun; and not to think
Of any misery in the sound of the wind,
In the sound of a few leaves,

Which is the sound of the land
Full of the same wind
That is blowing in the same bare place

For the listener, who listens in the snow,
And, nothing himself, beholds
Nothing that is not there and the nothing that is.


REFERENCES FOR PART IV:

9. Ibid., pp. 18-19.
10. Kuo Hsiang, quoted in "Creation, Myths and Doctrines of", Encyclopedia Britannica 15th ed. Vol. 5, p. 243.


APPENDIX: A BRIEF HISTORY OF QUANTUM THEORY

At the end of the 19th century, physicists were concerned that everything of importance had been discovered, and that the only work left would be the dull job of tying up loose ends. They needn't have worried, because the 20th century brought lots of confusing discoveries. For example, light sometimes behaves as if it were made of particles rather than waves. Conversely, very small particles, such as electrons, sometimes behave as if they were waves. Theoreticians gradually came up with admittedly unsatisfying methods for describing the observed effects. Some milestones:

  • Max Planck's Black-Body Radiation Theory (1900) used the concept of a quantum of energy to explain peculiarities in radiation from hot bodies. He got the right answer if he assumed that energy could only be changed in discrete amounts, but he didn't believe that really happened.
  • Albert Einstein's explanation of the Photoelectric Effect (1905) showed that light also transferred energy to electrons only in discrete amounts.
  • Niels Bohr's model for electron behavior in the hydrogen atom (1913) hypothesized that electron orbits could only change by discrete amounts.
  • Louis de Broglie's theory on the duality of particles and waves (1924) quantified the extent to which each behaves like the other, and also explained the Bohr orbits.
  • Erwin Schrödinger's Wave Equation (1926) established a more general mathematical basis for de Broglie's effects.
  • Max Born interpreted Schrödinger's wave equation (1926) as defining the probability of a particle being in a given location.
  • Werner Heisenberg's Uncertainty Principle (1927) stated the impossibility of precisely measuring two complementary physical properties, such as location and momentum. The more precisely one property is measured, the greater the error in the other measurement.
  • Einstein, Podolsky, and Rosen (1935) published a critique of the then-current interpretation of quantum theory, arguing that it's incomplete, and there must be hidden variables involved. The argument involved the Principle of Locality as applied to spin entanglement (see Part III, above).
  • John Bell (1964) showed that entanglement tests could determine whether or not there were local hidden variables in the quantum formulation (see the sketch after this list). Subsequent experiments indicate (but so far do not definitely prove) that Einstein et al. were wrong.
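
Bell's result can be sketched numerically (my illustration, using the standard CHSH combination of four measurement angles; the angles are the textbook choices). Any local-hidden-variable theory must satisfy |S| <= 2, while quantum mechanics, whose singlet correlation is E(a,b) = -cos(a-b), predicts 2*sqrt(2), about 2.83:

    import math

    def E(a, b):
        # Quantum-mechanical spin correlation for a singlet pair,
        # measured along directions a and b (in radians).
        return -math.cos(a - b)

    # Standard CHSH angle choices:
    a1, a2 = 0.0, math.pi / 2
    b1, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))  # about 2.828 = 2*sqrt(2), beyond the local bound of 2

Experiments that measure values near 2.83 are the ones indicating that Einstein et al. were wrong.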

The wave function they're talking about is a solution to Schrödinger's equation, a partial differential equation whose solutions behave like waves, similar to light waves. Like any differential equation, it produces no useful result unless some limiting parameters, called Boundary Conditions, are supplied. If you don't know what the limits are, you don't know the probability of finding a subatomic particle anywhere. Bohr and others developed the Copenhagen Interpretation, which posited that reality without these limits is really a superposition of all the possible probabilities, and, equating knowledge with reality, concluded that all the possibilities existed until an observation was made. Schrödinger, in 1935, tried to point out how ridiculous that idea was with his cat story. He thought it was pretty obvious that the Copenhagen Interpretation was an idiotic idea: the cat is either alive or dead, even before the box is opened.
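
The textbook example of boundary conditions at work is a particle confined to a one-dimensional box of width L (my illustration; any quantum mechanics text has it). The time-independent Schrödinger equation is

    -\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2} = E\,\psi ,

and the boundary conditions \psi(0) = \psi(L) = 0 (the particle cannot sit inside the walls) admit only the discrete family of solutions

    \psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right),
    \qquad
    E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2},
    \qquad n = 1, 2, 3, \dots

Without the boundary conditions, any energy E would solve the equation; with them, only a discrete ladder of energies survives, and the superposition language describes sums of these allowed solutions.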

Silosophers don't catch that Schrödinger was making a joke, and so stress that objective reality depends on the existence of a conscious observer (Part II, above).

If someone can devise further experiments involving Bell's Theorem that fill some of the gaps in the previous ones, and verify the results so far, the conclusion will be that at least one of three principles (locality, causality, or local realism) is wrong. Boltzmann had already shown that, in at least one microscopic instance, causality was not necessary. Still, most people would say that rejecting any of these three principles goes against common sense. That's not surprising, because we're exploring an area far beyond our ability to understand (cf. my principle that reality is scale-dependent, item 3 in Part IV, above).

In the 1980s, David Bohm11 (1917 - 1992) hypothesized that the common-sense principles could be rescued if the entangled particles were connected through a "pilot wave" operating in a higher dimension than the usual four (location and time). In fact, he thought there might be an infinite number of higher dimensions, but that the fifth dimension was the key to the puzzling quantum results. That got rid of "spooky action at a distance" (Einstein's phrase). Unfortunately, there's no way to know if he was correct, because his work didn't lead to any prediction anyone could calculate or measure that differed from standard quantum theory. Not having that available violates the most basic tenet of faith held by physicists, so Bohm's ideas have been largely ignored.


REFERENCES FOR APPENDIX:

11. See, for example: Bohm, David. "The Causal-Ontological Interpretation and Implicate Orders." The Essential David Bohm (Lee Nichol, editor), pp. 183 - 197. Routledge, 2003.


