Being Free

Smile! You’re at the best site ever for Being Free!



What’s Next?

While some people want to focus on their purpose and goals, others want to focus on what's next in order to be most present in the moment. It doesn't hurt to have a little help from your friends.

What’s Next? for John

Host Sherry Parrish surprises John at home with an entirely new wardrobe, to modernize his look and help him become more sociable and outgoing.

 

http://www.rl.tv/video/?videoID=1894466252001#



Paradigm Shift

For some thought-provoking questions about paradigm shifts, please read to the end of this article.

A paradigm shift (or revolutionary science) is, according to Thomas Kuhn, in his influential book The Structure of Scientific Revolutions (1962), a change in the basic assumptions, or paradigms, within the ruling theory of science. It is in contrast to his idea of normal science. According to Kuhn, “A paradigm is what members of a scientific community, and they alone, share” (The Essential Tension, 1977). Unlike a normal scientist, Kuhn held, “a student in the humanities has constantly before him a number of competing and incommensurable solutions to these problems, solutions that he must ultimately examine for himself” (The Structure of Scientific Revolutions).

Once a paradigm shift is complete, a scientist cannot, for example, reject the germ theory of disease to posit the possibility that miasma causes disease or reject modern physics and optics to posit that ether carries light. In contrast, a critic in the humanities can choose to adopt an array of stances (e.g., Marxist criticism, Freudian criticism, Deconstruction, 19th-century-style literary criticism), which may be more or less fashionable during any given period but all regarded as legitimate. Since the 1960s, the term has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events, even though Kuhn himself restricted the use of the term to the hard sciences. Compare as a structured form of Zeitgeist.

Kuhnian paradigm shifts

Kuhn used the duck-rabbit optical illusion to demonstrate the way in which a paradigm shift could cause one to see the same information in an entirely different way.

An epistemological paradigm shift was called a “scientific revolution” by epistemologist and historian of science Thomas Kuhn in his book The Structure of Scientific Revolutions.

A scientific revolution occurs, according to Kuhn, when scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has theretofore been made. The paradigm, in Kuhn's view, is not simply the current theory, but the entire worldview in which it exists, and all of the implications which come with it. This is based on features of the landscape of knowledge that scientists can identify around them.

There are anomalies for all paradigms, Kuhn maintained, that are brushed away as acceptable levels of error, or simply ignored and not dealt with (a principal argument Kuhn uses to reject Karl Popper‘s model of falsifiability as the key force involved in scientific change). Rather, according to Kuhn, anomalies have various levels of significance to the practitioners of science at the time. To put it in the context of early 20th century physics, some scientists found the problems with calculating Mercury’s perihelion more troubling than the Michelson-Morley experiment results, and some the other way around. Kuhn’s model of scientific change differs here, and in many places, from that of the logical positivists in that it puts an enhanced emphasis on the individual humans involved as scientists, rather than abstracting science into a purely logical or philosophical venture.

When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis, according to Kuhn. During this crisis, new ideas, perhaps ones previously discarded, are tried. Eventually a new paradigm is formed, which gains its own new followers, and an intellectual “battle” takes place between the followers of the new paradigm and the hold-outs of the old paradigm. Again, for early 20th century physics, the transition between the Maxwellian electromagnetic worldview and the Einsteinian Relativistic worldview was neither instantaneous nor calm, and instead involved a protracted set of “attacks,” both with empirical data as well as rhetorical or philosophical arguments, by both sides, with the Einsteinian theory winning out in the long-run. Again, the weighing of evidence and importance of new data was fit through the human sieve: some scientists found the simplicity of Einstein’s equations to be most compelling, while some found them more complicated than the notion of Maxwell’s aether which they banished. Some found Eddington’s photographs of light bending around the sun to be compelling, while some questioned their accuracy and meaning. Sometimes the convincing force is just time itself and the human toll it takes, Kuhn said, using a quote from Max Planck: “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

After a given discipline has changed from one paradigm to another, this is called, in Kuhn’s terminology, a scientific revolution or a paradigm shift. It is often this final conclusion, the result of the long process, that is meant when the term paradigm shift is used colloquially: simply the (often radical) change of worldview, without reference to the specificities of Kuhn’s historical argument.

Science and paradigm shift

A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism:  the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different.

These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another — that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes — so different that whether or not one was better, they could not be understood by one another. However, the philosopher Donald Davidson published a highly regarded essay in 1974, “On the Very Idea of a Conceptual Scheme” (Proceedings and Addresses of the American Philosophical Association, Vol. 47, (1973-1974), pp. 5–20) arguing that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn’s claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour (see for example John Hassard, Sociology and Organization Theory: Positivism, Paradigm and Postmodernity. Cambridge University Press, 1993, ISBN 0521350344.)

Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system. In 1900, Lord Kelvin famously told an assemblage of physicists at the British Association for the Advancement of Science, “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.”  Five years later, Albert Einstein published his paper on special relativity, which challenged the very simple set of rules laid down by Newtonian mechanics, which had been used to describe force and motion for over two hundred years.

In The Structure of Scientific Revolutions, Kuhn wrote, “Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science.” (p. 12) Kuhn’s idea was itself revolutionary in its time, as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a “paradigm shift” in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift. In the social sciences, people can still use earlier ideas to discuss the history of science.

Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn’s model, which synthesizes his original view with the gradualist model that preceded it.

Examples of paradigm shifts

Natural sciences

Some of the “classical cases” of Kuhnian paradigm shifts in science are:

Social sciences

In Kuhn’s view, the existence of a single reigning paradigm is characteristic of the sciences, while philosophy and much of social science were characterized by a “tradition of claims, counterclaims, and debates over fundamentals.” Others have applied Kuhn’s concept of paradigm shift to the social sciences.

  • The movement known as the Cognitive Revolution: a shift away from behaviourist approaches to psychological study and toward the acceptance of cognition as central to studying human behaviour.
  • The Keynesian Revolution is typically viewed as a major shift in macroeconomics. According to John Kenneth Galbraith, Say’s Law dominated economic thought prior to Keynes for over a century, and the shift to Keynesianism was difficult. Economists who contradicted the law, which implied that underemployment and underinvestment (coupled with oversaving) were virtually impossible, risked losing their careers. In his magnum opus, Keynes cited one of his predecessors, John Atkinson Hobson, who was repeatedly denied positions at universities for his heretical theory.
  • Later, the movement for Monetarism over Keynesianism marked a second divisive shift. Monetarists held that fiscal policy was not effective for stabilizing inflation and that inflation was solely a monetary phenomenon, in contrast to the Keynesian view of the time that both fiscal and monetary policy were important. Keynesians later adopted much of the Monetarist view of the quantity theory of money and the shifting Phillips curve, theories they had initially rejected.

Marketing

In the later part of the 1990s, ‘paradigm shift’ emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication. In his book Mind The Gaffe, author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains the phrase. It is referred to in several articles and books as abused and overused to the point of becoming meaningless.

Other uses

The term “paradigm shift” has found uses in other contexts, representing the notion of a major change in a certain thought-pattern — a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing:

  • M. L. Handa, a professor of sociology in education at O.I.S.E. University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by “paradigm” and introduces the idea of a “social paradigm”. In addition, he identifies the basic component of any social paradigm. Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as “paradigm shift.” In this respect, he focuses on the social circumstances which precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education.
  • The concept has been developed for technology and economics in the identification of new techno-economic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Joseph Schumpeter‘s idea of creative destruction. Examples include the move to mass production and the introduction of microelectronics.
  • In the arena of political science, the concept has been applied to the ethos of war. Evolutionary biologist Judith Hand, in a paper entitled “To Abolish War,” argued that a paradigm shift is possible from a global ethos that operates on the assumption that war is an inevitable aspect of human nature to a global ethos that rejects war under any circumstances.
  • Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped to usher in the environmentalist movement, which gained great prominence in the years immediately following the distribution of those images.
  • Hans Küng applies Thomas Kuhn’s theory of paradigm change to the entire history of Christian thought and theology. He identifies six historical “macromodels”: 1) the apocalyptic paradigm of primitive Christianity, 2) the Hellenistic paradigm of the patristic period, 3) the medieval Roman Catholic paradigm, 4) the Protestant (Reformation) paradigm, 5) the modern Enlightenment paradigm, and 6) the emerging ecumenical paradigm. He also discusses five analogies between natural science and theology in relation to paradigm shifts. Küng addresses paradigm change in his books, Paradigm Change in Theology and Theology for the Third Millennium: An Ecumenical View.


Some "social" paradigm shifts taking place today may involve changing belief systems orchestrated by politicians and others with societal power, such as the acceptance of the loss of privacy that comes with technology, supposedly because of terrorism, in spite of the U.S. Constitution and Bill of Rights. Technology has also opened the door to excessive personal criticism through Facebook, which reaches much further than the Scarlet Letter. Consider, too, the gradual acceptance of euthanasia, somewhat quietly via hospice, while others are criminalized for assisted suicide. And, as an example of a shift in progress that simply waits for those who resist to die, consider the current acceptance of pet tags, leading to smart cards and then to planned human implants carrying financial, medical, and other information. Can a conscious minority change a prevailing shift to one without obvious negative consequences? Or with or without new science?



What is Consciousness, really?

What is Consciousness?

1. The Problem of Consciousness

Conventional explanations portray consciousness as an emergent property of classical computer-like activities in the brain’s neural networks. The prevailing views among scientists in this camp are that

1) patterns of neural network activities correlate with mental states,

2) synchronous network oscillations in thalamus and cerebral cortex temporally bind information, and

3) consciousness emerges as a novel property of computational complexity among neurons.

However, these approaches appear to fall short in fully explaining certain enigmatic features of consciousness, such as:

  • The nature of subjective experience, or 'qualia', our 'inner life' (Chalmers' "hard problem");
  • Binding of spatially distributed brain activities into unitary objects in vision, and a coherent sense of self, or ‘oneness’;
  • Transition from pre-conscious processes to consciousness itself;
  • Non-computability, or the notion that consciousness involves a factor which is neither random, nor algorithmic, and that consciousness cannot be simulated (Penrose, 1989, 1994, 1997);
  • Free will; and,
  • Subjective time flow.

Brain imaging technologies demonstrate anatomical location of activities which appear to correlate with consciousness, but which may not be directly responsible for consciousness.

Figure 1. PET scan image of brain showing visual and auditory recognition (from S Petersen, Neuroimaging Laboratory, Washington University, St. Louis. Also see J.A. Hobson “Consciousness,” Scientific American Library, 1999, p. 65).

Figure 2. Electrophysiological correlates of consciousness.

How do neural firings lead to thoughts and feelings? The conventional (a.k.a. functionalist, reductionist, materialist, physicalist, computationalist) approach argues that neurons and their chemical synapses are the fundamental units of information in the brain, and that conscious experience emerges when a critical level of complexity is reached in the brain’s neural networks.

The basic idea is that the mind is a computer functioning in the brain (brain = mind = computer). However in fitting the brain to a computational view, such explanations omit incompatible neurophysiological details:

  • Widespread apparent randomness at all levels of neural processes (is it really noise, or underlying levels of complexity?);
  • Glial cells (which account for some 80% of brain);
  • Dendritic-dendritic processing;
  • Electrotonic gap junctions;
  • Cytoplasmic/cytoskeletal activities; and,
  • Living state (the brain is alive!)

A further difficulty is the absence of testable hypotheses in emergence theory. No threshold or rationale is specified; rather, consciousness “just happens”.

Finally, the complexity of individual neurons and synapses is not accounted for in such arguments. Since many forms of motile single-celled organisms lacking neurons or synapses are able to swim, find food, learn, and multiply through the use of their internal cytoskeleton, can they be considered more advanced than neurons?

Figure 3. Single cell paramecium can swim and avoid obstacles using its cytoskeleton.

Are neurons merely simple switches?

2. Microtubules

Activities within cells ranging from single-celled organisms to the brain’s neurons are organized by a dynamic scaffolding called the cytoskeleton, whose major components are microtubules. Hollow, crystalline cylinders 25 nanometers in diameter, microtubules are comprised of hexagonal lattices of proteins, known as tubulin. Microtubules are essential to cell shape, function, movement, and division. In neurons microtubules self-assemble to extend axons and dendrites and form synaptic connections, then help to maintain and regulate synaptic activity responsible for learning and cognitive functions. Microtubules interact with membrane structures mechanically by linking proteins, chemically by ions and “second-messenger” signals, and electrically by voltage fields.

Figure 4. Schematic view of two neurons connected by chemical synapse. Axon terminal (above) releases neurotransmitter vesicles which bind receptors on post-synaptic dendritic spine. Within neurons are visible cytoskeletal structures microtubules (“MTs” – thicker tubes) as well as actin, synapsin and others which connect MTs to membranes. Also, MT-associated proteins (“MAPs”) interconnect MTs.

Figure 5. Immunoelectron micrograph of dendritic microtubules interconnected by MAPs. Some MTs have been sheared, revealing internal hollow core. The granular “corn-cob” surface of MTs is barely evident to close inspection. Scale bar, lower left: 100 nanometers. With permission from Hirokawa, 1991.

Figure 6. Crystallographic structure of microtubules.

While microtubules have traditionally been considered as purely structural elements, recent evidence has revealed that mechanical signaling and communication functions also exist:

  • MT "kinks" travel at 15 microns (2,000 tubulin subunits) per second. Vernon and Woolley (1995) Experimental Cell Research 220(2):482-494
  • MTs vibrate (100-650 Hz) with nanometer displacement. Yagi, Kamimura, Kaniya (1994) Cell Motility and the Cytoskeleton 29:177-185
  • MTs optically "shimmer" when metabolically active. Hunt and Stebbings (1994) Cell Motility and the Cytoskeleton 17:69-78
  • Mechanical signals propagate through microtubules to the cell nucleus, a mechanism for MT regulation of gene expression. Maniotis, Chen and Ingber (1996) Proc. Natl. Acad. Sci. USA 94:849-854
  • Measured tubulin dipoles and MT conductivity suggest MTs are ferroelectric at physiological temperature (Tuszynski; Unger 1998)

Current models propose that tubulins within microtubules undergo coherent excitation, switching between two or more conformational states in nanoseconds. Dipole couplings among neighboring tubulins in the microtubule lattice form dynamical patterns, or “automata,” which evolve, interact and lead to the emergence of new patterns. Research indicates that microtubule automata computation could support classical information processing, transmission and learning within neurons.

Figure 7. Left: Microtubule (MT) structure: a hollow tube of 25 nanometers diameter, consisting of 13 columns of tubulin dimers arranged in a skewed hexagonal lattice (Penrose, 1994). Right (top): Each tubulin molecule may switch between two (or more) conformations, coupled to London forces in a hydrophobic pocket. Right (bottom): Each tubulin can also exist (it is proposed) in quantum superposition of both conformational states.

Figure 8. Microtubule automaton simulation (from Rasmussen et al., 1990). Eight nanosecond time steps of a segment of one microtubule are shown in “classical computing” mode in which conformational states of tubulins are determined by dipole-dipole coupling between each tubulin and its six (asymmetrical) lattice neighbors. Conformational states form patterns which move, evolve, interact and lead to emergence of new patterns.
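To make the automaton idea concrete, here is a deliberately simplified sketch: a one-dimensional ring of "tubulins," each in one of two conformational states, updated in discrete steps from its neighbors. This is only a toy stand-in for the simulation described above (the actual model uses a skewed hexagonal lattice with explicit dipole-dipole couplings), and the update rule below is invented purely for illustration.

```python
# Toy one-dimensional "microtubule automaton": each site represents a tubulin
# in conformational state 0 or 1, updated synchronously from its neighbors.
# Illustrative stand-in only: the model described above uses a skewed
# hexagonal lattice with dipole-dipole couplings (Rasmussen et al., 1990),
# and this neighborhood rule is invented for demonstration.

import random

N_TUBULINS = 40
STEPS = 8  # the figure above shows eight nanosecond time steps

def step(states):
    """Toy rule: a tubulin takes state 1 if either neighbor is 1, else 0."""
    n = len(states)
    return [1 if states[i - 1] or states[(i + 1) % n] else 0 for i in range(n)]

states = [random.randint(0, 1) for _ in range(N_TUBULINS)]
for _ in range(STEPS):
    print("".join("#" if s else "." for s in states))
    states = step(states)
```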

Microtubule automata switching offers a potentially vast increase in the computational capacity of the brain. Conventional approaches focus on synaptic switching at the neural level, which optimally yields about 10^18 operations per second in human brains (~10^11 neurons/brain with ~10^4 synapses/neuron, switching at ~10^3 per second). Microtubule automata switching can explain some 10^27 operations per second (~10^11 neurons with ~10^7 tubulins/neuron, switching at ~10^9 per second). Indeed, the fact that all biological cells typically contain approximately 10^7 tubulins could account for the adaptive behaviors of single-celled organisms which have no nervous system or synapses. Rather than simple switches, neurons are complex computers.
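As a quick sanity check, the two capacity figures are simply products of the quoted counts and switching rates. A minimal sketch in Python, using only the approximate values given above:

```python
# Order-of-magnitude estimates of brain computational capacity, using the
# approximate counts quoted in the text above.

neurons = 1e11             # ~10^11 neurons per human brain

# Conventional synaptic-switching estimate
synapses_per_neuron = 1e4  # ~10^4 synapses per neuron
synapse_rate = 1e3         # ~10^3 switches per second
print(f"synaptic estimate:    {neurons * synapses_per_neuron * synapse_rate:.0e} ops/sec")  # ~1e18

# Microtubule-automaton estimate
tubulins_per_neuron = 1e7  # ~10^7 tubulins per neuron
tubulin_rate = 1e9         # ~10^9 switches per second (nanosecond scale)
print(f"microtubule estimate: {neurons * tubulins_per_neuron * tubulin_rate:.0e} ops/sec")  # ~1e27
```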

3. Pan-experiential philosophy meets modern physics

Still, greater computational complexity and ultra-reductionism to the level of microtubule automata cannot address the enigmatic features of consciousness, in particular the nature of conscious experience. Something more is required. If functional approaches and emergence are incomplete, perhaps the raw components of mental processes (qualia) are fundamental properties of nature (like mass, spin or charge). This view has long been held by pan-psychists: for example, Buddhist and other Eastern philosophies speak of a "universal mind." Following the ancient Greeks, Spinoza argued in the 17th century that some form of consciousness existed in everything physical. The 17th-century mathematician and philosopher Leibniz proposed that the universe was composed of an infinite number of fundamental units, or "monads," each possessing a form of primitive psychological being. In the 20th century, Russell claimed that there was a common entity underlying both mental and physical processes, while Wheeler and Chalmers have maintained that there exists an experiential aspect to fundamental information.

Of particular interest is the work of the 20th century philosopher Alfred North Whitehead, whose pan-experiential view remains most consistent with modern physics. Whitehead argued that consciousness is a process of events occurring in a wide, basic field of proto-conscious experience. These events, or “occasions of experience,” may be comparable to quantum state reductions, or actual events in physical reality (Shimony, 1993). This suggests that consciousness may involve quantum state reductions (e.g. a form of quantum computation).

But what of Whitehead’s basic field of proto-conscious experience? In what medium are the “occasions of experience” (?quantum state reductions) occurring? Could proto-conscious qualia simply exist in the empty space of the universe?

What is empty space? Historically, empty space has been described as either an absolute void or a pattern of fundamental geometry. Democritus and the Michelson-Morley results argued for "nothingness," while Aristotle ("plenum") and Maxwell ("ether") rejected the notion of emptiness in favor of "something" – a background pattern. Einstein weighed in on both sides of this debate, initially supporting the concept of a void with his theory of special relativity, but then reversing himself in his theory of general relativity with its curved space and geometric distortions: the space-time metric. Could proto-conscious qualia be properties of the metric, fundamental space-time geometry?

What is fundamental space-time geometry? We know that at extremely small scales, space-time is not smooth, but quantized. Quantum electrodynamics and quantum field theory predict virtual particle/waves (or photons) that pop into and out of existence, creating quantum “foam” in their wake. The presence of virtual photons in space-time has been verified (Lamoreaux, 1997).

Figure 9. Quantum electrodynamics (QED) predicts a foam of erupting and collapsing virtual particles which may be visualized as topographic distortions of the fabric of spacetime. Adapted from Thorne (1994) by Dave Cantrell.

Figure 10. A: The Casimir force of the quantum vacuum zero point fluctuation energy may be measured by placing two macroscopic surfaces separated by a tiny gap d1. As some virtual photons are excluded in the gap, the net "quantum foam" pressure forces the surfaces together. In Lamoreaux's (1997) experiment, d1 was in the range of 0.6 to 6.0 microns (~1500 nanometers). B: George Hall (1996; 1997) has calculated the Casimir force on microtubules. As the force is proportional to d^-4, and d2 for microtubules is 15 nanometers, the predicted Casimir force is 10^6 times greater on microtubules (per equivalent surface area) than that measured by Lamoreaux. Hall calculates a range of Casimir force on microtubules (length dependent) from 0.5 to 20 atmospheres.

At the basic level, this granularity has been modeled by Roger Penrose as a dynamic web of quantum spins. These "spin networks" create an array of geometric volumes and configurations at the Planck scale (10^-33 cm, 10^-43 sec) which dynamically evolve and define space-time geometry.

Figure 11. A spin network. Introduced by Roger Penrose (1971) as a quantum mechanical description of the geometry of space, spin networks describe spectra of discrete Planck scale volumes and configurations (with permission, Rovelli and Smolin, 1995).

If spin networks are the fundamental level of space-time geometry, they could provide the basis for proto-conscious experience. In other words, particular configurations of quantum spin geometry would convey particular types of qualia, meaning and aesthetic values. A process at the Planck scale (e.g. quantum state reductions) could access and select configurations of experience.

For illustration, 4 dimensional space-time geometry is often portrayed as a 2 dimensional “space-time sheet.”

Figure 12. According to Einstein’s general relativity, mass is equivalent to curvature in spacetime geometry. Penrose applies this equivalence to the fundamental Planck scale. The motion of an object between two conformational states of a protein such as tubulin (top) is equivalent to two curvatures in spacetime geometry as represented as a two-dimensional spacetime sheet (bottom).

4. Quantum computing and consciousness

If proto-conscious information is embedded at the near-infinitesimal Planck scale, how could it be linked to biology? To begin, Penrose extends Einstein’s theory of general relativity (in which mass equates to curvature in space-time) down to the Planck scale. As a result, specific arrangements of mass are, in reality, specific configurations of space-time geometry. Events at the very small scale, however, are subject to the seemingly bizarre goings-on of quantum theory.

A century of experimental observation of quantum systems has shown that, at least at small scales, particles (mass) can exist in two or more states or locations simultaneously (quantum superposition). Penrose takes superposition (e.g. a mass in two places simultaneously) to be simultaneous space-time curvature in opposite directions – a separation, or bubble ("blister"), in underlying reality.

Figure 13. Mass superposition, e.g. a protein occupying two different conformational states simultaneously (top) is equivalent, according to Penrose, to simultaneous spacetime curvature in opposite directions – a separation, or bubble (“blister”) in fundamental spacetime geometry.

Figure 14. Spacetime superposition/separation bubble (bottom) will reduce, or collapse to one or the other spacetime curvatures (top).

Superposition and subsequent reduction, or collapse, to single, classical states may have profoundly important applications in technology, as well as toward the understanding of consciousness. In the 1980s Benioff, Feynman, Deutsch and other physicists proposed that states in a quantum system could interact (via entanglement) and enact computation while in quantum superposition of all possible states ("quantum computing"). Whereas classical computing processes bits (or conformational states) as either 1 or 0, quantum computation involves the processing of superposed "qubits" of both 1 and 0 (and other states) simultaneously.

Figure 15. Qubits useful in quantum computation may exist in two or more (“both”) states simultaneously prior to collapse, or reduction (left), and then in single, classical (“either, or”) states after reduction (right). Spin, quantum dots and photon polarization qubits have been proposed and/or demonstrated in prototype quantum computers, and tubulin proteins and spacetime geometry are proposed in the Orch OR model to perform as qubits also.

Quantum theory also tells us that two or more particles, if once together, will remain somehow connected ("entangled"), even when separated by great distances. Qubits can interact by quantum entanglement, so that quantum computing is able to achieve a nearly infinite parallel computational ability. Quantum computers, if they can be constructed, will be able to solve important problems (e.g. factoring large numbers) with efficiency unattainable in classical computers (Shor, 1994).
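The contrast between classical bits and superposed, entangled qubits can be illustrated with a small state-vector simulation. The sketch below uses plain NumPy (no quantum computing library is assumed) to prepare two qubits in the Bell state (|00> + |11>)/sqrt(2) and sample measurement outcomes; the two bits always agree, which is the signature of entanglement.

```python
# Minimal two-qubit state-vector sketch (plain NumPy): prepare the Bell state
# (|00> + |11>)/sqrt(2) and sample measurements. Perfectly correlated outcomes
# illustrate entanglement; nonzero amplitudes on several basis states at once
# illustrate superposition.

import numpy as np

ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate on one qubit
CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT, basis |00>,|01>,|10>,|11>
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.kron(H @ ket0, ket0)   # (|00> + |10>)/sqrt(2): qubit 0 in superposition
state = CNOT @ state              # (|00> + |11>)/sqrt(2): the two qubits entangled
print("amplitudes over |00>,|01>,|10>,|11>:", np.round(state, 3))

# "Collapse": sample outcomes with Born-rule probabilities |amplitude|^2.
probs = np.abs(state) ** 2
rng = np.random.default_rng(0)
outcomes = rng.choice(["00", "01", "10", "11"], size=10, p=probs)
print("measurement outcomes:", list(outcomes))  # only '00' and '11' ever appear
```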

Researchers have developed a “Figure of Merit” M for proposed quantum computing technologies (Modified from Barenco, 1996 & DiVincenzo, 1995). M is related to the number of elementary operations performed per qubit before the superposition/computation is disrupted by decoherence (or in the case of microtubules in the Orch OR proposal, before objective reduction terminates the superposition).

Technology | t_elem (sec) | T_decoherence (sec) | M (operations/qubit pre-decoherence)
Mossbauer nucleus | 10^-19 | 10^-10 | 10^9
Electrons GaAs | 10^-13 | 10^-10 | 10^3
Electrons Au | 10^-14 | 10^-8 | 10^6
Trapped ions | 10^-14 | 10^-1 | 10^13
Optical cavities | 10^-14 | 10^-5 | 10^9
Electron spin | 10^-7 | 10^-3 | 10^4
Electron quantum dots | 10^-6 | 10^-3 | 10^3
Nuclear spin | 10^-3 | 10^4 | 10^7
Superconductor islands | 10^-9 | 10^3 | 10^6
Microtubule tubulins | 10^-9 | 10^-1 | 10^8

Results, or solutions in quantum computing are obtained when, after a period of quantum superposition/computation, the qubits “collapse”, or reduce to classical bit states (“collapse of the wave function”). As quantum superposition may only occur in isolation from environment, collapse (reduction) may be induced by breaching isolation (this is what is envisioned in technological quantum computers – making a measurement). But what about quantum superpositions which remain isolated, for example Schrodinger’s mythical cat which is both dead and alive? This is the famous problem of collapse of the wave function, or quantum state reduction.

5. Roger Penrose’s ‘objective reduction’ OR

How or why do quantum superpositioned states which avoid environmental interactions become classical and definite in the macro-world? Many physicists now believe that some objective factor disturbs the superposition and causes it to collapse. Roger Penrose proposes that this factor is an intrinsic feature of space-time geometry itself: quantum gravity. According to Penrose's interpretation of general relativity, quantum superposition (e.g. separation of mass from itself) is equivalent to separation in underlying space-time geometry, that is, simultaneous space-time curvatures in opposite directions. Penrose argues that these separations in fundamental reality ("bubbles" or "blisters") are unstable, even when isolated from the environment, and will reduce spontaneously (and non-computably) to specific states at a critical threshold of space-time separation (thereby avoiding the need for "multiple worlds"). This objective threshold is defined by the indeterminacy principle:

E = h/T

where E is the gravitational self-energy of the superposed mass separated from itself, h is Planck’s constant divided by 2pi, and T is the coherence time until collapse occurs. Thus, the size and energy of a system in superposition, or the degree of space-time separation, is inversely related to the time T until reduction. (E can be calculated from the superposed mass m and the separation distance a. See e.g. Hameroff and Penrose, 1996a.)
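Numerically, the relation is a simple inverse proportionality between the gravitational self-energy of the superposition and its lifetime. A minimal sketch follows; the 25 msec example is the "coherent 40 Hz" case discussed later in the text, and converting E into an actual mass distribution is the part that requires the detailed Hameroff-Penrose calculation, which is not attempted here.

```python
# E = h/T (h here meaning Planck's constant over 2*pi, as defined above):
# the gravitational self-energy E of the superposed mass sets the time T
# until objective reduction, and vice versa.

HBAR = 1.054571817e-34  # J*s

def collapse_time(E_joules):
    """Time T (seconds) until objective reduction for self-energy E."""
    return HBAR / E_joules

def self_energy(T_seconds):
    """Gravitational self-energy E (joules) required for reduction in time T."""
    return HBAR / T_seconds

# Example: a 25 msec event (the "coherent 40 Hz" case discussed later)
print(f"E for T = 25 msec: {self_energy(25e-3):.1e} J")               # ~4.2e-33 J
print(f"T for that E:      {collapse_time(4.2e-33) * 1e3:.0f} msec")  # ~25 msec
```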

Assuming isolation, the following masses in superposition would collapse at the designated times, according to Penrose’s objective reduction:

Mass (m) | Time (T)
Nucleon | 10^7 years
Beryllium ion | 10^6 years
Water speck, 10^-5 cm radius | hours
Water speck, 10^-4 cm radius | 1/20 second
Water speck, 10^-3 cm radius | 10^-3 seconds
Schrodinger's cat (m = 1 kg, a = 10 cm) | 10^-37 seconds

If quantum computation with objective reduction were occurring in the brain, enigmatic features of consciousness (see Section above – The Problem of Consciousness) could be explained:

  • By occurring as a self-organizing process in what is suggested to be a pan-experiential medium of fundamental spacetime geometry, objective reductions could account for the nature of subjective experience by accessing and selecting proto-conscious qualia.
  • By virtue of involvement of unitary (entangled) quantum states during pre-conscious quantum computation and the unity of quantum information selected in each objective reduction, the issue of binding may be resolved.
  • Regarding the transitions from pre-conscious processes to consciousness itself, the pre-conscious processes may equate to the quantum superposition/quantum computation phase, and consciousness itself to the actual (instantaneous) objective reduction events. Consciousness may then be seen as a sequence of discrete events (e.g. at 40 Hz).
  • As Penrose objective reductions are proposed to be non-computable (reflecting influences from space-time geometry which are neither random, nor algorithmic) conscious choices and understanding may be similarly non-computable.
  • Free will may be seen as a combination of deterministic pre-conscious processes acted on by a non-computable influence.
  • Subjective time flow derives from a sequence of irreversible quantum state reductions.

Could objective reduction be occurring in the brain? If so (from E = h/T), time T would be expected to coincide with known neurophysiological processes with time scales from tens to hundreds of milliseconds (e.g. 25 msec for coherent 40 Hz, 100 msec for alpha EEG, 500 msec for sensory threshold events such as Libet's famous 1979 experiments). In what types of brain structures might quantum computation with objective reduction occur? For T in this range we can calculate (from E = h/T, and with E related to mass m as described in Hameroff and Penrose, 1996a) that superpositioned mass m in the nanogram range would be required for conscious events of 40 to 500 msec. What brain components in nanogram quantities could support quantum computation and objective reduction? What is m?
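One way to see why the answer comes out in the nanogram range: the Orch OR estimates later in the text put roughly 2 x 10^10 tubulins in superposition for a 25 msec event, and a tubulin dimer has a mass of about 110 kilodaltons (a standard value assumed here, not stated in the text), which multiplies out to a few nanograms:

```python
# Rough check of the "nanogram range" claim. Assumption (not from the text):
# a tubulin dimer has a mass of roughly 110 kDa. The tubulin count is the
# n_t ~ 2e10 figure given later for a 25 msec (40 Hz) event.

DALTON_KG = 1.66054e-27            # kg per dalton
tubulin_mass_kg = 110e3 * DALTON_KG
n_tubulins = 2e10

total_kg = n_tubulins * tubulin_mass_kg
print(f"{total_kg:.1e} kg = {total_kg * 1e12:.1f} nanograms")  # a few nanograms
```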

6. Are proteins qubits?

Biological life is organized by proteins. By changing their conformational shape, proteins are able to perform a wide variety of functions, including muscle movement, molecular binding, enzyme catalysis, and metabolism. Dynamical protein structure results from a "delicate balance among powerful countervailing forces" (Voet & Voet, 1995). The types of forces acting on proteins include charged interactions (such as covalent, ionic, electrostatic, and hydrogen bonds), hydrophobic interactions, and dipole interactions. The latter group, also known as van der Waals forces, encompasses three types of interactions:

  • permanent dipole – permanent dipole,
  • permanent dipole – induced dipole, and
  • induced dipole – induced dipole (London dispersion forces)

As charged interactions cancel out, hydrophobic and dipole-dipole forces are left to regulate protein structure. While induced dipole - induced dipole interactions, or London dispersion forces, are the weakest of the forces outlined above, they are also the most numerous and influential. Indeed, they may be critical to protein function. For example, anesthetics are able to bind in hydrophobic "pockets" of certain neural proteins and ablate consciousness by disrupting these London forces. The London force attraction between any two atoms is usually less than a few kilojoules per mole; however, since thousands occur in each protein, they add up to thousands of kilojoules per mole and cause changes in conformational structure. As London forces are instrumental in protein folding (a problem intractable to conventional computational simulation), protein conformation and folding may be quantum computations.
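The arithmetic behind that claim is straightforward; the per-contact energy and contact count below are illustrative assumptions consistent with the ranges quoted above, not measured values.

```python
# Order-of-magnitude arithmetic: individually weak London forces add up.
# Illustrative assumed values, consistent with the ranges quoted above.

kJ_per_contact = 1.0          # ~a few kJ/mol or less per London interaction
contacts_per_protein = 3000   # thousands of such contacts per protein

print(f"~{kJ_per_contact * contacts_per_protein:.0f} kJ/mol per protein")  # thousands of kJ/mol
```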

Figure 16. A type of van der Waals force, the London dispersion force, is quantum mechanical and governs protein conformation.

Figure 17. A. An anesthetic gas molecule (A) in a hydrophobic pocket of critical brain protein (receptors, channels, tubulin etc.) prevents normally occurring London forces, preventing protein conformational dynamics and superposition necessary for consciousness. B. A psychedelic hallucinogen (P) acts in hydrophobic pocket in critical brain protein to promote and sustain superposition, ‘expanding’ consciousness (see Figure 25).

Figure 18. A. Protein qubit. A protein such as tubulin can exist in two conformations determined by quantum London forces in hydrophobic pocket (top), or superposition of both conformations (bottom). B. The protein qubit corresponds to two alternative spacetime curvatures (top), and superposition/separation (bubble) of both curvatures (bottom).

If proteins are qubits, arrays or assemblies of proteins in some type of organelle or biomolecular structure could be a quantum computer. Ideal structures would be:

  • Abundant;
  • Capable of information processing and computation;
  • Functionally important (e.g., regulating synapses);
  • Self-organizing;
  • Tunable by input information (e.g., microtubule-associated protein orchestration);
  • Periodic and crystal-like in structure (e.g., dipole lattice);
  • Isolated (transiently) from environmental decoherence;
  • Conformationally coupled to quantum events (e.g., London forces);
  • Cylindrical wave-guide structure; and,
  • Plasma-like charge layer coating.

While various structures/organelles have been suggested (e.g., membrane proteins, clathrins, myelin, pre-synaptic grids, and calcium ions), the most logical candidates are microtubule automata.

Figure 19. The Penrose-Hameroff Orch OR model was hatched on a hike in the Grand Canyon following the Tucson I conference in April, 1994. From left: David Chalmers, Rhett Savage, Marie-Francoise Insinna, Seamus O’Morain, Stuart Hameroff, Roger Penrose, Vanessa Penrose, Jeff Tollaksen. Photo by Ezio Insinna.

7. Microtubule quantum automata – The ‘Orch OR’ model

The Penrose – Hameroff model of “orchestrated objective reduction” (Orch OR) proposes that:

  • Quantum superposition/computation occurs in microtubule automata within brain neurons and glia;
  • Tubulin subunits within microtubules act as qubits, switching between states on a nanosecond (10^-9 sec) scale governed by quantum London forces in hydrophobic pockets;
  • Tubulin qubits interact computationally by nonlocal quantum entanglement according to the Schrodinger equation;

Figure 20. The basic idea in the Orch OR model is that each tubulin in a microtubule is a qubit.

Figure 21. Microtubule automaton sequence simulation in which classical computing (step 1) leads to emergence of quantum coherent superposition (steps 2-6) in certain (gray) tubulins due to pattern resonance. Step 6 (in coherence with other microtubule tubulins) meets critical threshold related to quantum gravity for self-collapse (Orch OR). Consciousness (Orch OR) occurs in the step 6 to 7 transition. Step 7 represents the eigenstate of mass distribution of the collapse which evolves by classical computing automata to regulate neural function. Quantum coherence begins to re-emerge in step 8.


  • The quantum superposition/computation phase corresponds to pre-conscious processing, which continues until the threshold for objective reduction (OR) is reached according to E = h/T;
  • At that instant collapse, or OR occurs which is an actual event in fundamental space-time geometry. This event selects a particular configuration of Planck-scale experiential geometry, enacting a “moment of awareness,” “occasion of experience” or conscious event.

Figure 22. Schematic graph of proposed pre-conscious quantum superposition (number of tubulins) emerging versus time in microtubules. Area under curve connects superposed mass energy E with collapse time T in accordance with E = h/T. E may be expressed as n_t, the number of tubulins whose mass separation (and separation of underlying space-time) for time T will self-collapse. For T = 25 msec (e.g. 40 Hz oscillations), n_t = 2 x 10^10 tubulins.

Figure 23. Schematic of quantum computation of three tubulins which begin (left) in initial classical states, then enter isolated quantum superposition in which all possible states coexist. After reduction, one particular classical outcome state is chosen (right).

Figure 24. Schematic quantum computation in spacetime curvature for three mass distributions (e.g. tubulin conformations in Figure 23) which begin (left) in initial classical states, then enter isolated quantum superposition in which all possible states coexist. After reduction, one particular classical outcome state is chosen (right).


  • A sequence of OR events (e.g. at 40 Hz) provides a forward flow of subjective time and “stream” of consciousness;

Figure 25. Quantum superposition/entanglement in microtubules for 5 states related to consciousness. Area under each curve equivalent in all cases. A. Normal 30 Hz experience: as in Figure 22. B. Anesthesia: anesthetics bind in hydrophobic pockets and prevent quantum delocalizability and coherent superposition. C. Heightened Experience: increased sensory experience input (for example) increases rate of emergence of quantum superposition. Orch OR threshold is reached faster, and Orch OR frequency increases. D. Altered State: even greater rate of emergence of quantum superposition due to sensory input and other factors promoting quantum state (e.g. meditation, psychedelic drug etc.). Predisposition to quantum state results in baseline shift and collapse so that conscious experience merges with normally sub-conscious quantum computing mode. E. Dreaming: prolonged sub-threshold quantum superposition time.


  • At the nanoscale each event determines new classical states of microtubule automata which regulate synaptic and other neural functions;
  • During the pre-conscious quantum superposition/computation phase, oscillations are “tuned” and “orchestrated” by microtubule-associated proteins (MAPs), providing a feedback loop between the biological system and the quantum state (hence Orch OR);
  • Quantum states in microtubules may link to those in microtubules in other neurons and glia by tunneling through gap junctions, permitting extension of the quantum state throughout significant volumes of the brain.

Figure 26. Schematic of proposed quantum superposition and entanglement in microtubules in three dendrites interconnected by tunneling through gap junctions. Within each neuronal dendrite, microtubule-associated-protein (MAP) attachments breach isolation and prevent quantum coherence; MAP attachment sites thus act as “nodes” which tune and orchestrate quantum oscillations and set possibilities and probabilities for collapse outcomes (orchestrated objective reduction: Orch OR). Gap junctions may enable quantum tunneling among dendrites resulting in macroscopic quantum states.


From E = h/T we can calculate the size and extension of Orch OR events which correlate with subjective or neurophysiological descriptions of conscious events.

Event | T | E
Buddhist "moment of awareness" | 13 ms | 4 x 10^15 nucleons (4 x 10^10 tubulins ~ 40,000 neurons)
"Coherent 40 Hz" oscillations | 25 ms | 2 x 10^15 nucleons (2 x 10^10 tubulins ~ 20,000 neurons)
EEG alpha rhythm (8 to 12 Hz) | 100 ms | 5 x 10^14 nucleons (5 x 10^9 tubulins ~ 5,000 neurons)
Libet's sensory threshold (1979) | 500 ms | 10^14 nucleons (10^9 tubulins ~ 1,000 neurons)
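Because E = h/T, the product of E and T should be the same for every row of this table, and the quoted figures are in fact mutually consistent to within rounding. A quick check, with the numbers transcribed from the table above:

```python
# E = h/T implies the product E*T is the same for every conscious event.
# Rows transcribed from the table above: (event, T in seconds, E in nucleons).

rows = [
    ("Buddhist moment of awareness", 13e-3, 4e15),
    ("Coherent 40 Hz oscillations",  25e-3, 2e15),
    ("EEG alpha rhythm",            100e-3, 5e14),
    ("Libet sensory threshold",     500e-3, 1e14),
]
for name, T, E in rows:
    print(f"{name:30s} E*T = {E * T:.1e} nucleon-seconds")
# Every row gives ~5e13 nucleon-seconds, so the table is internally consistent.
```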

But how could delicate quantum superposition/computation be isolated from environmental decoherence in the brain (generally considered to be a noisy thermal bath) while also communicating (input/output) with the environment? One possibility is that quantum superposition/computation occurs in an isolation phase which alternates with a communicative phase, for example at 40 Hz. One of the most primitive biological functions is the transition of cytoplasm between a liquid, solution (“sol”) phase, and a solid, gelatinous (“gel”) phase due to assembly and disassembly of the cytoskeletal protein actin. Actin sol-gel transitions can occur at 40 Hz or faster, and are known to be involved in neuronal synaptic release mechanisms.

Mechanisms for enabling microtubule quantum computation and avoiding decoherence long enough to reach threshold may include:

  • Sol-gel transitions;

Figure 27. Immunoelectron micrograph of cytoplasm showing microtubules (arrows), intermediate filaments (arrowheads) and actin microfilaments (mf). Dense gel of actin (lower left) completely obscures (?isolates) microtubules. Actin sol-gel transitions can occur at 40 Hz or faster. Scale bar (upper right): 500 nanometers. With permission from Svitkina et al, 1995.


  • Plasma phase sleeves (Sackett);

Figure 28. Dan Sackett at NIH recently described a plasma-like sleeve of charged ions surrounding microtubules at precisely optimal pH.


  • Quantum excitations/ordering of surrounding water (Jibu/Yasue/Hagan);
  • Hydrophobic pockets;
  • Hollow microtubule cores;
  • Laser-like pumping, including environment (Frohlich, Conrad); and,
  • Quantum error correcting codes.

Another apparent obstacle to the Orch OR proposal is how the weak energy involved in the gravitational collapse can be influential. For a detailed description of this problem and potential solutions, see Hameroff, 1998c. One possibility is that the gravitational self-energy is delivered to the involved tubulins via London forces virtually instantaneously (e.g. within one Planck time) so that the power (energy/time) is significant – approximately one kilowatt per tubulin per Orch OR event.

8. Orch OR, cognition and free will

Quantum computation with objective reduction (Orch OR) is potentially applicable to cognitive activities. While classical neural-level computation can provide a partial explanation, the Orch OR model allows far greater information capacity, and addresses issues of conscious experience, binding, and non-computability consistent with free will. Functions like face recognition and volitional choice may require a series of conscious events arriving at intermediate solutions. For the purpose of illustration consider single Orch OR events in these two types of cognitive activities.

Imagine you briefly see a familiar woman's face. Is she Amy, Betty, or Carol? Possibilities may superpose in a quantum computation. For example, during 25 milliseconds of pre-conscious processing, quantum computation occurs with information (Amy, Betty, Carol) in the form of "qubits", superposed states of microtubule tubulin subunits within groups of neurons. As the threshold for objective reduction is reached, an instantaneous conscious event occurs. The superposed tubulin qubits reduce to definite states, becoming bits. Now, you recognize that she is Carol! (An immense number of possibilities could be superposed in a human brain's 10^19 tubulins.)

Figure 29. Face recognition. A familiar face induces superposition (left) of three possible solutions (Amy, Betty, Carol) which “collapse” to the correct answer Carol (right). Volitional choice. Three possible dinner selections (shrimp, sushi, pasta) are considered in superposition (left), and collapse via Orch OR to choice of sushi (right).


In a volitional act possible choices may be superposed. Suppose for example you are selecting dinner from a menu. During pre-conscious processing, shrimp, sushi and pasta are superposed in a quantum computation. As threshold for objective reduction is reached, the quantum state reduces to a single classical state. A choice is made. You’ll have sushi!
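A toy version of both examples: treat the candidate answers as superposed alternatives with amplitudes, and the conscious event as a single reduction that leaves one classical outcome. The amplitudes below are invented for illustration, and a Born-rule sampler is only a stand-in, since (as discussed next) Orch OR holds that the actual selection is non-computable rather than merely probabilistic.

```python
# Toy "collapse to one outcome" for the face-recognition and menu examples.
# Amplitudes are invented; a Born-rule sampler cannot capture the
# non-computable influence that Orch OR additionally proposes.

import numpy as np

def reduce_superposition(candidates, amplitudes, seed=None):
    """Collapse a superposition of candidates to one classical outcome,
    with probabilities proportional to |amplitude|^2."""
    amps = np.asarray(amplitudes, dtype=float)
    probs = amps ** 2 / np.sum(amps ** 2)
    return np.random.default_rng(seed).choice(candidates, p=probs)

print(reduce_superposition(["Amy", "Betty", "Carol"], [0.3, 0.3, 0.9], seed=1))
print(reduce_superposition(["shrimp", "sushi", "pasta"], [0.5, 0.8, 0.3], seed=2))
```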

How does the choice actually occur? In a conventional neural network scheme, the selection criteria can be described by a deterministic algorithm which precludes the possibility of free will. The non-computable influence in Orch OR may be useful in understanding free will.

The problem in understanding free will is that our actions seem neither totally deterministic nor random (probabilistic). What else is there in nature? As previously described, in OR (and Orch OR) the reduction outcomes are neither deterministic nor probabilistic, but involve a factor which is "non-computable." The microtubule quantum superposition evolves linearly (analogous to a quantum computer) but is influenced at the instant of collapse by hidden non-local variables (quantum-mathematical logic inherent in fundamental spacetime geometry). The possible outcomes are limited, or probabilities set ("orchestrated"), by neurobiological feedback (in particular microtubule associated proteins, or MAPs). The precise outcome, our free will action, is chosen by effects of the hidden logic on the quantum system poised at the edge of objective reduction.

Figure 30. Free will may be seen as the result of deterministic processes (behavior of trained robot windsurfer) acted on repeatedly by non-computable influences, here represented as a seemingly capricious wind.


Consider a sailboard analogy for free will. A sailor sets the sail in a certain way; the direction the board sails is determined by the action of the wind on the sail. Let’s pretend the sailor is a non-conscious robot zombie run by a quantum computer which is trained and programmed to sail. Setting and adjusting of the sail, sensing the wind and position, jibing and tacking (turning the board) are algorithmic and deterministic, and may be analogous to the pre-conscious, quantum computing phase of Orch OR. The direction and intensity of the wind (seemingly capricious, or unpredictable) may be considered analogous to Planck scale hidden non-local variables (e.g. “Platonic” quantum-mathematical logic inherent in space-time geometry). The choice, or outcome (the direction the boat sails, the point on shore it lands) depends on the deterministic sail settings acted on repeatedly by the apparently unpredictable wind. Our “free will” actions could be the net result of deterministic processes acted on by hidden quantum logic at each Orch OR event. This can explain why we generally do things in an orderly, deterministic fashion, but occasionally our actions or thoughts are surprising, even to ourselves.

9. Consciousness and evolution

When in the course of evolution did consciousness first appear? Are all living organisms conscious, or did consciousness emerge more recently, e.g. with language or toolmaking? Or did consciousness appear somewhere in between, and if so, when and why? The Orch OR model (unlike other models of consciousness) is able to make a prediction as to the onset of consciousness. Based on E = h/T we can ask, for example, is it feasible for single cell organisms such as paramecium (which exhibit complex behavior such as graceful swimming, mating and learning) to be conscious? Single cells including paramecium should contain approximately 10^7 tubulins, so T would be 50,000 msec, or nearly one minute. This seems unlikely. Larger organisms such as the nematode worm (e.g., C. elegans) with 300 neurons (3 x 10^9 tubulins) would need to maintain quantum isolation for only 133 msec – not unreasonable. Such organisms (tiny worms and urchins) were prevalent at the beginning of the "Cambrian explosion," a burst of evolution which occurred 540 million years ago. Did primitive consciousness (via Orch OR) accelerate evolution and precipitate the Cambrian explosion?
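The two collapse times quoted in this paragraph follow from the inverse scaling of T with the amount of superposed mass. A rough sketch, anchored (as an assumption) to the 2 x 10^10 tubulins per 25 msec figure from the Orch OR section, reproduces the ~50,000 msec paramecium estimate and gives a value of order 100 msec for the worm:

```python
# T scales inversely with the amount of superposed mass. Anchor (assumed):
# 2e10 tubulins in superposition correspond to T = 25 msec, as in the
# Orch OR discussion above.

T_REF_MS, N_REF = 25.0, 2e10

def collapse_time_ms(n_tubulins):
    return T_REF_MS * N_REF / n_tubulins

print(f"paramecium (~1e7 tubulins): {collapse_time_ms(1e7):,.0f} msec")  # ~50,000 msec
print(f"C. elegans (~3e9 tubulins): {collapse_time_ms(3e9):,.0f} msec")  # of order 100 msec
```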

Figure 31. A time-line of when consciousness could have arisen.

The Cambrian explosion was a burst of evolution 540 million years ago. Organisms present at the Cambrian onset included small worms and urchins. Did consciousness (Orch OR) cause the Cambrian explosion?

Figure 32. Organisms present at the early Cambrian explosion (e.g. tiny urchins, worms and suctorians) are the right size for primitive consciousness by Orch OR.

Figure 33. Actinosphaerium is a tiny urchin like those present at the early Cambrian explosion. Each has about one hundred rigid axonemes about 300 microns long, made up of a total of about 3 x 10^9 tubulins (with permission from L.E. Roth).

Figure 34. Cross-section of single axoneme of actinosphaerium – a double spiral array of interconnected microtubules. Scale bar: 500 nm (with permission from L.E. Roth).


Would consciousness be advantageous to survival (above and beyond intelligent, complex behavior)? It seems that, yes, consciousness would indeed be advantageous to survival, and hence capable of accelerating evolution. Non-computable behavior (unpredictability, intuitive actions) would be beneficial in predator-prey relations. Having conscious experience of taste would promote finding food; the experience of pain would promote avoiding predators. And the pleasurable qualia of sex would promote reproduction.

So "what is it like to be a worm?" Lacking our sensory apparatus, associative memory and complex nervous system, such primitive consciousness would be a mere glimmer, a disjointed smudge of reality. But qualitatively, at a basic level, such primitive consciousness would be akin to ours.

What about future evolution? Will consciousness occur in computers? The advent of quantum computers opens the possibility; however, as presently envisioned, quantum computers will have insufficient mass in superposition (e.g. electrons) to reach the threshold for objective reduction. Instead, superpositions will be disrupted by environmental decoherence. Conceivably, future generations of quantum computers could satisfy the requirements for objective reduction and consciousness.

10. Conclusions
  • Brain processes relevant to consciousness extend downward within neurons to the level of cytoskeletal microtubules.
  • An explanation for conscious experience requires (in addition to neuroscience and psychology) a modern form of pan-protopsychism in which proto-conscious qualia are embedded in the basic level of reality, as described by modern physics.
  • Roger Penrose’s physics of objective reduction (OR) connects brain structures to fundamental reality, leading to the Penrose-Hameroff model of quantum computation with objective reduction in microtubules (orchestrated objective reduction: Orch OR).
  • The Orch OR model is consistent with known neurophysiological processes, generates testable predictions, and is the type of fundamental, multi-level, interdisciplinary theory which may account for the mind’s enigmatic features.

http://www.quantumconsciousness.org/presentations/whatisconsciousness.html

Light Fantastic


The Light of Stuff


Maxwell's demon (image)

The first permanent colour photograph, taken by James Clerk Maxwell in 1861 (Photo credit: Wikipedia)



Does Quantum Physics Make It Easier to Believe in God?

Not in any direct way. That is, it doesn’t provide an argument for the existence of God.  But it does so indirectly, by providing an argument against the philosophy called materialism (or “physicalism”), which is the main intellectual opponent of belief in God in today’s world.

Materialism is an atheistic philosophy that says that all of reality is reducible to matter and its interactions. It has gained ground because many people think that it’s supported by science. They think that physics has shown the material world to be a closed system of cause and effect, sealed off from the influence of any non-physical realities — if any there be. Since our minds and thoughts obviously do affect the physical world, it would follow that they are themselves merely physical phenomena. No room for a spiritual soul or free will: for materialists we are just “machines made of meat.”

Quantum mechanics, however, throws a monkey wrench into this simple mechanical view of things.  No less a figure than Eugene Wigner, a Nobel Prize winner in physics, claimed that materialism — at least with regard to the human mind — is not “logically consistent with present quantum mechanics.” And on the basis of quantum mechanics, Sir Rudolf Peierls, another great 20th-century physicist, said, “the premise that you can describe in terms of physics the whole function of a human being … including [his] knowledge, and [his] consciousness, is untenable. There is still something missing.”

How, one might ask, can quantum mechanics have anything to say about the human mind?  Isn’t it about things that can be physically measured, such as particles and forces?  It is; but while minds cannot be measured, it is ultimately minds that do the measuring. And that, as we shall see, is a fact that cannot be ignored in trying to make sense of quantum mechanics.  If one claims that it is possible (in principle) to give a complete physical description of what goes on during a measurement — including the mind of the person who is doing the measuring — one is led into severe difficulties. This was pointed out in the 1930s by the great mathematician John von Neumann.  Though I cannot go into technicalities in an essay such as this, I will try to sketch the argument.

It all begins with the fact that quantum mechanics is inherently probabilistic. Of course, even in “classical physics” (i.e. the physics that preceded quantum mechanics and that still is adequate for many purposes) one sometimes uses probabilities; but one wouldn’t have to if one had enough information.  Quantum mechanics is radically different: it says that even if one had complete information about the state of a physical system, the laws of physics would typically only predict probabilities of future outcomes. These probabilities are encoded in something called the “wavefunction” of the system.

A familiar example of this is the idea of “half-life.”  Radioactive nuclei are liable to “decay” into smaller nuclei and other particles.  If a certain type of nucleus has a half-life of, say, an hour, it means that a nucleus of that type has a 50% chance of decaying within 1 hour, a 75% chance within two hours, and so on. The quantum mechanical equations do not (and cannot) tell you when a particular nucleus will decay, only the probability of it doing so as a function of time. This is not something peculiar to nuclei. The principles of quantum mechanics apply to all physical systems, and those principles are inherently and inescapably probabilistic.
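As a minimal arithmetic check of the half-life example (illustrative only; the function name below is ours, not from the source), the probability that a nucleus has decayed after t half-lives is 1 - (1/2)^t:

```python
# Decay probability after time t for a nucleus with the given half-life:
# P(decayed by t) = 1 - (1/2)**(t / half_life)

def decay_probability(t_hours: float, half_life_hours: float = 1.0) -> float:
    return 1.0 - 0.5 ** (t_hours / half_life_hours)

print(decay_probability(1.0))  # 0.5  -> 50% chance of decay within one hour
print(decay_probability(2.0))  # 0.75 -> 75% chance within two hours
```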

This is where the problem begins. It is a paradoxical (but entirely logical) fact that a probability only makes sense if it is the probability of something definite. For example, to say that Jane has a 70% chance of passing the French exam only means something if at some point she takes the exam and gets a definite grade.  At that point, the probability of her passing no longer remains 70%, but suddenly jumps to 100% (if she passes) or 0% (if she fails). In other words, probabilities of events that lie in between 0 and 100% must at some point jump to 0 or 100% or else they meant nothing in the first place.

This raises a thorny issue for quantum mechanics. The master equation that governs how wavefunctions change with time (the “Schrödinger equation”) does not yield probabilities that suddenly jump to 0 or 100%, but rather ones that vary smoothly and that generally remain greater than 0 and less than 100%.  Radioactive nuclei are a good example. The Schrödinger equation says that the “survival probability” of a nucleus (i.e. the probability of its not having decayed) starts off at 100%, and then falls continuously, reaching 50% after one half-life, 25% after two half-lives, and so on — but never reaching zero. In other words, the Schrödinger equation only gives probabilities of decaying, never an actual decay! (If there were an actual decay, the survival probability should jump to 0 at that point.)
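A small sketch of the contrast just described (illustrative, not part of the original essay): the Schrödinger equation yields a survival probability that falls smoothly and never reaches zero, whereas registering an actual decay would force a discontinuous jump to zero.

```python
# Smooth Schrödinger-equation prediction for the survival probability:
# starts at 100%, halves every half-life, and never reaches zero.

def survival_probability(elapsed_half_lives: float) -> float:
    return 0.5 ** elapsed_half_lives

for n in (0, 1, 2, 3, 10):
    print(n, survival_probability(n))  # 1.0, 0.5, 0.25, 0.125, ~0.00098 -- small but never zero

# By contrast, an observed decay at some moment would mean the probability of
# "still undecayed" jumps discontinuously to 0 -- a jump the equation itself never produces.
```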

To recap: (a) Probabilities in quantum mechanics must be the probabilities of definite events. (b) When definite events happen, some probabilities should jump to 0 or 100%. However, (c) the mathematics that describes all physical processes (the Schrödinger equation) does not describe such jumps.  One begins to see how one might reach the conclusion that not everything that happens is a physical process describable by the equations of physics.


So how do minds enter the picture?  The traditional understanding is that the “definite events” whose probabilities one calculates in quantum mechanics are the outcomes of “measurements” or “observations” (the words are used interchangeably).  If someone (traditionally called “the observer”) checks to see if, say, a nucleus has decayed (perhaps using a Geiger counter), he or she must get a definite answer: yes or no.  Obviously, at that point the probability of the nucleus having decayed (or survived) should jump to 0 or 100%, because the observer then knows the result with certainty.  This is just common sense. The probabilities assigned to events refer to someone’s state of knowledge: before I know the outcome of Jane’s exam I can only say that she has a 70% chance of passing; whereas after I know I must say either 0 or 100%.

Thus, the traditional view is that the probabilities in quantum mechanics — and hence the “wavefunction” that encodes them — refer to the state of knowledge of some “observer”.  (In the words of the famous physicist Sir James Jeans, wavefunctions are “knowledge waves.”)  An observer’s knowledge — and hence the wavefunction that encodes it — makes a discontinuous jump when he/she comes to know the outcome of a measurement (the famous “quantum jump”, traditionally called the “collapse of the wave function”). But the Schrödinger equations that describe any physical process do not give such jumps!  So something must be involved when knowledge changes besides physical processes.

An obvious question is why one needs to talk about knowledge and minds at all. Couldn’t an inanimate physical device (say, a Geiger counter) carry out a “measurement”?  That would run into the very problem pointed out by von Neumann: If the “observer” were just a purely physical entity, such as a Geiger counter, one could in principle write down a bigger wavefunction that described not only the thing being measured but also the observer. And, when calculated with the Schrödinger equation, that bigger wave function would not jump! Again: as long as only purely physical entities are involved, they are governed by an equation that says that the probabilities don’t jump.

That’s why, when Peierls was asked whether a machine could be an “observer,” he said no, explaining that “the quantum mechanical description is in terms of knowledge, and knowledge requires somebody who knows.” Not a purely physical thing, but a mind.

But what if one refuses to accept this conclusion, and maintains that only physical entities exist and that all observers and their minds are entirely describable by the equations of physics? Then the quantum probabilities remain in limbo, not 0 and 100% (in general) but hovering somewhere in between. They never get resolved into unique and definite outcomes, but somehow all possibilities remain always in play. One would thus be forced into what is called the “Many Worlds Interpretation” (MWI) of quantum mechanics.

In MWI, reality is divided into many branches corresponding to all the possible outcomes of all physical situations. If a probability was 70% before a measurement, it doesn’t jump to 0 or 100%; it stays 70% after the measurement, because in 70% of the branches there’s one result and in 30% there’s the other result! For example, in some branches of reality a particular nucleus has decayed — and “you” observe that it has, while in other branches it has not decayed — and “you” observe that it has not. (There are versions of “you” in every branch.) In the Many Worlds picture, you exist in a virtually infinite number of versions: in some branches of reality you are reading this article, in others you are asleep in bed, in others you have never been born. Even proponents of the Many Worlds idea admit that it sounds crazy and strains credulity.

The upshot is this: If the mathematics of quantum mechanics is right (as most fundamental physicists believe), and if materialism is right, one is forced to accept the Many Worlds Interpretation of quantum mechanics. And that is awfully heavy baggage for materialism to carry.

If, on the other hand, we accept the more traditional understanding of quantum mechanics that goes back to von Neumann, one is led by its logic (as Wigner and Peierls were) to the conclusion that not everything is just matter in motion, and that in particular there is something about the human mind that transcends matter and its laws.  It then becomes possible to take seriously certain questions that materialism had ruled out of court: If the human mind transcends matter to some extent, could there not exist minds that transcend the physical universe altogether? And might there not even exist an ultimate Mind?

https://www.bigquestionsonline.com/content/does-quantum-physics-make-it-easier-believe-god