Integral Thinking in Cutting-Edge Neurophysiology
Although the apparent confirmation of the Higgs boson, the so-called God particle, has attracted much attention recently, the most vexing problem in science and philosophy remains the mind-body problem: What relation is there between material brain states and conscious, first-person experience? In the past few years, as we shall see in a moment, some neuroscientists have arrived at an answer that was anticipated by Ken Wilber’s version of integral theory. According to Wilber, meager versions of interiority—the antecedents of consciousness—are found even at the atomic level, as Alfred North Whitehead suggested in the early 20th century.
For much of the 20th century, however, in part because of the enormous influence of behaviorism, consciousness was not even considered a fit topic for natural and social science. Until relatively recently, natural science maintained that consciousness is a late-arriving, highly improbable, and accidental phenomenon belonging solely to humans. Animals were not considered conscious. Possession of self-consciousness, a trait that humans have developed to an exceptional degree, is regarded as at best a mixed blessing. While making possible knowledge that allows for some control of nature, consciousness also makes humans aware of their mortality and eventually of their absurdity in a godless universe, bereft of any significance apart from the pathetic prattling of utterly insignificant humans on a tiny planet in the middle of nowhere. In the face of this nihilistic view, we are encouraged to keep a stiff upper lip while leading a life “as if” it really meant anything.
In the past few decades, however, many neurophysiologists have concluded that we can infer that any organism with sufficient neural complexity has some measure of consciousness. Many researchers now believe that no account of human “mind” could be complete without explaining the nature and possibility of first-person experience. This re-awakened interest in consciousness occurred in the context of narratives about cosmic evolution from its birth in the Big Bang. According to the so-called anthropic principle (better put, the life principle), organic life could have evolved only if the basic laws of the universe were extraordinarily finely tuned to be life friendly. Holmes Rolston III has written that if the first Big Bang was the explosion from which space-time and matter-energy emerged, and if the second Big Bang was the emergence of organic life, then the third Big Bang was the development of consciousness. More than a few respected scientists and philosophers maintain that perhaps it is no accident that self-conscious life evolved; indeed, perhaps the universe has become conscious of itself through humankind.
Consciousness may be an emergent phenomenon that showed up 12 billion years after the first Big Bang. Yet, an even more striking possibility is that proto-consciousness came into being along with other basic cosmic constituents shortly after the Big Bang occurred. Consciousness would then not be an accidental “add on” that never quite fits in a material universe, but instead would be a primary feature of the universe that occurs at all levels of reality, right down to that of quarks.
One of the most significant recent contributions to this view of consciousness has been made by Christof Koch and Giulio Tononi. In Consciousness: Confessions of a Romantic Reductionist (MIT Press, 2012), Koch lays out the elements of their “integrated information theory” (IIT) of consciousness. Koch, a professor of biology and of engineering at Caltech, and chief science officer at the recently established Allen Institute for Brain Science in Seattle, is a world-renowned neuroscientist. For almost twenty years, Koch worked closely with Nobel laureate Francis Crick on the problem of consciousness. Despite coming up with one or another well-grounded account of what neurophysiological structures and functions were involved in generating consciousness, Koch and Crick were not able to explain exactly how the magic happened, that is, how first-person conscious experiences arose with or were correlated with those complex structures and functions.
After Crick’s death, Koch began working with Tononi, another brilliant brain-consciousness researcher, who postulated that information theory could shed light on consciousness. In their accessible co-authored essay, “Can Machines Be Conscious?” Koch and Tononi write: “Information is classically defined as the reduction of uncertainty that occurs when one among many possible outcomes is chosen.” Relatively simple systems can be in an astonishing number of states, but such systems do not achieve consciousness because those states are not integrated. “According to IIT, consciousness implies the availability of a large repertoire of states belonging to a single integrated system. To be useful, those internal states should also be highly informative about the world.” Achieving high levels of integration in neural networks is difficult.1 “The more integrated and differentiated the system is, the more conscious it is.”2 (128)
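The definition Koch and Tononi invoke is the classical Shannon one: choosing one outcome from a repertoire of N equally likely possibilities reduces uncertainty by log2(N) bits. As a toy illustration of that definition only—my own sketch, not code from the IIT literature—one can compute it directly:

```python
import math

def self_information(p):
    # Shannon information (in bits) gained when an outcome of
    # probability p occurs: rarer outcomes reduce more uncertainty.
    return -math.log2(p)

def entropy(probs):
    # Average uncertainty (in bits) over a repertoire of possible states.
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Choosing one of two equally likely outcomes (a coin flip) yields 1 bit.
print(self_information(0.5))   # 1.0
# A repertoire of 8 equally likely states holds 3 bits of uncertainty.
print(entropy([1/8] * 8))      # 3.0
```

A system able to occupy many states thus has high entropy; IIT’s further demand, as the quotation above stresses, is that those states be integrated rather than merely numerous.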
IIT “not only specifies the amount of consciousness, Φ, associated with each state of a system. It also captures the unique quality of that experience.” Hence, “A nervous network in any one particular state has an associated [correlated] shape in qualia [experiential] space.” For humans, the neural network state and the correlated experiential state are extraordinarily complex, as they would have to be in order to account for the manifold ways in which people can be conscious. Koch uses the term “crystal” to describe a physical system that is “mapped onto a shape in this fantastically multidimensional qualia space.” Each conscious experience involves its own topology, which allows for different experiences: seeing green vs. seeing red. (130)
Although correlated with neural (that is, material) states, consciousness is not reducible without remainder to such states. Eliminative materialism is the term of art for the kind of reductionism that says there are only brain states and thus that consciousness is nothing but brain states. In contrast, Koch adheres to a sophisticated version of what philosopher David Chalmers has called the dual-aspect theory of reality. There are material phenomena and conscious phenomena, neither of which can be reduced to the other, although they are closely correlated. Corresponding to the mathematical complexity of the material system is the geometrical complexity of the experiencing crystal. “The crystal is the system viewed from within. It is the voice in the head, the light inside the skull.” (130)
“The [experiential] crystal is not the same as the underlying network of mechanistic, causal interactions, for the former is phenomenal experience whereas the latter is a material thing. [IIT] postulates two sorts of properties in the universe that can’t be reduced to each other—the mental and the physical. They are linked by way of a simple yet sophisticated law, the mathematics of integrated information.” (130)
According to Koch, this law will make possible development of a “consciousness-meter.”
“This gadget takes the wiring diagram of any system of interacting components, be it wet biological circuits or those etched in silicon, to assess the size of that system’s conscious repertoire. The consciousness-meter scans the network’s physical circuitry, reading out its activity level to compute Φ and the crystal shape of the qualia that the network is momentarily experiencing. A geometrical calculus will need to be developed to determine whether the crystal has the morphology of a painfully stubbed toe or of the scent of a rose under a full moon.” (131)
As indicated by his reference above to circuits in wetware or silicon, Koch adheres to a kind of functionalism with regard to consciousness. That is, what counts is not what the system is made of, but whether it functions in a way that makes possible integrated information. Always arising with such integrated information is some measure of interiority. That is to say, the universe is constituted by a hierarchy of integrated systems that not only have an exterior but an interior as well. The universe is conscious—that is, has some measure of experience or interiority—all the way down. According to Koch,
“Any system whose functional connectivity and architecture yield a Φ value greater than zero has at least a trifle of experience.” This holds not only for the biochemical and molecular structures of organic cells but “also encompasses electronic networks made out of solid-state devices and copper wires.”
No matter what a thing is composed of, whether it is an organism or rolls on wheels:
“If it has both differentiated and integrated states of information, it feels like something to be such a system; it has an interior perspective. The complexity and dimensionality of their associated phenomenal experiences might differ vastly, but each one has its own crystal [interior] shape. Even simple matter has a modicum of Φ. Protons and neutrons consist of a triad of quarks that are never observed in isolation. They constitute an infinitesimal integrated system.” (131–132)
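Φ itself is defined over the partition of a system that least disrupts its internal causal structure, and computing it is intractable for all but tiny networks. Purely as a hypothetical sketch of the underlying intuition—that an “integrated” whole carries information beyond what its parts carry separately—one can measure the mutual information between the two elements of a toy two-element system (my own illustration, not the IIT algorithm):

```python
import math
from itertools import product

def mutual_information(joint):
    # Mutual information I(A;B) in bits, given a joint distribution
    # joint[(a, b)] -> probability. It is zero exactly when A and B are
    # independent, i.e. when the "whole" adds nothing beyond its parts.
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# Two independent fair bits: no integration at all.
independent = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}
# Two perfectly coupled bits: knowing one fixes the other.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_information(independent))  # 0.0
print(mutual_information(coupled))      # 1.0
```

Two independent elements score zero no matter how many states they can jointly occupy, which matches the point that a large repertoire alone does not suffice; the states must belong to a single integrated system.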
With the rise of digital artifacts in the past few decades, vast numbers of low-level centers of interiority were added to those of the trillions of organisms on planet Earth. When isolated computers and smart phones are tied together into the Internet, the level of integrated complexity attained suggests that the Internet is already conscious at some level. Koch and Tononi are confident that humans will eventually be able to create conscious artificial intelligence (AI), although AI consciousness will not necessarily have all the features associated with and required by the human form of consciousness.
In his book, Koch bravely steps out in a way rarely done by many neuroscientists. He asserts that he adheres to a version of panpsychism, because he holds that consciousness is “a fundamental feature of the universe, rather than emerging out of simpler elements….” (132) Koch praises Pierre Teilhard de Chardin, a name that rarely appears in the context of neurophysiology (!), for having affirmed a version of panpsychism in his famous book, The Phenomenon of Man. The evolution of humankind makes possible the rise of the noosphere, a new layer of reality that covers planet Earth with “incandescence,” Teilhard writes. Koch takes seriously Teilhard’s speculation that the Omega Point will be achieved “when the universe becomes aware of itself by maximizing its complexity, its synergy.” (134) At this point, Koch notes that IIT goes beyond panpsychism in attempting to specify the causal processes involved in integrating information. Moreover, he concedes that IIT has a long way to go before being considered a “final” theory of consciousness, but it is a good start in that direction.
Koch is a reductionist because he believes that science will ultimately comprehend what gives rise to consciousness and will be able to create machines that are conscious. Consciousness does not come from some otherworldly source. He is a romantic reductionist because of his ongoing interest in the spiritual dimensions of reality. He writes: “I’m optimistic that science is poised fully to comprehend the mind-body problem. To paraphrase from Corinthians: ‘For now we see through a laboratory darkly, but then we shall know.’”
Raised Roman Catholic, Koch took his faith seriously for many years, although he finally abandoned it because he could not square Christianity’s mythic content with scientific knowledge. In the final chapter of his book, Koch respectfully explores the limitations of Biblical religion and theism in general, but he is unwilling to surrender his surmise that there is something profound at work in the cosmos, something signifying more than the intense interactions of matter-energy. In effect, Koch is on his way to being an integral theorist. In affirming the interiority of all levels of reality, he wants to leave open the possibility of a mysterious depth to the origins and consequences of the universe. Just before concluding his book with a psalm from the Dead Sea Scrolls, Koch waxes philosophical:
“I do believe that some deep and elemental organizing principle created the universe and set it in motion for a purpose I cannot comprehend. I grew up calling this entity God. It is much closer to Spinoza’s God than to the God of Michelangelo’s painting. The mystic Angelus Silesius, a contemporary of Descartes, captures the paradoxical essence of the self-caused Prime Mover as ‘Gott ist ein lauter Nichts, ihn rührt kein Nun noch Hier.’ (God is a lucent nothing, no Now nor Here can touch him.)”
Koch’s religious roots must have played some role in his openness to developing a sophisticated version of panpsychism. Nevertheless, he arrived at this position by way of lengthy, careful scientific research. This fact indicates that the integral Zeitgeist is gaining in influence, if by “integral thinking” we mean the view that consciousness starts all the way down and then proceeds to go all the way up via cosmic evolution.
The title of this essay promises a lot: There is consciousness all the way down. Koch has not proven this, as he is well aware. In Part Two of this post, I take a more critical look at current developments in AI and consciousness research. In his excellent new book, You Are Not a Gadget, Jaron Lanier—a key player in the development of virtual reality in the 1980s—warns that researchers in Silicon Valley are redefining AI in a way that both changes and lowers the bar for what counts as machine “consciousness.” In the process, those gurus are encouraging us to lower our own mental capacities to comply with what all those seductive (but limited) gadgets can do. According to Lanier, leaders in Silicon Valley are using their profits not merely to line their pockets or to invest in better digital consumer goods, but primarily to enable the emergence of a self-conscious Internet via hive mind or to bring about some other form of AI that goes well beyond human intelligence. Here, we may ponder the adage: Be careful what you wish for!
But does consciousness go all the way up?
In Part One, I described the panpsychist views espoused in Consciousness: Confessions of a Romantic Reductionist by the world-renowned brain scientist Christof Koch. According to Koch, consciousness may go “all the way down,” constituting a basic feature of the universe, rather than being a rare and inexplicable occurrence.
In this section, I consider some of the challenges posed by attempts to create artificial intelligence (AI) as well as some of the dangers inherent in comparing human consciousness with AI. According to Ray Kurzweil and transhumanists who promote the coming “Singularity,” within a few decades AI will become self-conscious and will then redesign itself to be vastly more intelligent than humankind. The significance of such an eventuality is difficult to overestimate. If AI does constitute the future of consciousness, then AI may “go all the way up,” in the sense of expanding itself in ways that are incomprehensible to us, but that may end up transforming the nature of the universe. However intriguing it is to contemplate such eventualities, they could not occur without taking the first step: creating conscious AI. Will this ever happen?
In their co-authored essay, “Can Machines be Conscious?” Koch and his colleague Giulio Tononi believe that the answer is “Yes,” though “perhaps not in the way that the most popular scenarios have envisioned it.”3 Insofar as consciousness is a natural phenomenon, it can be explained in terms of the laws of physics, chemistry, biology, and other sciences. [From an integral perspective, many other perspectives would be needed to model and create conscious AI. To do so, we would need to develop Integral Robotics, which will be the subject of my next post.] For the brain/mind complex to work, certain factors must be in place. The cerebral cortex must be operative for people to experience particular events, such as a scent or a memory. Consciousness also requires that the cortex and thalamus be operative, and that each brain hemisphere be independently equipped with whatever is needed for consciousness. Damage to any of these areas can lead either to temporary or to permanent loss of consciousness. Some important areas of the brain, such as the cerebellum, which is densely packed with neurons, are not needed for consciousness.
Based upon a limited, but growing understanding of how the brain works, Koch and Tononi make the following claim: “Remarkably, consciousness does not seem to require many of the things we associate most deeply with being human: emotions, memory, self-reflection, language, sensing the world, and acting on it.” For instance, while awake we are sensing the world, but when asleep our sensory “inputs” arise almost exclusively from the brain itself, not from the world outside. Humans rely on emotions when making decisions, but AI may or may not be endowed with such emotions. An emotionless AI could still be conscious. Surprisingly, according to experimental evidence, being conscious does not even require being attentive. Being conscious does not even require memory, as indicated by the well-known case of a man known as H.M., who could not acquire new memories after surgery for epilepsy many years earlier, but who was evidently conscious. Likewise for self-reflection: We can be deeply absorbed in activities that require us to be conscious, without our being reflectively aware that we are engaged in such activities. Even after losing language, people may still be regarded as conscious.
What, then, is required for a person or AI to be called “conscious”? According to Koch and Tononi, complexly structured and integrated information is required. We cannot yet adequately theorize, much less construct, the neural architecture needed to make such integration possible, but inroads are being made on this daunting problem.
Yet, even if we could devise an artificial information complex that seems sufficient to support and/or to be correlated with consciousness, how would we know that this complex had achieved consciousness? The prevailing standard is the Turing test, devised by the brilliant computer scientist Alan Turing in the mid-twentieth century. A machine would be called conscious, so Turing maintained, if it could successfully converse—via a computer interface—with a human being. The human being would judge whether his/her interlocutor successfully conversed in a human-like way.
Koch and Tononi propose a much more demanding test: Ask the allegedly conscious AI machine to interpret a photograph, the significance of which would be immediately obvious to an adult human. The example that Koch and Tononi provide is a photo of a man pointing a gun at the clerk of a liquor store. Adults would know that the photo depicts a stick-up and could infer its general location by noticing that products in the photo have English language labels. No machine is yet even remotely capable of attaining such a comprehension of images and their contexts.4
There are two alternative routes to creating conscious AI. One is to precisely emulate the human brain, but the obstacles here appear so enormous that Koch and Tononi recommend the other route, namely, to build machines based on what we know of the structure of mammalian brains, and to let those machines evolve through trial-and-error processes over time. Although we have a long way to go before we are capable of creating conscious machines, Koch and Tononi remain optimistic about our prospects:
“Contemplating how to build such a machine will inevitably shed light on scientists’ understanding of our own consciousness. And just as we ourselves have evolved to experience and appreciate the infinite richness of the world, so too will we evolve constructs that share with us and other sentient animals the most ineffable, the most subjective of all features of life: consciousness itself.”
Koch and Tononi’s warning that we are as yet a long way from creating AI harmonizes in some ways with the point of view adopted by Jaron Lanier in You Are Not a Gadget: A Manifesto. Lanier, recently the subject of a fascinating article in The New Yorker, was a pioneer in virtual reality research in the 1980s and remains a leading figure in the digital world.5 His book’s title indicates his critical position: Innovations such as smart phones, digital tablets, and social media are so seductive that people increasingly—and unwisely—use them as models or measures for human intelligence, creativity, and consciousness. Instead of allowing our marvelous and quirky individual traits to flower, as (allegedly) encouraged by earlier forms of digital communication, social media in particular invite a certain conformism:
“An endless series of gambits backed by gigantic investments encouraged young people entering the online world for the first time to create standardized presences on sites like Facebook. Commercial interests promoted the widespread adoption of standardized designs like the blog, and these designs encouraged pseudonymity in at least some aspects of their design, such as comments, instead of the proud extroversion that characterized the first wave of web culture.
Instead of people being treated as the sources of their own creativity, commercial aggregation and abstraction sites presented anonymized fragments of creativity as products that might have fallen from the sky or been dug up from the ground, obscuring their true sources.”6
To those who celebrate anonymity, standardization (and its consequent blandness), uniformity, crowd-sourcing, the mash-up, and the digitized reduction of all extant literature to the endlessly accessible One Word, Lanier offers a stark warning in his book: “Spirituality is committing suicide. Consciousness is attempting to will itself out of existence.” (20) What he means by “spirituality” seems to have more to do with consciousness and personality than what spirituality means in religious traditions. Nevertheless, the concern he expresses is clear enough. Defying “the latest techno-political-cultural orthodoxy,” Lanier writes that the supposedly “true path to a better world” is in fact “strongly biased toward an antihuman way of thinking.” (22) He calls our attention to the headlong and thus uncritical embrace of new information technologies, the “gadgets” that he both admires and regards with suspicion.
As a Silicon Valley insider, Lanier is in a position to understand the goals that motivate many people in that world. He reports on such goals in his often-cited New York Times op-ed piece, “The First Church of Robotics.”7 Amid all the rhetoric about how smart and even autonomous machines are becoming, Lanier cautions, we are beginning to measure ourselves against “smart” devices rather than measuring them in terms of human capacities. Doing the latter sort of measuring, as Koch and Tononi indicate, would show how very far machines have to go before they can be fairly labeled conscious. Increasingly, many of us allow algorithms to form our aesthetic judgments, as when we follow the recommendations made by Amazon about books or Fandango about films. Lanier notes that the designers at Apple and other major digital empires are not yet prepared to replace their own intuitive assessments—bolstered by decades of experience—with recommendations derived from such algorithms. Lanier calls on us to regard digital technologies as passive tools rather than as incipient people.
Although still lacking an adequate metaphysical understanding of personhood or even of consciousness, Silicon Valley experts are pushing to create AI as an expression of their post-human, “ultramodern” religion: the First Church of Robotics. Our purchase of the latest digital gadgets funds what is really important to the high priests of computing: artificial intelligence. Lanier observes:
“The influential Silicon Valley institution preaches a story that goes like this: one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent A.I., infinitely smarter than any of us individually and all of us combined; it will become alive in the blink of an eye, and take over the world before humans even realize what’s happening.
Some think the newly sentient Internet would then choose to kill us; others think it would be generous and digitize us the way Google is digitizing old books, so that we can live forever as algorithms inside the global brain. Yes, this sounds like many different science fiction movies. Yes, it sounds nutty when stated so bluntly. But these are ideas with tremendous currency in Silicon Valley; these are guiding principles, not just amusements, for many of the most influential technologists.” (emphasis added)
Humans may eventually invent AI that will far surpass our capacities in certain respects, including sheer intelligence. Human intelligence, however, is closely intertwined with our bodies, our emotions, our psychological makeup, and our cultural contexts. The parameters of first-person human experience have been sculpted by millions of years of mammalian evolution. In The Future of the Body, Esalen co-founder Michael Murphy shows the extraordinary and often unappreciated capacities of the human body, which is inextricably bound up with a crucial bodily organ: the brain. Such corporeal capacities are typically not on the checklist of factors to be included in AI. Missing as well from such checklists are references to the subtle and causal bodies, which are experienced by advanced practitioners from many different spiritual traditions. Finally, will even “advanced” AI be capable of the kind of non-duality that is regarded as the high point of human existence by spiritual practitioners? Conscious AI may “go all the way up” to astonishing levels of intellectual achievement, but what other capacities will be left behind or be underdeveloped because AI’s creators did not think of them as important in the first place? It is often said that as wisdom grows, so does compassion. There is no necessary relationship, however, between intelligence and wisdom. This is what led Isaac Asimov to posit his “three laws of robotics,” aimed primarily at preventing robots from harming human beings. Unless designers and programmers give considerably more thought to what constitutes consciousness—including its embodied and emotional components—there may be good reasons to fear the rise of post-human artificial intelligence.
2 According to Koch, “Leibniz would have been very comfortable with integrated information.” (131) Leibniz, the 17th century German polymath and co-inventor (with Newton) of the calculus, postulated that the world is constituted by complex and integrated matrices of monads, or centers of experience. In human beings, a dominant monad integrates the contributions of countless other monads operating at various levels.
3 Christof Koch and Giulio Tononi, “Can Machines be Conscious? Yes—and a new Turing test might prove it.” IEEE Spectrum, Special Report: The Singularity. 2008. http://spectrum.ieee.org/biomedical/imaging/can-machines-be-conscious
4 For a very informative update on the current state of AGI (artificial general intelligence, as distinct from narrower forms of artificial intelligence), see David Deutsch, “Creative Blocks,” Aeon, October 3, 2012. http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/
5 Jennifer Kahn, “The Visionary: A digital pioneer questions what technology has wrought,” The New Yorker, July 11, 2011. http://www.newyorker.com/reporting/2011/07/11/110711fa_fact_kahn
6 Jaron Lanier, You Are Not a Gadget: A Manifesto (New York: Alfred A. Knopf, 2010), 16.
About Michael Zimmerman
Michael E. Zimmerman is professor of philosophy at the University of Colorado at Boulder. He has written several essays about integral theory, with special emphasis on integral ecology. Michael Zimmerman and Sean Esbjorn-Hargens have co-authored a book called Integral Ecology.