The Future of Artificial Intelligence


We live in fascinating times. For decades we have seen explosive exponential growth in technology, and the effects of this growth are only now beginning to surface. As a result, what seemed like science fiction even just a few years ago is rapidly becoming reality. This is particularly true of artificial intelligence, which has recently hit a new level of sophistication and usability, as seen in highly capable “digital assistants” like Siri, Cortana, and Google Now.

It is an age of technological miracles, and the repercussions for the future are only beginning to make themselves known.

As artificial intelligence becomes ever more ubiquitous in our lives, some of our most respected scientists, engineers, and philosophers are beginning to caution us about the possible consequences of this still-fledgling technology. Stephen Hawking, Bill Gates, and Elon Musk recently warned us about the possible militarization of A.I., which threatens to send us spiraling into the most horrifying and destructive arms race the world has ever seen — think less Siri, more Skynet.

This is by no means a new concern, of course. Isaac Asimov predicted this dilemma way back in 1942 with his famous “Three Laws of Robotics”, which attempted to get ahead of the problem by formulating a set of logical parameters for rational ethical behavior that could be programmed into any artificial intelligence:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

And, of course, the “Zeroth Law” that Asimov later added, which takes precedence over the other three:

0.  A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
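The key feature of Asimov's laws is their strict priority ordering: each law yields to the ones above it. As a purely illustrative sketch (every name and attribute below is hypothetical, not a real safety API), that ordering might be expressed as a cascade of checks evaluated from highest priority to lowest:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical description of a proposed action's consequences."""
    harms_humanity: bool = False       # Zeroth Law concern
    harms_human: bool = False          # First Law concern
    disobeys_order: bool = False       # Second Law concern
    order_conflicts_with_first: bool = False
    endangers_self: bool = False       # Third Law concern
    self_risk_required_above: bool = False

def permitted(a: Action) -> bool:
    """Check an action against the laws, in strict priority order."""
    # Zeroth Law: humanity's welfare overrides everything below.
    if a.harms_humanity:
        return False
    # First Law: no injury to an individual human being.
    if a.harms_human:
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if a.disobeys_order and not a.order_conflicts_with_first:
        return False
    # Third Law: self-preservation, subordinate to all of the above.
    if a.endangers_self and not a.self_risk_required_above:
        return False
    return True
```

A harmless action passes; an order may be refused only when obeying it would break a higher law. Of course, the real difficulty Asimov's own stories dramatize is that predicates like "harms a human being" are exactly what no one knows how to formalize.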

But it’s one thing to program a machine with human values — what happens if the machines begin programming themselves and formulating their own values? Would those values grow through similar stages that human values grow through? More importantly, do we have any reason to believe that we would play any meaningful role in those values?

Here’s the problem. Human intelligence, creativity, values, and consciousness didn’t suddenly materialize out of a vacuum. Rather, they are the result of an ongoing 14-billion-year experiment, all driven by a single, fairly simple evolutionary mechanism: “Transcend and include.”

  • Atoms are transcended and included by molecules,
  • Molecules are transcended and included by cells,
  • Cells are transcended and included by multicellular organisms,
  • Multicellular organisms are transcended and included by simple nervous systems,
  • Simple nervous systems are transcended and included by the reptilian brainstem,
  • Reptilian brainstems are transcended and included by the mammalian limbic system,
  • The mammalian limbic system is transcended and included by the human neocortex.

As ever-more complex forms of matter are produced by evolution, ever-greater forms of consciousness and intelligence emerge with each and every step. Which means that your intelligence rests upon and resides within an incredibly long chain of evolutionary components — quarks, atoms, molecules, cells, nervous systems, etc., a magnificent legacy of intelligent emergence that stretches all the way back to the Big Bang.

So organic intelligence is something like “looking through a glass onion” — each layer of the onion possesses its own degree of prehensive consciousness, and each contributes to the totality we experience as “consciousness” and “intelligence”. You are not just one layer of the onion; you are the entire onion.

Our efforts to create artificial intelligence, meanwhile, are taking a completely different approach. We are essentially trying to reproduce the intelligence of the highest, outer-most layers of the onion, using only the materials from the lowest, most basic layers. We are attempting to replicate human-equivalent consciousness out of atoms and molecules alone, while skipping all the wet squishy stuff of biological evolution.

This attempt to skim the outermost layer of human intelligence may actually be one of the reasons artificial intelligence can excel in specific human behaviors such as playing chess, giving directions, or driving cars, but fails abysmally when it comes to simple tasks like locating a paper clip and picking it up from the floor. As it turns out, what we thought were the “hard” problems have tended to be the easiest to solve, and the “easy” problems have been by far the hardest, a pattern roboticists call Moravec’s paradox. Computers can easily defeat the world’s greatest chess champions, but cannot come even close to the basic adaptive problem-solving skills possessed by toddlers.

Even if we were to succeed in creating a new type of intelligence, this intelligence would, by its very nature, be completely alien to us. Human beings are capable of an incredible amount of connection and compassion for other forms of intelligence on this planet, particularly since we share so much of our own morphogenetic heritage with every other creature. An artificially intelligent agent, on the other hand, would share no evolutionary heredity with us or anything else in existence, having skipped biological development altogether. What could we possibly hope to have in common with such entities?

So when it comes to the future of artificial intelligence, we seem to have more questions than answers. Are atoms, molecules, and mathematics alone enough to produce machines with genuine human-equivalent intelligence? Can that intelligence ever become truly conscious and possess the “inner light” of interior self-awareness? Will artificial intelligence be capable of determining its own morals, ethics, and values? Will those values transcend and include the continued existence of the human race, or will this intelligence share so little resonance with us that our very survival could be threatened?

And who will ultimately end up winning the future: the cyborgs or the androids?

What do you think? Let us know in the comments below!

Image: Metropolis Light / Metropolis Dark by Android Jones
Written by Corey W. deVos


About Ken Wilber

Ken Wilber is a preeminent scholar of the Integral stage of human development. He is an internationally acknowledged leader, founder of Integral Institute, and co-founder of Integral Life. Ken is the originator of arguably the first truly comprehensive or integrative world philosophy, aptly named “Integral Theory”.
