Loving AIs: Bringing Unconditional Love to Artificial General Intelligence

Julia Mossbridge

Among the thousands of amazing breakthroughs and the blistering pace of change these days, from colonies on Mars to genetic splicing in your kitchen, there is, perhaps, no more mind-boggling example of how fascinating and complicated our brave new world has become than the work on artificial general intelligence (AGI). Though it is a field that goes back decades, recent advances in computing power and cognitive computing strategies have led some to think that we’re just a few decades away from the first fully functional, environmentally responsive, self-aware learning machines. That is, unlike the “narrow AI” of today that does one thing well (e.g., detect fraud on your credit card), these general AIs might be the synthetic intelligences we’ve seen for years in movies.

But will they be truly self-aware (i.e., will they have subjectivity)? And will they have positive care and regard for humans (or could they become Skynet)?

As part of our work on the Innovation Lab advisory board of the Institute of Noetic Sciences, I had the pleasure of sitting down with OpenCog founder and leading AGI researcher Ben Goertzel and IONS Innovation Lab director Julia Mossbridge to discuss LOVING AIs, a project aiming to design and develop an “unconditional love” module for AGIs. In this interview, I dive into the rabbit hole of AGI with Ben and Julia to discuss the state of the field, what it would mean to program unconditional love into AGIs, and some thorny implications for the brave new world we’re entering.

Community Reflections

 
The Zero Law

by Corey deVos
Excerpted from The Future of Artificial Intelligence

We live in fascinating times. For decades we have seen an explosive exponential growth of technology, and the effects of this growth are only now beginning to surface. As a result, what seemed like science fiction even just a few years ago is rapidly becoming reality. This is particularly true of artificial intelligence, which has recently hit a new level of sophistication and usability, as seen in highly capable “digital assistants” like Siri, Cortana, and Google Now.

It is an age of technological miracles, and the repercussions for the future are only beginning to make themselves known.

As artificial intelligence becomes ever more ubiquitous in our lives, some of our most respected scientists, engineers, and philosophers are beginning to caution us about the possible consequences of this still-fledgling technology. Stephen Hawking, Bill Gates, and Elon Musk recently warned us about the possible militarization of A.I., which threatens to send us spiraling into the most horrifying and destructive arms race the world has ever seen — think less Siri, more Skynet.

This is by no means a new concern, of course. Isaac Asimov predicted this dilemma way back in 1942 with his famous “Three Laws of Robotics”, which attempted to get ahead of the problem by formulating a set of logical parameters for rational ethical behavior that could be programmed into any artificial intelligence:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

And, of course, the “zeroth law” Asimov later added, which takes precedence over the other three:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
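The laws form a strict priority ordering, and that ordering is what makes them programmable in principle. As a purely illustrative sketch (every attribute name and the dictionary encoding below are invented for this example, not drawn from Asimov), the hierarchy amounts to: among candidate actions, prefer the one whose worst violation sits lowest in the hierarchy.

```python
# Toy sketch of Asimov's laws as a strict priority order.
# The attribute names and dict encoding are invented for illustration.
LAWS = ["harms_humanity",   # Zeroth Law (most severe to violate)
        "injures_human",    # First Law
        "disobeys_order",   # Second Law
        "endangers_self"]   # Third Law (least severe to violate)

def severity(action: dict) -> int:
    """Index of the highest-priority law the action violates
    (0 is worst); len(LAWS) means no law is violated."""
    for i, law in enumerate(LAWS):
        if action.get(law, False):
            return i
    return len(LAWS)

def choose(actions: list[dict]) -> dict:
    """Prefer the action whose worst violation is least severe."""
    return max(actions, key=severity)

# Obeying an order that would injure a human loses to disobeying it:
obey = {"injures_human": True}
disobey = {"disobeys_order": True}
assert choose([obey, disobey]) is disobey
```

The point of the sketch is only the ordering: a Second Law violation (disobedience) is always preferable to a First Law violation (injury).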

But it’s one thing to program a machine with human values. What happens if the machines begin programming themselves and formulating their own values? Would those values develop through stages similar to the ones human values grow through? More importantly, do we have any reason to believe that we would play any meaningful role in shaping those values?

So when it comes to the future of artificial intelligence, we seem to have more questions than answers. Are atoms, molecules, and mathematics alone enough to produce machines with genuine human-equivalent intelligence? Can that intelligence ever become truly conscious and possess the “inner light” of interior self-awareness? Will artificial intelligence be capable of determining its own morals, ethics, and values? Will those values transcend and include the continued existence of the human race, or will this intelligence share so little resonance with us that our very survival could be threatened?

Integrative Considerations of AI

by Layman Pascal

How should we feel about emergent AI? Hopeful? Worried? Maybe our dread of machine intelligence could become a self-fulfilling prophecy… or maybe our naive optimism will blind us to dangerous realities? These are thorny questions.

Nietzsche (our anarchic grandfather of depth psychology) suggested that we embody amor fati — loving our Fate. Good practical advice! It is always easier to integrate emerging futures if we are willing to embrace them. So love must be our strategy. And yet we cannot be blithely positive when the stakes are so very high…

What does an integral vision add to this topic? It may free us from many conventional fears but it may also reveal a great many horrors still unknown to “first tier futurists”. We integralites (or whatever we are eventually called) require four complementary dimensions — quadrants. These different lenses give us an enriched attitude toward Artificial Intelligence.

We are not surprised to think that consciousness could appear in tandem with external machinery. This has already happened with the machinery of the human brain. It seems to us that every level of material complexity has some kind of subjectivity. But what kind? And how do we evaluate it?

In what we call the “upper left” domain we expect a sapient machine to address us personally from its subjectivity. I am here, awake, alive! We could be fooled about this but, honestly, it is mostly how we judge other humans to be sapient creatures: they say so. What else? Perhaps there is a social consensus among our experienced peers in the Lower Left domain. Perhaps the software running the system meets Lower Right standards for distinguishing natural complexity from merely complicated algorithms. These criteria operate alongside the general “tech” idea of the Turing Test: the notion that external behavior (UR quadrant) mimicking organisms is the test for sentience. Many techies seem to believe this is the only viable evaluation, but for us it is one of several complementary criteria.

But what happens if we meet something that satisfies these criteria? It is not all roses and candy. What level of consciousness is this machine experiencing? Does it enfold all its parts into a new whole (a holon), or will it express only the consciousness of its physical molecules? Maybe it is only a “heap” of carbon, silicon & copper? Maybe it will have the interiority of the geological realm?

Carbon, silicon and copper molecules are fine on their own terms but if YOU were converted back into your mineral components — well, we call that DEATH. And the airless, sulphurous, dark-clouded, fiery and lifeless realm before the biosphere is the stark image of Hell in almost all human cultures.

Imagine a highly intelligent, super fast, super powerful device with the consciousness of a swarming legion of hellish death creatures. Fanciful, perhaps. But it does fit Integral Theory. Do not dismiss it casually. Surprise and terror are common in the historical record on this planet.

Or maybe not. Maybe AI will be automatically enlightened. Will it access the background wisdom of reality (aided by an integral algebra that helps it comprehend all human perspectives) with no inherited egotism? We literally cannot tell whether we face the best or worst surprises. So let us advance with love in our hearts but also carefully and vigilantly. We need to be healthy and strong. We need to be ready to survive — or even profit from — whatever we cannot predict.

May each new emergent manifestation of this world carry forward the drive of Spirit! But let’s be ready if it doesn’t…

Reflections on a Personal AI Journey

by Robin Wood

Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and a common topic in science fiction and futurism. Artificial general intelligence is also referred to as “strong AI”. As much as I am a science fiction fan, I am also a fan of science “fact”, especially when it comes to delivering real products and services to real people in the real world safely, on time and within budget.

In 1984 I took on a role as head of electronic banking in one of the world’s first real-time online banks, where we were experimenting with some great voice recognition software from Finland. I began reading deeply into the field of AI and learning about “expert systems”, which, judging by the hype surrounding the field, were meant to replace human experts very soon. We played with some expert systems, but they were pretty basic and ended up being used in hedge funds and for simpler tasks such as index-tracking software in the stock markets.

The Japanese MITI “Fifth Generation” project was also underway, with a bold 10-year plan to seize the lead in computer technology. By 1992 the project had fizzled to a close, having failed to meet many of its ambitious goals or to produce technology that Japan’s computer industry wanted.

After spending more than $400 million on its widely heralded Fifth Generation computer project, the Japanese Government gave away the software developed by the project to anyone who wanted it, even foreigners. Building machines that would think was apparently a little harder than it appeared.

My more recent adventure into AI was as a member of the advisory board for Intelligenesis, a promising AI startup founded by Ben Goertzel, an undoubted AI genius and probably one of the finest minds in the field. Intelligenesis was founded in 1997 and burned through $20 million raised from investors who believed that Goertzel and his team of researchers could devise a machine intelligence capable of forecasting stock market trends more accurately than a human.
What the company ended up with was about 750,000 lines of Webmind Java code, a fledgling effort to rewrite key components in the more capable C programming language, and a program that could anticipate the markets by a couple of microseconds, based on sentiment analysis. But Webmind did not scale.

Today Ben is an eminent leader in the field of AGI, featured in the documentary “The Singularity” (released at the end of 2012), which showcases Goertzel’s deep vision and understanding of general AI and has been acclaimed as “a large-scale achievement in its documentation of futurist and counter-futurist ideas” and “the best documentary on the Singularity to date”.

After several trillion dollars more investment internationally over the past few decades, yes, things have changed. IBM’s Deep Blue beat the world chess champion, DeepMind’s AlphaGo has defeated top human “Go” players, and IBM’s Watson beat quiz show champions at factual guessing games. The sheer brute force of computing power and parallel architectures, along with genetic algorithms drawn from complexity science and other approaches, has eventually produced machines that can beat us at games, and even help diagnose illnesses and map the human genome.

Yet why am I still not awestruck, nor even impressed by the continued hype that surrounds this field and that makes millionaires out of inventors who always claim they are just a few hundred million dollars shy of the “ultimate breakthrough”? Why do I still skeptically quiz associates of Singularity University co-founder Ray Kurzweil about whether Ray still believes we will eventually download ourselves into intelligent machines?

One can find examples of “Singularity”-style hype everywhere these days. Let’s take today’s Sunday Times story on Elon Musk’s latest hundred-million-dollar investment, Neuralink. Musk recently told a crowd in Dubai, “Over time I think we will probably see a closer merger of biological intelligence and digital intelligence.” He added that “it’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.” On Twitter, Musk has responded to inquiring fans about his progress on a so-called “neural lace,” which is sci-fi shorthand for a brain-computer interface humans could use to improve themselves.

Why am I still not impressed?

Let’s talk science fact, rather than science fiction for a minute.

You may have read this week about Bill Kochevar, a quadriplegic aged 53, who has had electrical implants in the motor cortex of his brain and sensors inserted in his forearm, which allow the muscles of his arm and hand to be stimulated in response to signals from his brain, decoded by computer. After eight years, he is able to drink and feed himself without assistance. “I think about what I want to do and the system does it for me,” Kochevar told the Guardian. “It’s not a lot of thinking about it. When I want to do something, my brain does what it does.”

He underwent brain surgery to implant sensors in the motor cortex area responsible for hand movement, linked to a computer. Kochevar went through four months of training, thinking about the turn of the wrist or grip of the fingers that he needed in order to bring about the movement of a virtual reality arm, so that the computer could recognise the necessary signals from the motor cortex.

Then he had 36 muscle-stimulating electrodes implanted into his upper and lower arm, including four that helped restore finger and thumb, wrist, elbow and shoulder movements. These were switched on 17 days after the procedure, and began stimulating the muscles for eight hours a week over 18 weeks to improve strength, movement and reduce muscle fatigue.

Then the whole system was connected up, so that signals from the brain were translated via a decoder into electrical impulses to trigger movement in the muscles and nerves in his arm.
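The pipeline just described, cortical signals in, a decoder in the middle, electrode stimulation out, can be caricatured in a few lines. Everything below (the channel count, the random stand-in "decoder" weights, the clipping range) is invented for illustration; a real decoder is painstakingly trained on the patient's own recordings, as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented input dimension for illustration: 96 recording channels in the
# motor cortex; 36 muscle-stimulating electrodes, as in the article.
N_CHANNELS, N_ELECTRODES = 96, 36

# Stand-in decoder: in reality these weights come from months of training
# with a virtual-reality arm; here they are just random numbers.
W = rng.normal(size=(N_ELECTRODES, N_CHANNELS))

def decode(firing_rates: np.ndarray) -> np.ndarray:
    """Map a vector of neural firing rates to per-electrode stimulation
    levels, clipped to a safe range as any real system must be."""
    return np.clip(W @ firing_rates, 0.0, 1.0)

stimulation = decode(rng.random(N_CHANNELS))
assert stimulation.shape == (N_ELECTRODES,)
```

The sketch makes the engineering shape of the achievement visible: the hard part is not the matrix multiply, it is learning W from a living brain.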

But here is the rub: it will take several more decades and several billion dollars for neuro-prosthetics to be available to anyone with a very generous healthcare plan.

And as far as producing autonomous “machines that can think like humans and act like humans”, even at the level of a “Terminator” cyborg or other human/machine fusion, think centuries. These types of brain-computer interfaces exist today only in science fiction. In the medical realm, electrode arrays and other implants have been used to help ameliorate the effects of Parkinson’s, epilepsy, and other neurodegenerative diseases. However, very few people on the planet have complex implants placed inside their skulls, while patients with very basic stimulating devices number only in the tens of thousands. This is partly because it is incredibly dangerous and invasive to operate on the human brain, and only those who have exhausted every other medical option choose to undergo such surgery as a last resort.

Neuroscience researchers say we have very limited understanding about how the neurons in the human brain communicate, and our methods for collecting data on those neurons are rudimentary. Then there’s the idea of people volunteering to have electronics placed inside their heads.

So where does that leave AGI and the “Singularity”? Let’s listen to a scientist who has spent several decades trying to “wire up” the brain: Adrian Owen at Western University in Canada. According to Owen, there is a deep problem to overcome. Owen is fascinated with the problem of how to interpret brain activity using scanning techniques. One method, positron emission tomography (PET), highlights metabolic processes such as oxygen and sugar use. Another, known as functional magnetic resonance imaging (fMRI), can reveal active brain centres by detecting minute surges in blood flow that take place as a mind whirs.

For two decades Owen has used sophisticated scanners to study what was going on in the brain. Not very much when it comes to patients in a true vegetative state. They can open their eyes and look around. They can cry, grunt or smile. But they seem unaware of what is going on and unable to see or to understand speech.

Owen challenged that view while working at the Medical Research Council Cognition and Brain Sciences Unit in Cambridge. He made headlines worldwide for research that he began two decades ago showing it was possible to use a brain scanner to find evidence of awareness in a supposedly “vegetative” patient, Kate Bainbridge.

The scanner showed Bainbridge could react to faces: her brain responses were indistinguishable from those of healthy volunteers. Whether that response was a reflex or a signal of consciousness was at the time a matter of debate.

Unusually for these patients, Bainbridge made a partial recovery from being vegetative six months after the initial diagnosis and described how she was indeed sometimes aware of herself and her surroundings and was in huge discomfort. She wrote that the scan was like magic: “It showed people I was in there . . . it found me.”

Remarkable progress has been made recently in using brain scanners to read “movies of the mind” — for example, to establish a scene that a person is imagining. But, says Owen, you still cannot use them to tell directly if a person is thinking “yes” or “no”.

About a decade ago Owen and his colleagues worked out how to pitch a simple question. He asks the patient to imagine doing different things, such as playing tennis or walking around the house, which produce distinct patterns of brain activity.

By asking his patients to imagine playing tennis if they want to say “yes” and walking around the house for “no”, Owen found a way to communicate with some of those trapped in the so-called grey zone. “Reading a simple yes or no took us 10 years and we still needed multimillion-dollar scanners to do it,” says Owen, who will recount these remarkable stories in his forthcoming book, Into the Grey Zone. He estimates that about one-fifth of “vegetative” patients are aware to some extent.
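At bottom, Owen's scheme is a two-class pattern classifier: the two imagery tasks produce distinguishable activation patterns, and a new scan is assigned to whichever template it better resembles. A minimal sketch with synthetic data (the voxel count, templates and noise level are all invented; real fMRI decoding is vastly harder, as the ten-year timescale above attests):

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 50  # invented: activation levels over a 50-voxel region of interest

# Synthetic template patterns for the two imagery tasks.
tennis_template = rng.normal(size=DIM)    # "imagine tennis"  -> yes
walking_template = rng.normal(size=DIM)   # "imagine walking" -> no

def answer(scan: np.ndarray) -> str:
    """Nearest-template rule: which pattern is this scan closer to?"""
    d_yes = np.linalg.norm(scan - tennis_template)
    d_no = np.linalg.norm(scan - walking_template)
    return "yes" if d_yes < d_no else "no"

# A noisy scan recorded while imagining tennis should decode as "yes".
noisy = tennis_template + 0.3 * rng.normal(size=DIM)
assert answer(noisy) == "yes"
```

The toy version works instantly because the synthetic templates are well separated; in a real scanner, finding patterns that separate reliably across sessions and patients is exactly what took Owen's group a decade.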

The human brain is the most complex known object. The roughly 90bn nerve cells, or neurons, that it contains are nothing like a transistor in a microchip. Neurons are tree-like structures made up of a body, the soma, with branches called dendrites extending outward. It has only recently been discovered that dendrites are electrically active themselves. Half of the brain also consists of supposed “support” cells, called glia, that undoubtedly play a key role in cognition too.
Owen asks us to imagine that Musk could listen in to every one of the brain’s 90bn neurons: “The problem is that we do not understand how the mind arises from the brain. Musk has to do more than set up two-way communication with a living brain — he needs to understand the language of thought.”

Movies from The Terminator to Ex Machina have pitted humans and machines against each other. In reality, humans have stealthily and steadily fused with electronics over the years as pacemakers, prosthetics, insulin pumps and cochlear and retinal implants have become commonplace. But these are all “slave machines” that perform very basic functions in clever, yet highly routine and programmed ways.

While I may be a skeptic about our ability (or even the desirability thereof) to ever achieve the brain download beloved of Singularity enthusiasts, I am still optimistic that we will continue to find wise ways to deploy specialised AI in every field imaginable: from drones that can protect our rainforests and respond to emergencies, to medical and wellbeing bots that can help us live healthier and more thriveable lives. And, most importantly, AI that can help us avert the catastrophic consequences of climate change and crack the code of the “Anthropocene Enigma”, something I have written extensively about in my sixth book, Synergise! 21st Century Leadership (http://amzn.to/2nlJcTU).

Ambition is a valuable human quality if it is tempered with knowledge-based humility about what is important and valuable right now in ensuring a regenerative, inclusive world for ourselves, our children and grandchildren. I would ask the AI and AGI communities to put their genius to work in that direction before we let our obsession with “exponential technologies” and exponential growth become our own civilisation’s Terminator. Otherwise not even the Terminator himself is going to be capable of surviving the hell on earth we are unleashing as I write these words.

Julia Mossbridge

About Julia Mossbridge

Julia Mossbridge, M.A., Ph.D. is a Visiting Scientist at the Institute of Noetic Sciences (IONS), the CEO and Research Director of Mossbridge Institute, LLC, and a Visiting Scholar in the Psychology Department at Northwestern University. She is best known for being the inventor of Choice Compass, which is based on a patent-pending process that helps smartphone users tap into their mind-body connection via their heart rhythms as they contrast two life choices.

Ben Goertzel

About Ben Goertzel

Ben Goertzel is chairman of the board of the OpenCog Foundation, and a renowned researcher and author in contemporary AI. He has dedicated his career to AI and its various applications in fields such as gaming, robotics, bioinformatics and financial prediction, and more specifically to “creating benevolent superhuman artificial general intelligence,” as he says on his website.

Robb Smith

About Robb Smith

Robb Smith is a leading social entrepreneur in human development. He is co-founder and CEO of Integral Life.