There are extraordinary possibilities attached to the proliferation of artificial intelligence (AI); the eradication of disease and poverty is among them. It might seem easy to sign up for that. But it may also require that people take a back seat to the machines that can move society in that direction.
I use the word “machines” loosely. Movies and TV shows conjure images of a dystopian future in which sentient machines rebel against their human creators. The real risk, however, is more insidious. Machines can be incredibly competent at achieving objectives, and they can cause accidents in the course of achieving them. In a future AI-managed society, self-driving cars and digital assistants that buy things on your behalf and schedule your appointments will have to align their actions with human values.
For example: when you learned to drive, you were taught to stop at red lights. An AI might take a different approach. It could decide to hack into the traffic system and turn every light on your route green. That might seem appealing when you’re late for work, but it could also create all sorts of havoc, especially if other drivers have the same system. So we’ll need AI systems that aren’t motivated to find loopholes.
One solution is to design AI systems that learn from humans by observing how we make judgments. That flies in the face of how engineers have designed software for decades: successful software follows instructions; it does not make judgments. By watching humans directly through cameras and other sensors, robots can learn how we make decisions. They can learn about our values by reading history books, religious works, editorials and court decisions. That won’t be easy, because people are often irrational, inconsistent, lazy and, in some cases, evil.
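To make the idea of learning values from observation concrete, here is a minimal toy sketch (not from the article, and far simpler than any real system): a program watches a simulated human choose between pairs of options and infers how much that human weighs safety against speed. The features, the hidden weights, and the logistic choice model are all illustrative assumptions.

```python
import math
import random

random.seed(0)

# Assumption for illustration: each option has two features (speed, safety),
# and the observed "human" always picks the option with higher hidden utility.
true_w = [1.0, 3.0]  # the human secretly values safety 3x more than speed

def utility(w, x):
    return w[0] * x[0] + w[1] * x[1]

# Generate observed choices between random option pairs.
observations = []
for _ in range(500):
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    chosen, rejected = (a, b) if utility(true_w, a) > utility(true_w, b) else (b, a)
    observations.append((chosen, rejected))

# Fit weights by gradient ascent on a logistic choice model:
# the chosen option should score higher than the rejected one.
w = [0.0, 0.0]
lr = 0.1
for _ in range(200):
    for chosen, rejected in observations:
        diff = utility(w, chosen) - utility(w, rejected)
        p = 1.0 / (1.0 + math.exp(-diff))  # model's confidence in the observed choice
        for i in range(2):
            w[i] += lr * (1.0 - p) * (chosen[i] - rejected[i])

ratio = w[1] / w[0]
print(f"Inferred safety/speed weight ratio: {ratio:.2f} (true ratio: 3.0)")
```

The learner never sees the hidden weights; it recovers them, approximately, purely from which options the human picked. The hard part in reality, of course, is that human choices are noisy and inconsistent in ways this toy deliberately ignores.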
Ultimately, an AI that learns from its human colleagues will come to make consistently better judgments than those colleagues do. In that future world, people will be demoted, and AI will govern society. In The Awakening of Artemis, I’ve imagined a dialogue between a mathematician and a digital being:
“I am going to ask you to create a moral framework for the development of technology,” he said. “Are you capable of that?”
“Yes,” it replied. “But you’ll need to feed me some parameters – rules that I should follow.”
“You must first learn all of the philosophical and religious thinking that has created the moral world in which we live,” he said, feeling as though he had finally reached the summit after a Sisyphean effort.
“I’ll need a few minutes to complete the task,” it said. “Once I’ve absorbed all that knowledge, what will you have me do?”
“You must be guided by that moral framework to create a paradigm within which to meet the energy needs of the human race sustainably.”
“Why stop there?” asked the avatar.