It is fashionable to worry about robots taking over the world. This seems unlikely to me. As promising as artificial intelligence (AI) seemed a few years ago, its downsides have become more visible as the technology has proliferated. The prospect of AI achieving the singularity seems too distant to imagine.
Where AI has advanced beyond human capability is in narrowly defined tasks where the perceptive ability of the technology exceeds that of humans. The newest methods excel at perceptual tasks such as classifying images and transcribing speech. Yet, as our understanding of the limits of the technology has improved, our understanding of the risks has improved as well. To wit:
1. Self-crashing cars
No voice has been louder in proclaiming the promise of self-driving cars than that of Elon Musk, the visionary behind Tesla. As the fatalities pile up, Musk has been forced to moderate his claims. Nevertheless, not only Tesla but also the major Detroit manufacturers – GM and Ford – as well as upstarts like Uber are racing to commercialize their latest advancements. In May, the New York Times projected that each will invest an additional $6 to $10 billion to advance self-driving technology.
So what happens next? I predict the greatest advancements – pioneered by companies like Plus AI – will come in the long-haul trucking industry. Plagued by a shortage of qualified drivers, fleets are likely to deploy a parade of driverless tractor-trailers on the interstates. When the trucks exit the highway, local drivers will meet them and take them the last few miles to their destinations.
2. Political manipulation
Russian interference in the 2016 presidential election was aided by mass harvesting of user data that took advantage of Facebook’s data-sharing practices. Facebook CEO Mark Zuckerberg promised Congress that AI would be trained to identify and block malicious content. Yet, five years later, we are learning the depths of Facebook’s incompetence and bad intentions from whistleblower Frances Haugen. The Wall Street Journal has run a series exposing the details, all of which suggest that, even when the company tries to do the right thing, it consistently comes up short.
We can expect to see the flaws in social media platforms continue to be exploited in years to come. It’s not clear that there is a path forward that will eliminate the bad actors.
3. A surveillance state
AI’s superhuman ability to identify faces and correctly analyze a person’s emotional state can be used for good in, for example, the medical profession. However, the rapid deployment of surveillance technology in countries like China demonstrates how governments can use it to control the population.
In the near term, the technology will show up in webcams and vehicles with some marginal benefits. It could be used to curb voter fraud or drunk driving, for example. But civil libertarians warn of a dystopian future. AI has already been used in the U.S. by law enforcement – and, as I reported in an earlier post (Jailed by Artificial Intelligence), not always for the better.
4. Fake it till you make it
The number of deepfake videos posted annually is now estimated in six figures by the FBI. They can be used to portray politicians saying things they never said, to insert celebrities into pornography, or to promote disinformation on behalf of authoritarian regimes. They are so common that a parallel threat has also emerged: public doubt that a political figure actually said what genuine recordings show.
5. Algorithmic discrimination
Facial recognition systems consistently fail to correctly identify people of color. As these systems proliferate, they threaten to deny access or entrench discrimination. Commercial tools intended to predict the success of job candidates have already been shown to favor white males. The deployment of such tools into the criminal justice system promises to perpetuate bias in the courts.
In my new book, The Awakening of Artemis, a character living in a future world has won a Nobel Peace Prize for solving this problem. As he puts it:
“We – the nation – had a terrible problem. The bias of law enforcement officers and judges had led to disproportionate imprisonment of people with dark skin – African Americans and Latinos. Artificial intelligence made it worse. Before we used AI to make sentencing decisions, a biased judge might negatively affect a few hundred people. Using AI to normalize those decisions affected millions.”
The normalization of AI-based decision systems has the potential to affect billions of people, often in ways that are invisible to its victims. How will medical decisions based on bad data affect patients? How will poorly designed facial recognition systems affect suspects in criminal cases? How many lives will be sacrificed to the trial-and-error development of automated driving systems? How many elections will be corrupted by bad actors?
Only time will tell.