
The Existential Threat of Artificial Intelligence



You may have heard the claim that the pathogen responsible for nearly seven million deaths was accidentally released from a lab in Wuhan, China. There are fifty-nine laboratories in the World Health Organization’s Emerging and Dangerous Pathogens Laboratory Network, which sets safety standards. Nevertheless, the pathogen that caused a global pandemic may have escaped somehow.

 

You may also have heard about the rogue Pakistani scientist Abdul Qadeer Khan, who acquired European centrifuge designs that enabled Pakistan to develop nuclear weapons, a first in the Muslim world. He later sold that technology to Iran and North Korea.

 

You may have heard of the artificial intelligence (AI) that escaped from the computers on which its programmers created it, causing havoc and killing millions.

 

Well, maybe you haven’t heard about this last one because it hasn’t happened—yet!

 

The list of potential benefits of AI is long. And some of those benefits could improve the lives of people around the world. Perhaps most promising is the combination of AI with DNA analysis to create medical treatments that are unique to each patient. If successful, the resulting technology could enable people to live healthy and functional lives past their hundredth birthdays. Aside from medical outcomes, AI can improve business efficiency, food production, and the quality of our work. Yet AI also presents clear dangers that must be addressed. If we fail to do so, we face an existential threat.

 

Does that seem far-fetched to you?

 

Generative AI takes its assigned tasks literally. And it’s difficult for us fallible humans to write instructions for those tasks without risking unanticipated effects.

 

We accept certain risks in exchange for utility. Automobiles create traffic, pollution, deaths, and injuries. Yet we accept those risks, and we regulate to mitigate the worst side effects. As a global society, we endeavor to contain the worst consequences of the most dangerous technologies. Nuclear weapons are the best example: treaties and international organizations exist to ensure compliance with rules controlling their spread. In neither case are we perfect. There are still traffic deaths, and nuclear weapons are proliferating outside the global treaty structure we’ve created.

 

So, imagine the risks of a technology that teaches itself to be more effective and has the potential to outthink its human masters. And there are accelerants—bad actors ranging from nation-states like China and Iran down to individuals who might seek to terrorize whole societies.

 

Hardware like unmanned aerial vehicles (drones) that carry weapons could be enabled to select their own targets and fire without human intervention. Imagine such a weapon in the hands of a terrorist. The AI could decide that tourists on a Danube River cruise, representatives of the Great Satan, should be executed as they disembark in Budapest. Or pathogens like COVID-19 could be engineered and released into the global population without human intervention.

 

While nuclear technology is complex, expensive, and difficult to develop and weaponize, the barriers to entry for AI are much lower. It can be acquired cheaply and deployed by anyone who can write computer code—literally millions of people. Imagine the havoc that someone like Ted Kaczynski, the so-called Unabomber, might have unleashed with generative AI at his disposal.

 

The European Union has been leading the charge to regulate AI. However, regulators tend to defend against the last crisis and rarely foresee the next one. Technocrats with the vision to anticipate the worst and control development without stifling innovation are a rare breed. Their task is arduous: legislation of this kind takes years to develop, while the technology advances daily.

 

So, governing bodies have to ask themselves a broad array of questions:

 

1. Does the technology offer the potential for geopolitical strategic advantage?

 

2. Does it have autonomous capabilities?

 

3. Is the cost of development falling while its pace accelerates?

 

And then they must create a regulatory structure that addresses the risks those answers reveal. Ultimately, we need a global body to reduce the risk, not unlike the institutions we use to curb the proliferation of nuclear weapons. A practical framework will be challenging to put in place and may take decades for international bodies to implement.

 

But we must address the risks if we are to enjoy the benefits without fear of devastating the human race.


