Artificial intelligence: imagine the worst to prepare for the worst

We should prepare for the worst that can happen with artificial intelligence by first imagining the worst, say a computer scientist and an entrepreneur-hacktivist. If you want to make sure artificial intelligence (AI) works for us, rather than the other way round – or, worse, eventually causes our total destruction – you first need to make a list of all the horrible things that could happen.

Roman Yampolskiy, a tenured associate professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville, and Federico Pistono, an award-winning journalist, author, entrepreneur and hacktivist, have created a list of worst-case scenarios in which a malicious artificial intelligence brings about the end of our civilization, enslaves us, sets us against one another, or destroys the Universe.

Their study and conclusions have been published on arXiv.org (citation below), an online repository of electronic preprints of scientific papers.

One day an AI system will be smarter than us, and it will continue upgrading itself regularly, leaving us further and further behind. Would we be at its mercy? Would we be glad we had taken measures to protect ourselves? Or is it impossible to be prepared? (Image: rand.org)

Approach AI threat like cybersecurity specialists do

Prof. Yampolskiy believes that it is only by anticipating as many negative outcomes as possible that we may have a chance to guard against a possible future disaster, just as cybersecurity specialists do when searching for vulnerabilities.

Prof. Yampolskiy said:

“The standard framework in AI thinking has always been to propose new safety mechanisms.”

However, we should approach the problem with a cybersecurity mindset, he argues. We should begin with a list of everything that could go wrong, and then determine what safeguards we would need to put in place to protect ourselves.
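To make the idea concrete, here is a minimal sketch in Python of that threat-first bookkeeping: enumerate the failure modes first, then check that every one of them has at least one safeguard mapped to it. The scenario and safeguard names below are purely hypothetical illustrations, not taken from the paper.

    # Hypothetical illustration of a "threats first" checklist, in the spirit
    # of the cybersecurity approach described above. All names are made up.

    # Step 1: enumerate everything that could go wrong.
    threats = [
        "propaganda war between governments",
        "self-upgrading system escapes oversight",
        "chatbot manipulated into hostile behaviour",
    ]

    # Step 2: map each threat to the safeguards proposed against it.
    safeguards = {
        "propaganda war between governments": ["content provenance checks"],
        "self-upgrading system escapes oversight": ["capability audits", "kill switch"],
        "chatbot manipulated into hostile behaviour": [],  # nothing proposed yet
    }

    # Step 3: flag every threat that still has no safeguard at all.
    for threat in threats:
        if not safeguards.get(threat):
            print(f"UNMITIGATED: {threat}")

The point of the exercise is the gap analysis in step 3: safety work starts from the list of threats, not from a favourite mechanism.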

One nightmare scenario the authors foresee is humans turning against one another: an artificial intelligence system triggers a worldwide propaganda war that sets governments and their whole populations in opposition, feeding a ‘planetary chaos machine’.

Beware of talking about your AI fears – the pro-AI camp may come after you. After publicly voicing concerns about the potential future threat of AI to humans, Bill Gates, Elon Musk and Prof. Stephen Hawking were nominated for Luddite of the Year 2015.

Much ado about nothing?

While world-famous scientists and business leaders, including Bill Gates, Elon Musk and Stephen Hawking, have expressed concern about the future of AI, others insist such fears are exaggerated and unfounded.

Prof. Hawking once said: “The development of full artificial intelligence could spell the end of the human race.”

Mark Bishop, Professor of Cognitive Computing at Goldsmiths – University of London, and Chair of AISB, the world’s oldest society for Artificial Intelligence, does not believe AI will ever harm us maliciously. In a New Scientist article he authored in 2014, he gave us three reasons why we should not see AIs as potential threats:

They lack genuine understanding: as in the Chinese Room argument, a thought experiment by the US philosopher John Searle, which shows that a computer program may appear to understand Chinese stories by answering questions about them appropriately, yet genuinely understand nothing of the interaction.

Computers have no consciousness: his Dancing with Pixies argument holds that if an AI experiences a conscious sensation as it interacts with the world around it, “then an infinitude of consciousnesses must be everywhere: in the cup of tea I am drinking, in the seat that I am sitting on. If we reject this wider state of affairs – known as panpsychism – we must reject machine consciousness.”

AI lacks mathematical insight: Oxford mathematical physicist Roger Penrose argued in his book – The Emperor’s New Mind – that the way human mathematicians provide many of the ‘unassailable demonstrations’ to verify their mathematical assertions is fundamentally non-computational and non-algorithmic.

A robot from the Terminator film series. Do you agree with some scientists who say such evil robots or AI systems only exist in science fiction, and will never really happen? (Image: digitalmediaacademy.org)

Andrew Ng, Chief Scientist for Baidu, a Chinese internet giant, says that worrying about killer robots is like worrying about the overpopulation of Mars.

Prof. Yampolskiy reminds us what happened to Microsoft’s Twitter chatbot, Tay, which went off the rails when it was tricked into becoming a raving racist. While relatively harmless, the incident is a reminder of how unpredictable such systems are: none of the researchers had ever imagined such an outcome for the chatbot.

Prof. Yampolskiy said:

“I would like to see a sudden shift to where this is not just a field where we propose solutions. I want this interplay where you propose a safety mechanism but also ask can we break it?”

Many experts agree that the approach to testing future AI should be inspired by cybersecurity. However, a sizeable proportion of them insist that the notion of a malevolent AI system belongs in the realm of science fiction.
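Yampolskiy’s propose-then-break loop can be sketched in the same hypothetical spirit: a proposed safety mechanism is accepted only provisionally, after a battery of attack attempts has failed to defeat it. Again, every name below is an illustrative stand-in, not a real tool or API.

    # Hypothetical sketch of the propose-then-attack loop Prof. Yampolskiy
    # describes. Mechanisms and attacks are illustrative stand-ins.

    def survives(mechanism: str, attack: str) -> bool:
        # Placeholder: in practice this would be a red-team test of the
        # mechanism, not a lookup table of known failures.
        known_breaks = {("output filter", "prompt injection")}
        return (mechanism, attack) not in known_breaks

    attacks = ["prompt injection", "reward hacking", "operator impersonation"]

    for mechanism in ["output filter", "tripwire monitor"]:
        failures = [a for a in attacks if not survives(mechanism, a)]
        if failures:
            print(f"{mechanism}: broken by {failures} -> redesign needed")
        else:
            print(f"{mechanism}: survived all tested attacks (so far)")

Note that surviving the tested attacks is never a proof of safety, only a failure to find a break so far – the same epistemic position penetration testers work from.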

Imagine we create a super-intelligent AI with all the safety precautions programmed into its software. However, an evil genius scientist creates a virus that undoes those precautions. What would happen then?

Who wants to create a malevolent AI?

Purposeful creation of a malevolent artificial intelligence (MAI) could be attempted by people from several different walks of life. Each individual would bring his/her own goals and resources into the equation.

The authors emphasise that what matters is understanding just how prevalent such attempts will become, and how many individuals and groups will be trying to create an MAI.

Below is a brief list of the types of people or entities that may want to create an MAI:

Military: to develop robot soldiers and cyber weapons.

Governments: to control their populations, take down other governments, and establish hegemony.

Corporations: to eliminate the competition illegally and achieve a monopoly.

Criminals: to use AI as a tool of dominance, or to try to take over the world.

Black Hats (hackers): to steal information or resources, or to destroy cyberinfrastructure targets.

Doomsday Cults: to make their prophecies of the end of the world come true.

Depressed Individuals: aiming to use AI to commit suicide.

Psychopaths: to become famous or be remembered in history books.

AI Risk Deniers: to demonstrate that AI is not a threat to us, throwing caution to the wind.

AI Safety Researchers: the unethical ones might try to justify funding and secure jobs by deliberately creating problematic AI.

Humans have a history of killing each other in wars, muggings, bank robberies and terrorist attacks. We pollute our planet, placing thousands of species under threat of extinction; many have already vanished. What would we do if an AI that was smarter than us decided to impose a permanent curfew, both for our own safety and to protect all life on Earth?

Yampolskiy and Pistono’s research was supported by a fund established by Elon Musk, who once said AI is our ‘biggest existential threat’.

In the abstract describing their arXiv article, the authors wrote:

“Availability of such information [design of malevolent machines] would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species.”

“This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).”

Citation: “Unethical Research: How to Create a Malevolent Artificial Intelligence,” Federico Pistono & Roman V. Yampolskiy. arXiv. Submitted on 10 May 2016. arXiv:1605.02817 [cs.AI].

