Super-intelligent robots may destroy humans, scientists warn

Super-intelligent robots may destroy humans, a group of scientists from the University of Oxford warns. They also see nuclear war, climate change, natural disasters such as asteroid impacts or super-volcanic eruptions, and a killer virus spreading across the world as other potential killers of humanity.

They believe that governments in Britain, the European Union, the United States, Canada, Japan and the rest of the world are failing miserably to prepare properly for looming catastrophes, some of them of our own making.

We may laugh at such a sci-fi-sounding warning – robots could wipe us all out – but the authors of a report titled ‘Global Catastrophic Risks’ say these apocalyptic dangers are more likely to become reality than many of us realise.

Front cover of the new report ‘Global Catastrophic Risks’. Authors: Owen Cotton-Barratt*†, Sebastian Farquhar*, John Halstead*, Stefan Schubert* and Andrew Snyder-Beattie†. (* = Global Priorities Project. † = Future of Humanity Institute, Oxford University)

The new report, compiled by experts from the Global Priorities Project, Oxford University and the Global Challenges Foundation, lists threats that could kill at least 10% of the world’s human population.

They warn that while most generations never have experienced, and probably never will experience, a global catastrophe, the threats are real. Just under 100 years ago, in 1918, Spanish Flu killed tens of millions of people – between 2.5% and 5% of the world’s population.


In the short term – over the next five years – super-volcanic eruptions, asteroid impacts, and unknown risks rank as the greatest threats to humanity.

The authors urge the international community to take these threats seriously and improve planning for health systems and pandemics, investigate more thoroughly the possible risks of artificial intelligence and biotechnology, and continue reducing the number of nuclear weapons in our arsenals.

In future, AIs will be able to upgrade their own systems, far more often and more rapidly than the human brain can evolve. One day AIs will become more intelligent than us. If we don’t set things up properly, we could be developing systems that may eventually do us harm, or even destroy us. (Image: eng-cs.syr.edu)

To reduce the risk of a global catastrophe caused by emerging technologies, the authors wrote:

“Research communities should further investigate the possible risks from emerging capabilities in biotechnology and artificial intelligence, and possible solutions.”

“Policymakers could work with researchers to understand the issues that may arise with these new technologies, and start to lay groundwork for planned adaptive risk regulation.”



We already take steps to reduce tiny risks

Although the risk of dying in a car crash is tiny for each individual driver and passenger, we take steps to reduce the risk by wearing seat belts and driving safely – we even take out insurance.

National governments take steps to reduce the risk of rare natural disasters, such as hurricanes and earthquakes, or to soften their consequences. In the same way, the authors say, the global community must work together to reduce the risk of catastrophic events that could have global reach.

Sebastian Farquhar, of the Global Priorities Project, said at a press conference:

“There are some things that are on the horizon, things that probably won’t happen in any one year but could happen, which could completely reshape our world and do so in a really devastating and disastrous way. History teaches us that many of these things are more likely than we intuitively think.”

“Many of these risks are changing and growing as technologies change and grow and reshape our world. But there are also things we can do about the risks.”

Professor Stephen Hawking fears that artificial intelligence, if we do not prepare the way carefully, could eventually wipe us out. (Image: hawking.org.uk)

According to the Future of Humanity Institute, part of the University of Oxford’s Faculty of Philosophy, some of the predicted destructive events might not cause our destruction directly, but could trigger a chain of events that does:

“For many types of destructive events, much of the damage results from second-order impacts on social order; therefore the risks of social disruption and collapse are related to the risks of nuclear terrorism or pandemic disease.”

“Apparently dissimilar events such as large asteroid impacts, volcanic super-eruptions, and nuclear war would all eject massive amounts of soot and aerosols into the atmosphere, with significant effects on global climate. The existence of these causal linkages is one reason why it is sensible to study multiple risks together.”

Elon Musk, an engineer and entrepreneur who makes rockets and electric cars, and who co-founded PayPal, also worries that AI could one day become our ultimate nightmare.

Benefits and dangers of tomorrow’s smart robots

Smart robots could significantly improve the quality of life for millions of people globally. However, one day they are likely to become more intelligent than us, and eventually make decisions on our behalf ‘for our own good’ that we won’t like.

Elon Musk and Prof. Stephen Hawking have publicly expressed their worries about the future consequences of advancing artificial intelligence (AI).

AI could become a wonderful blessing for us, or our worst nightmare. Some experts worry that super-advanced AI could one day see humans as an obstacle, or a threat to life on Earth, and decide to remove us, i.e. wipe us out.

What would an AI more intelligent than humans think of the way we look after Earth’s environment, how we treat animals, and even how we treat each other?

The ever-faster pace of automation development prompted twenty-five scholars from various countries to form the Foundation for Responsible Robotics.

Bill Gates (above), founder of Microsoft Corporation, together with Stephen Hawking and Elon Musk, was nominated for ‘Luddite of the Year’ for expressing fears about the future of AI. (Image: aib.edu.au)

The Foundation for Responsible Robotics says its mission is:

“To promote the responsible design, development, implementation, and policy of robots embedded in our society. Our goal is to influence the future development and application of robotics such that it embeds the standards, methods, principles, capabilities, and policy points, as they relate to the responsible design and deployment of robotic systems.”

“We see both the definition of responsible robotics and the means for achieving it as on-going tasks that will evolve alongside the technology of robotics. Of great significance is that the FRR aims to be proactive and assistive to the robotics industry in a way that allows for the ethical, legal, and societal issues to be incorporated into design, development, and policy.”

Stephen Hawking warns of human self-destruction risk

In January, during this year’s BBC Reith Lectures, Prof. Stephen Hawking warned that there is a growing risk of the human race destroying itself, perhaps through artificial intelligence, climate change, or a genetically engineered virus – all threats of our own making.

His gloom-and-doom warning came after a member of the audience asked him: “Do you think the world will end naturally or will man destroy it first?”

When an AI becomes more intelligent than us, it will be able to make new robots without our help. If we do not prepare carefully, what will there be to stop a super-smart AI from making an army of machines to destroy us? (Image: rl337.org)

Prof. Hawking answered:

“We face a number of threats to our survival from nuclear war, catastrophic global warming, and genetically engineered viruses. The number is likely to increase in the future, with the development of new technologies, and new ways things can go wrong.”

“Although the chance of a disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next thousand or 10,000 years. By that time, we should have spread out into space, and to other stars, so a disaster on Earth would not mean the end of the human race.”

“However, we will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period.”
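Prof. Hawking’s point that a low yearly risk ‘adds up over time’ is simple compound probability. The sketch below is purely illustrative – the 0.1% annual disaster probability is a made-up figure, not one taken from the report or the lecture – but it shows how the chance of at least one disaster approaches certainty over long horizons:

```python
# Illustration of how a small annual risk compounds over time.
# The annual probability is hypothetical, chosen for illustration only;
# it is not a figure from the report or from Prof. Hawking's lecture.
annual_probability = 0.001  # hypothetical 0.1% chance of disaster per year

for years in (100, 1_000, 10_000):
    # Chance of at least one disaster within `years`, assuming
    # independent years: 1 - (1 - p)^years
    cumulative = 1 - (1 - annual_probability) ** years
    print(f"Within {years:>6} years: {cumulative:.2%}")
```

With these hypothetical numbers, the chance of at least one catastrophe is roughly 10% within a century, about 63% within 1,000 years, and over 99.99% within 10,000 years – the ‘near certainty’ Prof. Hawking describes.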

Prof. Hawking, together with other eminent scientists and computer experts across the world, has regularly expressed concern about the risks we may face in the future when AI becomes super smart.

In a BBC interview in 2014, Prof. Hawking warned that AI could spell the end of the human race.

He explained that the basic forms of AI that we have developed so far have been extremely useful and beneficial for humans. However, he wonders what might happen when AI matches our intelligence and then overtakes us, i.e. leaves us far behind in the IQ league tables.

Regarding future AI, Prof. Hawking said:

“It [AI] would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Regarding scientific progress in the UK, EU, USA and other democratic nations, Prof. Hawking said:

“It’s important to ensure that these changes are heading in the right directions. In a democratic society, this means that everyone needs to have a basic understanding of science to make informed decisions about the future.”

“So communicate plainly what you are trying to do in science, and who knows, you might even end up understanding it yourself.”


Video – Artificial Intelligence

AI, or Artificial Intelligence, refers to software that makes computers, robots, and other machines ‘intelligent.’ In other words, it makes them mimic how we think and behave.