The threat of World War III is with us constantly, says Elon Musk, the business magnate, engineer, CEO of Tesla Motors, CEO and CTO of SpaceX, and Chairman of SolarCity. He believes the world is full of flashpoints that could suddenly escalate and develop into disasters on a global scale.
Imagine Donald Trump winning the US election and, upset because Mexico won't pay for the wall along the border, pushing the nuclear button; Russian troops invading NATO member Norway after a GPS malfunction; a Russian-Turkish standoff going too far; or a wave of Chinese-Japanese rhetoric and threats over disputed islands developing into something more sinister.
North and South Korea could go to war at any time: the United States and Japan would join on South Korea's side, while China would help its northern neighbour; the European Union, Australia and New Zealand would side with the USA and Japan; and Russia would look on, considering whether to help China…
Albert Einstein said: “I do not know how the Third World War will be fought, but I can tell you what they will use in the Fourth — rocks!”
A ‘new golden age of peace’ was followed by WWI
In an interview with GQ, Mr. Musk said he does not think we can discount the possibility of World War III.
Mr. Musk said:
“You know, in 1912 they were proclaiming a new age of peace and prosperity, saying that it was a golden age, war was over. And then you had World War I followed by World War II followed by the Cold War.”
“So I think we need to acknowledge that there’s certainly a possibility of a Third World War, and if that does occur it could be far worse than anything that’s happened before.”
Humanity has not always moved forward
Mr. Musk wonders what would happen if nuclear weapons were used in the next World War. There could be a powerful anti-technology movement, or religious extremism may suddenly mushroom. “Like, I mean, does ISIS grow…?”
Most people instinctively assume that technology will continue progressing relentlessly. However, there have been times in our history, Mr. Musk points out – after the many advances of the Roman Empire, or following the building of the magnificent pyramids in Egypt – when the civilizations that followed could no longer achieve what had been accomplished before.
He wonders whether there may be a misplaced arrogance and complacency in assuming history won’t repeat itself.
The pyramids of Egypt were built several thousand years ago. That amazing engineering achievement was not matched again for several thousand years. Humankind sometimes takes a backward step before moving forward again.
Today’s tech industry is sometimes linked to complacency and arrogance, he believes. He suggests that fears of World War III could set back humankind in several ways.
Colonizing Mars may be necessary
We may have just a relatively short window to set up a colony on Mars, says Mr. Musk, who wonders whether events on Earth may soon not only halt technological progress, but even put things into reverse.
We could be forgiven for describing his concerns as a bit far-fetched, he adds. However, it is easy to kid ourselves that humankind will always become smarter.
Professor Stephen Hawking wonders whether artificial intelligence might one day destroy us.
Humans do stupid things
Mr. Musk points out that as individuals, we all do amazingly foolish things. Look at the heads of some countries, and those trying to get into power, and you will be astonished at how stupid and reckless humans can be.
Regarding Mr. Musk, Chris Matyszczyk writes in C|net:
“It’s heartening that someone so committed to hyperloops, electric cars and rockets to Mars is still level-headed enough to realize that humans don’t always make sensible decisions.”
Mr. Musk is also scared of some aspects of ‘progress’
Mr. Musk, together with some famous scientists, including eminent theoretical physicist and mathematician Professor Stephen Hawking, has often expressed the fear that artificial intelligence and super-advanced robots could one day spell the end of humanity.
Don’t assume humans will always become progressively smarter, says Elon Musk. (Image: Facebook)
In 2014, Mr. Musk spoke out against artificial intelligence (AI), declaring to students from the Massachusetts Institute of Technology (MIT), during an interview at the AeroAstro Centennial Symposium, that AI is the most serious threat to the survival of the human race.
Mr. Musk was quoted by The Guardian as saying:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.”
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
In February this year, Stephen Hawking was quoted by the University of Oxford as saying he feared artificial intelligence. He also mentioned that if not controlled and monitored very carefully, super-smart robots could eventually bring about the destruction of humanity.
In an interview with the BBC in December 2014, while talking about the technology he uses to communicate, which involves a basic form of AI, Prof. Hawking said: “The development of full artificial intelligence could spell the end of the human race.”
So far, he added, all current forms of AI have proved very useful for humans. But this technology is still relatively ‘primitive’. He wonders what will happen when we create something that matches or surpasses human intelligence.
“It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded,” Prof. Hawking warned.
Imagine we create an advanced artificial intelligence in the future that keeps upgrading itself until one day it decides that the biggest danger to this planet is us. What measures would it take to ‘protect’ it?
Video – Is Elon Musk the superhero or supervillain?