Alien contact could destroy us all, warns Professor Stephen Hawking. We are sending out ‘greetings from Earth’ messages into deep space in the hope that intelligent life forms out there may respond. If an ultra-advanced civilization receives one of these greetings and comes and makes contact, we would be totally at their mercy – a huge risk for humankind.
Any life form out there that was able to receive our messages, understand them and travel possibly thousands or millions of light years to greet us would be thousands, millions or perhaps billions of years more technologically advanced than we are.
What would alien contact lead to if the extraterrestrials saw us as being as primitive and insignificant as we view microbes on Earth? Would they feel we deserved compassion, or would they treat us the way we treat very basic life forms on our planet?
Imagine traveling in your own super-fast spaceship to anywhere you like in the Universe. That is what Prof. Hawking does in this short movie. Would you aim for alien contact? (Image: twitter.com/CuriosityStream)
Alien contact may not turn out well for us
Professor Hawking warns that an advanced civilization might treat us like the Europeans treated the Native Americans – “things didn’t turn out so well for them,” he commented.
Prof. Hawking appears in a film – Stephen Hawking’s Favorite Places – in which he travels across the Universe in a spacecraft. In the video, which has been posted online, he travels to Gliese 832c, an exoplanet located 4.93 parsecs or 16 light years (151,400,000,000,000 km) away in the constellation of Grus.
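The distance figures quoted above are consistent with each other; a quick sketch using standard astronomical conversion factors (the constants below are well-established values, not taken from the article) shows how 4.93 parsecs works out to roughly 16 light years, or about 151 trillion km:

```python
# Verifying the quoted distance to Gliese 832c using
# standard astronomical conversion factors.
LY_IN_KM = 9.4607e12      # kilometres in one light year
LY_PER_PARSEC = 3.26156   # light years in one parsec

distance_ly = 4.93 * LY_PER_PARSEC   # parsecs -> light years
distance_km = 16 * LY_IN_KM          # light years -> kilometres

print(round(distance_ly, 1))   # ~16.1 light years
print(f"{distance_km:.3e}")    # ~1.514e+14 km, i.e. ~151,400,000,000,000 km
```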
Gliese 832c has an Earth Similarity Index of 0.81, i.e. of all the planets we have detected out there, it appears to be the most similar to ours. It is in the ‘Goldilocks Zone’ or ‘Habitable Zone’ – where temperatures and conditions could be ‘just right’ for life as we know it to exist.
We would all be super-excited if it did have life and we could make alien contact, wouldn’t we? Prof. Hawking says he would be too, but he also wonders whether that might not be such a good thing.
Would you like to be visited by aliens who are millions of years ahead of us technologically, and whose IQs are several hundred times greater than ours? We would be completely at their mercy. Prof. Hawking has often warned us of the dangers of alien contact.
We would be at their mercy
We would have to accept that any extraterrestrial civilization that managed to zoom across the vast distances in space to come and visit us would be able to do whatever it wanted with us – we would be totally at their mercy.
If they were a benevolent civilization, that would be great. If they weren’t, we could all be doomed.
Prof. Hawking says on the video:
“As I grow older I am more convinced than ever that we are not alone. After a lifetime of wondering, I am helping to lead a new global effort to find out.”
“The Breakthrough Listen project will scan the nearest million stars for signs of life, but I know just the place to start looking. One day we might receive a signal from a planet like Gliese 832c, but we should be wary of answering back.”
Prof. Hawking has warned us several times about the dangers of being too inviting to whatever life forms there might be out there. When the Breakthrough Listen project was launched last year – with his help – he warned that any alien that we did manage to hear would probably not want to kill us – in fact, they would most likely ignore us completely.
Could the story of Little Red Riding Hood and the Big Bad Wolf be repeated with ‘The Little Human Beings and the Big Bad Aliens’?
If you were driving along a country road on your way to visit a relative or friend, would you stop to inspect a tiny anthill along the way, or would you continue driving? For super-advanced aliens traveling through space at warp speed, Earth might seem as insignificant as that small anthill.
Artificial intelligence could wipe us out
There are many dangers that could come our way from space and wipe us out, including aliens, asteroids, comets, or stray planets. However, there are also things of our own making that could destroy us completely.
We are currently at the threshold of artificial intelligence. Artificial intelligence is defined as the science of making computers do things that require intelligence when done by humans.
We are starting to make robots that can learn and update periodically. Soon they will be able to update/upgrade on their own. When an artificial intelligence (AI) can upgrade itself every six months, it will advance at thousands of times the speed of human evolution.
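The speed of that advance comes from compounding. As a purely illustrative sketch (the doubling rate and six-month cycle are assumptions for the example, not established figures), suppose an AI doubled its capability with each self-upgrade:

```python
# Illustrative only: compounding growth under an ASSUMED
# six-month upgrade cycle that doubles capability each time.
UPGRADES_PER_YEAR = 2  # assumed: one self-upgrade every six months

def capability_after(years, factor_per_upgrade=2):
    """Relative capability after `years`, starting from 1."""
    return factor_per_upgrade ** (years * UPGRADES_PER_YEAR)

print(capability_after(5))    # 2**10 = 1024-fold in five years
print(capability_after(10))   # 2**20 = 1,048,576-fold in a decade
```

Even with modest per-cycle gains, repeated self-upgrading compounds far faster than biological evolution, which is the point Prof. Hawking is making.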
Breakthrough Listen, which Prof. Hawking helped set up, is the largest-ever scientific research program aimed at finding evidence of alien civilizations. According to its website: “The program includes a survey of the 1,000,000 closest stars to Earth. It scans the center of our galaxy and the entire galactic plane. Beyond the Milky Way, it listens for messages from the 100 closest galaxies to ours. The instruments used are among the world’s most powerful. They are 50 times more sensitive than existing telescopes dedicated to the search for intelligence.” (Image: breakthroughinitiatives.org)
Within one or two hundred years, there could be robots that are much more intelligent than humans, and their IQs would be increasing exponentially as they upgrade themselves.
Even if all AIs were given a basic command to protect all human life, they could reach a point where they decide that the best way to protect us is by confining us to our homes or removing our eyes.
Prof. Hawking warns that AIs could eventually become so smart that they might accidentally kill us.
Last year, Prof. Hawking said:
“The real risk with AI isn’t malice but competence. A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
Earlier this year, a group of scientists from the University of Oxford warned that super-intelligent robots may one day destroy humankind. In order to reduce the risk of a global catastrophe, the scientists wrote:
“Research communities should further investigate the possible risks from emerging capabilities in biotechnology and artificial intelligence, and possible solutions.”
“Policymakers could work with researchers to understand the issues that may arise with these new technologies, and start to lay groundwork for planned adaptive risk regulation.”
Video – Stephen Hawking’s Favorite Places
In this Curiosity Stream video, ‘Commander’ Stephen Hawking pilots his spaceship, the SS Hawking, on an epic journey, zooming from black holes to the Big Bang, Saturn to Santa Barbara. After all, he says: “Why should astronauts have all the fun?”