Stop the Robots demonstration warns of artificial intelligence
A demonstration in Austin, Texas, led by computer engineer Adam Mason, warned that we must stop the robots and expressed concern that if we are not careful, artificial intelligence (AI) could eventually lead to the end of human civilization.
The small protest, involving just a couple of dozen demonstrators, was held during the South by Southwest Culture and Technology festival. Many of the demonstrators were computer engineering students from the University of Texas.
The Stop the Robots group could be heard chanting “I say robot, you say no-bot.”
A small group of protesters held signs and handed out T-shirts. (Image: Stop the Robots)
They were all wearing blue T-shirts quoting Elon Musk’s warning from a speech in October 2014: “With artificial intelligence we’re summoning the demon.”
Mr. Mason said the protest was about morality in computing. He said he was concerned about the rapid advances in technology and the risk of artificial intelligence escaping any form of human control in the near future.
Worry about the potential dangers of artificial intelligence is not limited to Mr. Mason and his fellow protesters.
Eminent scientists speak for and against AI
In February, the University of Oxford quoted eminent scientist Stephen Hawking, who feared that AI could eventually bring about the destruction of mankind.
In an interview with the BBC in December about an upgrade to the technology he uses to communicate, which involves a basic form of AI, Prof. Hawking said:
“The development of full artificial intelligence could spell the end of the human race.”
While primitive forms of AI developed so far have proved very useful for humans, he fears the consequences of creating something that may match or surpass humans.
“It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded,” he added.
Would we be at the mercy of an artificial intelligence that kept upgrading itself much more rapidly than the biological evolution of humans?
Entrepreneur, inventor, engineer and investor Elon Musk expressed similar concerns about AI in an interview with the Guardian in October 2014, declaring it the most serious threat to the survival of humans.
Mr. Musk said:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”
Leading AI researcher Rodney Brooks said regarding people’s concerns about AI:
“I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.”
Would AI systems eventually become sentient?
If an AI system kept upgrading itself until it had all key aspects of human intelligence, would it also be sentient (able to perceive and feel things)? Would it have a mind with conscious experiences?
If it became like us – as intelligent, self-conscious and with feelings – wouldn’t it have the same rights as humans? If it were not granted those rights, would it try to obtain them?
The advent of artificial intelligence brings with it many legal, ethical and moral questions we have not yet addressed.
Entrepreneur, programmer, and venture capitalist Sam Altman wrote in his blog:
“The US government, and all other governments, should regulate the development of SMI (superhuman machine intelligence). In an ideal world, regulation would slow down the bad guys and speed up the good guys—it seems like what happens with the first SMI to be developed will be very important.”
“The first serious dangers from SMI are likely to involve humans and SMI working together. Regulation should address both the case of malevolent humans intentionally misusing machine intelligence to, for example, wreak havoc on worldwide financial markets or air traffic control systems, and the “accident” case of SMI being developed and then acting unpredictably.”