Will Artificial Intelligence eventually prove a marvelous blessing or our worst nightmare? Will it transform civilization and improve our quality of life, or will it come to see us as an obstacle and get rid of us? What would an ultra-smart, coldly logical machine, one far more intelligent than we are, make of the way we treat animals and each other?
Cambridge University is launching a special centre to explore the opportunities and challenges that Artificial Intelligence (AI) will bring to humankind, thanks to a generous £10 million gift from the Leverhulme Trust.
Computer science is advancing at lightning speed. Today we are only scratching the surface of AI, which will eventually be able to analyze, learn and update itself, that is, to evolve on its own.
Scientists cannot accurately predict when human-level AI will be created, but most expect it to arrive by the end of this century. Like any super-intelligent being, such an AI would probably, in time, develop the ability to make judgments of its own.
AI evolution could leave us behind
Free of the biological constraints that limit human and animal brains, such an ultra-intelligent machine could evolve at an astonishing rate and would very rapidly become far smarter than we are.
Eminent English theoretical physicist, cosmologist, and author Stephen Hawking has often warned us about AI, which has the ability to upgrade and develop much more rapidly than humans can through natural evolution. He worries that AI could spell the end of our civilization, and perhaps even our very survival as a species.
Prof. Hawking once said:
“The development of full artificial intelligence could spell the end of the human race.”
If Artificial Intelligence became smarter than humans, what would this mean for us? Stuart Russell, the renowned AI scientist at the University of California, Berkeley, who is involved in setting up the University of Cambridge’s new centre, sees AI as “the biggest event in human history.”
Regarding what will happen when AI becomes smarter than humans, Prof. Hawking added:
“When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”
The Leverhulme Centre for the Future of Intelligence
Now, thanks to the Leverhulme Trust’s gift, a new interdisciplinary research centre will be set up by the University of Cambridge. It will be called The Leverhulme Centre for the Future of Intelligence.
The new centre’s goal will be to explore the challenges and opportunities, both good and bad, of this epoch-making technological development, over the short term as well as the long term.
The Centre will bring together social scientists, IT experts, philosophers and other specialists to study and discuss the technical, philosophical and practical questions that AI raises for all of us in the future.
The Director of the Centre, Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge, said:
“Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together.”
“Humans have barely begun to consider its ramifications – good or bad. The Centre is a response to the Leverhulme Trust’s call for ‘bold, disruptive thinking, capable of creating a step-change in our understanding.’”
The Trust’s donation was in response to a proposal the University developed with Dr Seán Ó hÉigeartaigh, Executive Director of the University’s Centre for the Study of Existential Risk (CSER).
CSER gathers and analyses data on the emerging risks to humanity’s future from warfare, climate change, disease, and technological advances.
What risks does ultra-smart AI pose?
Dr. Ó hÉigeartaigh said:
“The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”
The Leverhulme Centre for the Future of Intelligence spans institutions as well as disciplines. Led by the University of Cambridge, it is a collaboration that links Imperial College London, the Oxford Martin School at the University of Oxford, and the University of California, Berkeley.
It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH).
Prof. Price said regarding the proposal:
“A proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”
Prof. Zoubin Ghahramani, Deputy Director of the Centre and a Fellow of St John’s College, Cambridge, commented:
“The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks – from recognising images to translating between languages and driving cars.”
“We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”
The Centre aims to lead the international discussion of the future opportunities and challenges that AI poses for humankind.
Regarding having the new centre at Cambridge, Professor Price said:
“With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”
Elon Musk believes AI is a threat
In an interview with the Guardian last October, entrepreneur, inventor, engineer and investor Elon Musk said he was concerned about AI, which he believes could be the most serious threat to human survival.
Mr. Musk, founder of SpaceX and a co-founder of Zip2, PayPal, and Tesla Motors, said:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”