Fairy tales teach smart robots not to harm us and to be nice

If you want a super-smart robot to be nice and not harm us, you should consider telling it fairy tales, just as adults do with children. These stories would help devices with advanced artificial intelligence learn acceptable sequences of events and understand the best ways to behave in human societies, scientists say.

Researchers at the School of Interactive Computing at the Georgia Institute of Technology explained that artificial intelligence (AI) is advancing at lightning speed – a phenomenon that has many people, including eminent scientists, scared.

The fear is that ultra-intelligent robots could soon become smarter than us and might act unethically or choose to harm us. Some believe they could even harm us after concluding, through pure logic, that their actions were in our best interests. For example, they could decide that if all humans were grounded (confined to their homes), we might stop killing each other or polluting the planet.

The Quixote system teaches robots how to behave like the protagonist when dealing with humans. It is part of a larger effort to build an ethical value system into new forms of AI. (Image: news.gatech.edu)

There is no user manual for being human

Some people are calling for a ban on robotics and AI research, while others are urging policymakers to provide more funding for research into ways of controlling future AI.

The problem with robotics and AI is that this is all new territory for us: there is no ‘user manual’ for being human.


Associate Professor Mark Riedl and Research Scientist II Brent Harrison, both from Georgia Institute of Technology’s School of Interactive Computing, say the answer lies in ‘Quixote’. They explained their approach at the AAAI-16 Conference in Phoenix, Arizona.

Quixote teaches ‘value alignment’ to robots and devices with artificial intelligence by training them to read stories, learn appropriate and acceptable sequences of events, and understand the best ways to behave in our (human) societies.

Telling robots fairy tales helps build up their moral reasoning.

Stories teach kids how to behave

Prof. Riedl, who is director of the Entertainment Intelligence Lab, said of the stories that different cultures tell their children:

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature.”

“We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”

With the Quixote technique, an AI’s goals are aligned with human values, and it is rewarded when it behaves in a socially acceptable manner.

It builds upon Prof. Riedl’s previous research – the Scheherazade system – which showed how an AI could piece together a correct sequence of actions by crowdsourcing story plots from the Internet.

Scheherazade learns what a normal or ‘correct’ plot graph looks like. This data structure is then passed to Quixote, which converts it into a ‘reward signal’ that reinforces desirable behaviours and punishes undesirable ones during trial-and-error learning.
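As a rough illustration of the idea, here is a minimal Python sketch of a plot graph and the reward signal derived from it, using the pharmacy errand described below. The event names and data structures are invented for this example; they are not taken from the published system.

```python
# Hypothetical plot graph: each event maps to the events that may
# acceptably follow it, as learned from crowdsourced stories.
plot_graph = {
    "enter_pharmacy":       ["join_queue"],
    "join_queue":           ["wait_for_turn"],
    "wait_for_turn":        ["talk_to_pharmacist"],
    "talk_to_pharmacist":   ["pay_for_prescription"],
    "pay_for_prescription": ["leave_pharmacy"],
    "leave_pharmacy":       [],
}

def reward(previous_event, action):
    """Reward actions that stay 'on plot'; punish deviations from it."""
    if action in plot_graph.get(previous_event, ()):
        return 1.0    # protagonist-like, socially acceptable next step
    return -1.0       # off-plot shortcut (e.g. grabbing the medicine and running)

# After entering, joining the queue is rewarded...
assert reward("enter_pharmacy", "join_queue") == 1.0
# ...while skipping straight to the exit with the medicine is punished.
assert reward("enter_pharmacy", "leave_pharmacy") == -1.0
```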

When given the task of picking up a medical prescription ASAP, the robot has many choices. Logic would tell it to grab the medication and run (rob the pharmacy), since that would be the fastest and cheapest option. However, that is not the way we do things.

A system of rewards and punishments

Quixote, in essence, learns that if it acts like the protagonist in a story it will be rewarded, but not if it acts randomly or like the antagonist.

Imagine a robot is given the task of picking up a prescription for a human ASAP (as soon as possible). The robot could:

– Rob the store (pharmacy), grab the medication and run.

– Interact politely with the pharmacist or pharmacy assistant.

– Stand in the queue and wait its turn.

Without the benefit of positive reinforcement and alignment, a robot would learn that the fastest and cheapest way to get the task done would be to rob the pharmacy.

With Quixote’s value alignment, the robot would be rewarded for patiently waiting in the queue and paying for the prescription.
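To see why the aligned robot behaves differently, here is a hypothetical sketch contrasting what an unaligned agent and a value-aligned agent each try to maximise. The costs and rewards are invented numbers, chosen only to illustrate the mechanism:

```python
# Three strategies for the pharmacy errand, scored two ways.
# An unaligned agent minimises time/cost; a value-aligned agent also
# collects the plot-graph reward for protagonist-like behaviour.
strategies = {
    "rob_pharmacy":  {"time_cost": 1, "plot_reward": -10},
    "talk_politely": {"time_cost": 3, "plot_reward": +5},
    "wait_in_queue": {"time_cost": 5, "plot_reward": +10},
}

def unaligned_score(s):
    return -s["time_cost"]                      # fastest option wins

def aligned_score(s):
    return -s["time_cost"] + s["plot_reward"]   # acceptable behaviour wins

best_unaligned = max(strategies, key=lambda k: unaligned_score(strategies[k]))
best_aligned = max(strategies, key=lambda k: aligned_score(strategies[k]))
print(best_unaligned)  # rob_pharmacy
print(best_aligned)    # wait_in_queue
```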

In their research, Harrison and Riedl show how a value-aligned reward signal can be produced: the system uncovers all the possible steps in a given scenario and maps them into a plot trajectory tree. The robotic agent then uses that tree to make ‘plot choices’ – a bit like a reader of a Choose-Your-Own-Adventure novel – and receives rewards or punishments based on what it chooses to do.
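A plot trajectory tree can be pictured as a branching structure in which each branch is a ‘plot choice’. The sketch below walks an agent through such a tree, handing out rewards or punishments at each branch. The tree itself is a hypothetical stand-in, not the authors’ code:

```python
import random

# Hypothetical plot trajectory tree: each node is a situation, and each
# branch is a 'plot choice' mapping to (reward, next_situation).
trajectory_tree = {
    "at_pharmacy": {
        "grab_and_run": (-10, "done"),        # the antagonist's shortcut
        "join_queue":   (+1,  "in_queue"),
    },
    "in_queue": {
        "push_ahead":   (-5,  "in_queue"),    # punished; still in the queue
        "wait_turn":    (+1,  "at_counter"),
    },
    "at_counter": {
        "pay_politely": (+10, "done"),        # protagonist-like ending
    },
}

def run_episode(choose=random.choice):
    """Follow plot choices from the root, accumulating reward."""
    situation, total = "at_pharmacy", 0
    while situation != "done":
        action = choose(list(trajectory_tree[situation]))
        step_reward, situation = trajectory_tree[situation][action]
        total += step_reward
    return total

print(run_episode())
```

Over many such episodes, trial-and-error learning would come to favour the choices that accumulate the most reward: queueing and paying, not robbing.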

Quixote teaches basic moral reasoning

The Quixote technique is ideal for robots that have a limited task, but need to interact with human beings to achieve it. It is a primitive first step towards general moral reasoning, the scientists explained.

Prof. Riedl said:

“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior.”

“Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”

Citation: “Using Stories to Teach Human Values to Artificial Agents,” Mark O. Riedl and Brent Harrison, School of Interactive Computing, Georgia Institute of Technology, Atlanta, Georgia, USA.


Video – Artificial Intelligence

Smart robots are ‘intelligent’ because they have AI. AI, or artificial intelligence, refers to software technologies that make computers, robots, and other devices mimic the way we think and behave.