Should there be a limit on what machines decide? Is it right for a machine to decide how long somebody stays in prison? Should machines decide when or whether you get a heart transplant?
We are using artificial intelligence in both everyday life and life-altering decisions. This brings up complex questions regarding accountability, privacy, and fairness.
Many people are uncomfortable with the idea of surrendering human authority to machines. On the other hand, artificial intelligence technologies could help us move beyond human biases and make better use of our limited resources.
What should machines decide? A research project
At Princeton University, there is an interdisciplinary research project that tries to address these issues. Princeton Dialogues on AI Ethics brings scholars, philosophers, ethicists, policymakers, and engineers together to talk about the ethics of AI.
Ed Felten, Director of CITP (Princeton’s Center for Information Technology Policy), observed the project’s first workshop in the fall of 2017.
After watching experts from different disciplines share ideas, Prof. Felten made the following comment:
“(It was) like nothing I’d seen before. There was a vision for what this collaboration could be that really locked into place.”
A joint venture
The project is a joint venture of the University Center for Human Values and CITP. Director Melissa Lane, the Class of 1943 Professor of Politics, said the following regarding the University Center for Human Values:
“(It serves as) a forum that convenes scholars across the University to address questions of ethics and value in diverse settings.”
Prof. Felten, Princeton’s Robert E. Kahn Professor of Computer Science and Public Affairs, said:
“Our vision is to take ethics seriously as a discipline, as a body of knowledge, and to try to take advantage of what humanity has understood over millennia of thinking about ethics, and apply it to emerging technologies.”
Prof. Felten added that AI systems can be an opportunity to achieve better outcomes with less risk and bias. However, they must be implemented carefully. “It’s important not to see this as an entirely negative situation.”
When machines decide, what happens to accountability?
Ethical knowledge is critical for decisions regarding artificial intelligence technologies, which can affect people’s lives faster, and on a larger scale, than many previous innovations. There is a risk that they influence outcomes without sufficient accountability.
Prof. Felten cited the use of AI to make assessments or predictions in the criminal justice system. Decisions about parole, prison sentencing, and bail have traditionally been made by human beings, specifically by judges.
According to a Princeton University press release:
“One major question is whether AI systems should be designed to reproduce the current human decision patterns, even where those decision patterns are known to be profoundly affected by various biases and injustices or should seek to achieve a greater degree of fairness. But then, what is fairness?”
Prof. Lane explained that philosophers have always known that fairness can be viewed in different ways. “But we haven’t always been pressed to work out the implications of committing to one view and operationalizing it in the form of an algorithm. How do we evaluate those choices in a real-life setting?”
Case studies
The project released its initial case studies in May 2018; they are available under a Creative Commons license for public use. Although they are based on real-world situations, the details of how the machines decide have been fictionalized for study purposes.
The case studies look at several applications of AI technologies and the ethical dilemmas that emerged.
The case studies are intended as starting points for conversations about AI ethics in classroom settings, as well as among policymakers and practitioners.
Prof. Lane added:
“We also are very conscious of the society-wide, systemic questions — the questions about monopoly power, the questions about privacy, the questions about governmental regulation.”
The project aims to build a new field of research and practice. Other universities are also collaborating: Stanford and Harvard participated in joint conferences in the fall of 2018.
What is artificial intelligence?
AI (artificial intelligence) includes software technologies that make machines, such as robots or computers, act and behave like humans. Some engineers insist that a system qualifies as AI only when it performs at least as well as a human, where “perform” refers to human computational speed, accuracy, and capacity.
Today, thanks to AI, machines make decisions about people in ways that can significantly affect our future.