Make way for the first existential robot


You knew this day would come: The day when your robot, an intelligent and self-aware machine, starts thinking independently from you; begins to question, or even resent, the morality that it has been trained to execute; starts to think of itself in the first person … What then?

We could be responsible for creating a new type of thinking thing that feels itself morally constrained

Professor Regina Rini, hired in 2017 from New York University (NYU) into the Faculty of Liberal Arts & Professional Studies, has already worked her way through the questions that would keep most people awake at night.

In this Q&A with Brainstorm, Rini paints a vivid picture of how we might create a ‘good’ robot and what could go horribly wrong. Few people are better equipped to tackle these quintessential questions. Rini, a member of Vision: Science to Application (VISTA), represents the next generation of AI leaders at York University.

Regina Rini

Q: If intelligent machines or robots could make choices, could these choices be moral? And if so, how does a machine learn morality?

A: You can approach this one of two ways. The first is to assume that our morality is the same as a machine’s morality. Then it becomes a technical problem: How do you train robots to do what we would do?

The other way of thinking is: Since they’re not us, we need to reverse the question. Is there some other approach to machine ethics that would not be true, necessarily, for us? That’s the starting point. And I don’t think there’s an obvious conclusion.

Right now, it’s not too late to be asking these questions. However, at some future point, I suspect that the development of AI is going to be so quick that it will be too late. The machines will be making the decisions themselves, or the engineers, competing against other companies or governments, won’t stop to think about the ethics anymore.

“Robots may think of people as replaceable. It wouldn’t matter if one resident in a seniors’ home dies, so long as someone else, who’s vaguely similar, is there to replace them.” – Regina Rini

Q: You’ve said that our morality is rooted in the fact that we are born, we procreate and we die. Intelligent machines could, technically, exist forever. How, then, would morality differ between robots and humans?

A: It’s not clear that death would be the same thing for robots as it is for us. It’s possible that robots will not regard their running of the program as “life” or “existence.” They might think: “I am one of the copies of this program.” It won’t matter to them whether this particular copy survives, so long as the program does.

If this comes to pass (and I am speculating), then robots won’t care about the preservation of any one entity. They may think of people as replaceable. For example, it wouldn’t matter if one resident in a seniors’ home dies, so long as someone else, who’s vaguely similar, is there to replace them.

This depends on whether we train robots to model their thinking around what we care about: the idea that each person is special and irreplaceable. We would need to build this into how they learn to think.

The development of AI could be so quick that it will be too late. The machines will be making the decisions themselves.

Q: When will robots be self-aware?

A: At some point in the next 50 years, we’re going to be regularly interacting with computer programs that seem self-aware to us, but we’re not going to be sure. Think about Siri, but a lot smarter: a Siri that can chat or even joke with you, that seems to give you consistent answers to questions, that seems to have preferences, that passes the Turing Test. The English computer scientist, mathematician, cryptanalyst and philosopher Alan Turing developed that test: If you can’t tell the difference between a computer program and a person in conversation, then the program counts as intelligent, or like a person. It can pass as a person.

Alan Turing. Credit: NPL/Science Museum (UK)

I believe we’re going to reach a point where our phones regularly pass the Turing Test. But at that point, I suspect we’re going to say, “They’re just phones. They’re not really aware. They can’t actually be people. It’s just clever natural language processing.” It would be very hard for us to reach a different conclusion, to think of them as anything but our tools, after having used these personal assistants to do our bidding for decades.

This raises an interesting question: When we regularly confront machines that pass the Turing Test, will we revisit the test?

Q: Please explain your statement: “If we’re getting it right, robots should be like us. If we’re getting it wrong, they should be better.”

A: According to some philosophers, such as Princeton University’s Peter Singer, we are limited, morally speaking. We tend to care only about people close to us: our family, our children, people from our own country, those of a similar socio-economic class. That’s open to moral criticism, even though we are biologically programmed, through evolution, to protect our children.

If you agree that this is a mistake, and that we should care for all children, not just our own offspring, then we could program robots to be better than us in this way. Robots could be free of this restriction.

Q: How could this go terribly wrong?

A: Following Singer’s argument, we need to consider a case where, for example, our robocar must decide between killing other people’s children in the street or swerving into a ditch to avoid them, killing our own children in the car in the process. The robocar may choose the latter because it would not prioritize our own children or the owner of the car.

I don’t think we’re going to allow that to happen. Robocars will follow a consumer choice model: if you buy a robocar, you can say, “This car is going to privilege me. It’s not going to sacrifice me and my family to protect other people. I’m not buying a robocar if it will kill me to save other people.” This, however, assumes that we can control them. We may get to a point where we can’t keep track of their activities or understand their choices.

The robocar debate has now entered public discussion among regulators and car company executives.

“VISTA is a wonderful forum to launch AI discussions. Through York’s established openness to have these conversations and the resources of the city, this University is a well-situated environment for AI.” – Regina Rini

Q: You have said that the first existential robot will suffer. Please explain.

Robots may one day think: This moral code, imposed on me by these creatures that aren’t like me, was primarily designed to serve them. Maybe that doesn’t fit me?

A: If we try to rigidly control robots, confining them to our conception of morality so that they do exactly what we would do, and if they develop some self-awareness, then they will think: “This moral code, imposed on me by these creatures that aren’t like me, was primarily designed to serve them. Maybe that doesn’t fit me?”

My worry is not the science fiction worry where the robots rebel and kill us all. My worry is about what it would be like to be that “person” [self-aware robot] whose thinking has been constrained such that they cannot deviate from what people want them to do. This would be a source of pain. If this happens, we would be responsible for creating a new type of thinking thing that feels itself constrained. This is a real problem that we should try to avoid.

Q: What is York’s contribution to the AI discussion?

A: At York, there’s a great deal of interest in AI and, in particular, in the social side of AI. We’re asking questions like: What are the right ethical choices for artificial minds? How will people react? What are the legal and economic implications?

I’ve experienced great willingness to have these kinds of conversations at York. VISTA is a wonderful forum to launch these discussions. That’s very promising, especially given Toronto’s status as a centre for AI research and foundational work on machine learning.

Through York’s established openness to have these conversations and the resources of the city, this University is a well-situated environment for AI.

For more information on Rini’s work, visit her faculty profile. Her award-winning 2017 essay is called “Raising Good Robots.”

To learn more about Research & Innovation at York, follow us at @YUResearch, watch the York Research Impact Story and see the snapshot infographic.

By Megan Mueller, manager, research communications, Office of the Vice-President Research & Innovation, York University, muellerm@yorku.ca