De-escalating robocops? York study imagines future of crisis response 


By Corey Allen, senior manager, research communications

Picture this: a 911 operator in your city receives a call from a person in mental distress and needs to send help.  

The operator could dispatch police, or an integrated unit of police and mental health professionals. Instead, they send a robot.

This scenario may sound like science fiction, but it’s the kind of futuristic thinking that has researchers at York University considering all angles when it comes to artificial intelligence (AI) and crisis response.   

Building more empathetic bots through interdisciplinary research  

In a paper published in Applied Sciences earlier this year, psychology PhD candidate Kathryn Pierce and her co-authors explore the potential role robots could play in crisis de-escalation, as well as the capabilities engineers would need to build into them to make them effective.

The visionary paper is part of a larger project at the Lassonde School of Engineering that involves early-stage research to design and test robots to assist in security and policing tasks. The York engineers asked the psychology researchers to bring a social-scientific lens to their forward-looking work on humanizing machines.

“De-escalation is not a well-researched topic and very little literature exists about what de-escalation really looks like moment by moment,” says Pierce, who is supervised by Dr. Debra Pepler, a renowned psychologist and Distinguished Research Professor in the Faculty of Health. “This makes it difficult to determine what kinds of behavioural changes are necessary in both responders and the person in crisis to lead to a more positive outcome.”   

No hard-and-fast rules for de-escalation, for humans or robots

With limited academic understanding of what really happens in human-to-human interactions during a crisis response, let alone robot-to-human ones, training a robot to calm a person down is a tall order.

Despite the challenge, Pierce and her co-authors were able to develop a preliminary model outlining the functions a robot should theoretically be able to perform for effective de-escalation. These functions are made up of verbal and non-verbal communication strategies that engineers would need to be mindful of when building a robot for such a task.    

Some of these strategies include a robot’s gaze – the way a machine and human look at one another – the speed at which it approaches (slow and predictable), and the sound and tone of its voice (empathetic and warm).
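To make this concrete, here is a minimal, purely hypothetical sketch in Python of how such strategies might be represented as tunable parameters. The parameter names, values and the adjustment rule are invented for illustration; the paper describes qualitative strategies, not a concrete implementation.

```python
from dataclasses import dataclass

# Hypothetical illustration only: all names and values below are assumptions,
# not drawn from the paper or any real robotics API.

@dataclass
class DeEscalationBehaviour:
    """One configurable behaviour profile for a responder robot."""
    gaze_contact_ratio: float   # fraction of time the robot holds eye contact
    approach_speed_mps: float   # metres per second; slow and predictable
    voice_pitch_hz: float       # lower, warmer pitch for an empathetic tone
    voice_volume_db: float      # kept soft to avoid heightening agitation

def calmer_variant(b: DeEscalationBehaviour) -> DeEscalationBehaviour:
    """Return a gentler version of a profile, e.g. if agitation rises."""
    return DeEscalationBehaviour(
        gaze_contact_ratio=max(0.2, b.gaze_contact_ratio - 0.1),  # soften gaze
        approach_speed_mps=min(b.approach_speed_mps, 0.3),        # slow down
        voice_pitch_hz=b.voice_pitch_hz * 0.95,                   # lower pitch
        voice_volume_db=b.voice_volume_db - 3.0,                  # speak softer
    )

baseline = DeEscalationBehaviour(0.5, 0.5, 180.0, 55.0)
print(calmer_variant(baseline))
```

Even this toy version hints at the researchers’ central caveat, discussed next: fixed numbers and rules sit uneasily with an interaction that has no fixed rules.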

But, as the researchers point out, ultimately, robots cannot be “programmed in a fixed, algorithmic, rule-based manner” because there are no fixed rules for how people calm each other.   

“Even if there were algorithms governing human-to-human de-escalation, whether those would translate into an effective robot-to-human de-escalation is an empirical question,” they write.  

It is also difficult to determine whether people will respond to robots emulating human behaviour the same way they would to an actual person.

Advances in AI could add new layer of complication to the future of crisis response  

In recent years, non-police crisis response services have attracted growing attention in cities across North America and elsewhere in the world.

Advocates for replacing traditional law enforcement with social workers, nurses or mental health workers – or at least integrating these professionals with police units – argue that these models lead to better outcomes.

Research published earlier this year showed that police responding to people in mental distress use less force if accompanied by a health-care provider. Another study found that community responses were more effective for crime prevention and cost savings.  

Introducing robots into the mix would add to the complexity of crisis response services design and reforms. And it could lead to a whole host of issues for engineers, social scientists and governments to grapple with in the future. 

The here and now 

For the time being, Pierce and her co-authors see a machine’s greatest potential in video recording. Robots would accompany human responders on calls to film the interaction. Responders could then review the footage to reflect on what went well and what could be improved.

Researchers could also use this data to train robots to de-escalate situations more like their human counterparts.    

Another use the researchers theorize for AI surveillance is training robots to identify individuals in public who are exhibiting warning signs of agitation, allowing police or mental health professionals to intervene before a crisis point is ever reached.
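A rough sketch of that idea might look like the following, where the robot only flags possible agitation and defers to human responders. The scoring function, threshold and alerting logic are assumptions made for illustration, not anything specified in the paper.

```python
import random  # stand-in for a real perception model

# Hypothetical early-warning loop: the robot never intervenes itself;
# it only notifies human responders. Threshold and scores are invented.

AGITATION_THRESHOLD = 0.8  # assumed alert cutoff

def agitation_score(frame_id: int) -> float:
    """Placeholder for a trained model estimating agitation from video."""
    return random.random()

def monitor(frames: range) -> None:
    for frame in frames:
        score = agitation_score(frame)
        if score >= AGITATION_THRESHOLD:
            # Escalate to humans: police or mental health professionals.
            print(f"frame {frame}: possible agitation (score={score:.2f}), "
                  "notifying human responders")

monitor(range(10))
```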

While a world in which a 911 operator dispatches an autonomous robot to a crisis call may be hard to imagine, Pierce and her co-authors do see a more immediate, realistic line of inquiry for this emerging area of research.

“I think what’s most practical would be to have engineers direct their focus on how robots can ultimately assist in de-escalation, rather than aiming for them to act independently,” says Pierce. “It’s a testament to the power and sophistication of the human mind that our emotions are hard to replicate. What our paper ultimately shows, or reaffirms, is that modern machines are still no match for human intricacies.”  

Background  

The paper, “Considerations for Developing Robot-Assisted Crisis De-Escalation Practice,” was co-authored by Pierce and Pepler, along with Michael Jenkin, a professor of electrical engineering and computer science in the Lassonde School of Engineering, and Stephanie Craig, an assistant professor of psychology at the University of Guelph.  

The work was funded through the Canadian Innovation for Defence Excellence and Security (IDEaS) Innovation Networks program.