What will it take for robots to earn our trust? The WASP NEST project PerCorSo explores how to make robots safe, reliable, and socially aware, focusing on trust, risk, and timing in human-robot interaction.
“If people don’t perceive robots as safe or appropriate, they will not want to interact with them. We need robots to do dirty, dull and dangerous jobs. The humans that work alongside these robots will need to trust them and perceive them as safe to be able to use them to the best of their potential”, says Jana Tumova, associate professor at KTH, where we visit her lab at the Division of Robotics, Perception and Learning.
Jana Tumova is the main principal investigator (PI) of PerCorSo, together with Iolanda Leite, who is also an associate professor in the same department. The PerCorSo project spans several sub-projects covering safety-related aspects such as risk, trust and timing in robotics.
Risky business
Within the scope of the PerCorSo project, Anna Gautier and Rebecca Stower, both postdoctoral researchers at the Division of Robotics, Perception and Learning at KTH, work together on a sub-project called Risky Business that looks into risk-taking and trust in human-robot interaction.
Stower has a background in psychology and human-robot interaction, while Gautier’s background is in multi-agent systems within mathematics – another example of the truly interdisciplinary nature of the PerCorSo project.
“I am especially interested in robot failures. Even though we are making huge technical advances in the robotic systems that we are designing and building, and working with amazing engineers, robots will still fail. It is impossible to guarantee a perfect interaction with a robot”, says Rebecca Stower.
In Risky Business, they ran an experiment in which a human participant had to cooperate with robot arms to stack a tower of blocks – an intentionally simple setting.
Each participant interacts with two different robots: one is consistently mediocre at its job, while the other is occasionally great and occasionally quite poor, Gautier explains:
“What we are really trying to test is if humans are interested in working with the consistently mediocre robot, or are they willing to take risks with the robot that is occasionally great, but occasionally quite poor?”.
They found differences in how much people trusted the robots depending on their behavior, and in particular an interaction with whether people expected the robot to succeed or fail.
“When a robot was consistently performing well, people were more willing to take a risk with it. They trusted it more as opposed to when it was consistently performing lower. Even when this risk might mean that they still have a complete failure in the interaction”, Rebecca Stower says.
Good decisions at the right time
“It is a bit chilly outside, do you have something warm on the menu?”.
At the KTH Division of Robotics, Perception and Learning, Ermanno Bartoli, a PhD student in PerCorSo, speaks to a robot called Ari that is acting as the waiter of a restaurant.
It takes the robot a while to answer, and then it offers some suggestions from the menu.
“My idea is that we want robots to do the right things and make good decisions. But I think we also want robots to make good decisions at the right time. Like – when should robots do some specific actions?”, Ermanno Bartoli says.
In another PerCorSo sub-project, Ermanno Bartoli investigates how humans perceive different lengths of response time from robots. Everything starts from analysing how two humans speak to each other: neither has to wait long for an answer from the other, he explains.
“But when we want robots to be robust and able to answer correctly, they usually have to do heavy computations to come up with an answer, which can take time. So what happens during this interval of time when the human is waiting to get an answer from the robot? Can we speed up this process? And more importantly, is speeding up something beneficial for humans?”, Ermanno Bartoli says.
They compared different conditions and found that, when humans had to choose between the fastest and the slowest robot, the interaction was rated as more beneficial when the response time was faster.
“Humans don’t associate a slower response with a ‘good human-robot interaction’”, Ermanno Bartoli concludes.
Safety can be about provable mathematics or feelings
The PerCorSo project came to life when Jana Tumova and Iolanda Leite, who come from two very different research backgrounds, started talking. Tumova works on formal methods for robot planning and control, whereas Leite’s background is in social robotics and human-robot interaction, with a strongly human-centric perspective.
As the project grew, they brought in two other PIs: professors Joakim Gustafson, an expert in multimodal interaction, and Patric Jensfelt, an expert in perception – both at KTH.
“When we started talking, we realised that the word safety has very different meanings in our communities. For me, safety is something that I can prove or disprove and mathematically capture as a rigorous thing. For Iolanda Leite, safety may be something like perceived safety”, says Tumova.
There is a huge gap between provable safety and perceived safety, and that is what they started exploring together in the PerCorSo project.
“In the biggest possible sense, we want to create socially capable robots that can work with people, exist alongside people, and be there for people. So, it is this human-centric vision that we have in mind. But, we want to do that properly. We want to do that with an understanding of the whole technology behind it, of the math, of the rigour. We want to know why our robots work the way they work”, Jana Tumova says.
Interview with Anna Gautier, Rebecca Stower and Ermanno Bartoli
Read more about PerCorSo
Published: December 9th, 2024