South Coast Graduate Student Creates "Singing Robot" To Study Robot-Human Communication

Oct 12, 2018

Whether you use Apple’s Siri, Amazon’s Alexa or a Roomba vacuum, robots are becoming an everyday part of our lives. As technology advances, robots will become even more commonplace. That’s why a South Coast graduate student is trying to improve robot-human communication.

Meet ROVER. It's a six-foot-tall robot that uses heat sensors to detect people and rolls toward them. Then, ROVER sings a song. The robot was built by Hannah Wolfe, a doctoral student in the Media, Arts and Technology Program at UC Santa Barbara.

“I was looking at human-robot interaction and how we can communicate emotions and ideas nonverbally with technology,” she said.

Wolfe said such communication will become increasingly important.

“We’re going to be interacting with this technology, and we need to figure out a way that we can interact with it in a fluid, easy way. We want it to be a pleasant experience. And, we want it to be a user-friendly experience. We want to make people’s lives better,” she said.

Wolfe created an algorithm that generates both happy and sad sounds with a mix of beeps and chirps. In her study, she analyzed people’s reactions to ROVER making these sounds.

“I think the happy sound definitely made me feel happy. It made me smile. The sad sound – I’m not sure I interpreted as sad – but it did make me feel a little bit more calm than the happy sound. So, it worked a bit for me,” said Sara Lafia, a UCSB graduate student studying geography.

Solen Kiratli, a graduate student in UCSB’s Media, Arts and Technology Program, had a similar reaction.

“I didn’t get very much affected by the sad sounds, I must say. But the happy sounds were a bit uplifting,” she said.

While neither Lafia nor Kiratli participated in the study, their reactions were similar to Wolfe’s findings. Happier sounds made people feel happier.

“You can have robots communicate emotion through sound without words, and that is a valid interaction," Wolfe said.

For Lafia, it was a fascinating encounter.

“When it sings to me, I feel like I’m looking at it, but I’m not sure where to look. I’m not sure how to quite interact with it. It’s just an interesting experience for me as somebody just standing in the room to not really know where to look at it but feeling an emotional reaction to it,” she said.

Having that “emotional reaction” is exactly what Wolfe was hoping for. It shows that true communication was happening between a human and a robot. She says now the focus is on fine-tuning the robot sounds as a way to effectively communicate.

“A happy sound could mean ‘yes’ or ‘I agree with you.’ Or, a sad sound might mean ‘Don’t do that.’ Maybe you have a sound that means ‘Get out of my way’ and how do you communicate those things through sound. That’s where I’m looking at,” Wolfe said.

UCSB professor JoAnn Kuchera-Morin, Wolfe’s faculty advisor, says Wolfe is on the right track.

“We want to be able to approach communicating with it in such a way that it’s believable that we can actually communicate with this technology,” she said.