Familiar Acoustic Cues for Legible Service Robots

Georgios Angelopoulos, Francesco Vigni, Alessandra Rossi, Giuseppina Russo, Mario Turco and Silvia Rossi

Venue: 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2022)

Date: August 31, 2022

Abstract

When navigating in a shared environment, the extent to which robots can effectively use signals to coordinate with human behavior can reduce dissatisfaction and increase acceptance. In this paper, we present an online video study investigating whether familiar acoustic signals can improve the legibility of a robot’s navigation behavior. We collected the responses of 120 participants to evaluate their perceptions of a robot communicating with one of three non-verbal navigational cues: an acoustic signal, the same acoustic signal paired with a visual signal, and an acoustic signal at a different frequency. Our results showed a significant improvement in legibility when the robot used both light and acoustic signals to communicate its intentions, compared to using the acoustic signal alone. Additionally, our findings highlighted that people perceived the robot’s intentions differently when they were conveyed by sound alone at the two different frequencies. The results of this work suggest a paradigm that can support the development of mobile service robots in public spaces.
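
The paper does not include code; the sketch below is only an illustration of how the three cue conditions described in the abstract might be scripted on a mobile robot. The tone generation uses the Python standard library, while `blink_light` and `announce_turn` are hypothetical stand-ins for a real robot's light and audio interfaces, not APIs from the study.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # audio samples per second


def write_tone(path, frequency_hz, duration_s=0.6, volume=0.5):
    """Generate a pure sine tone and save it as a mono 16-bit WAV file."""
    n_samples = int(SAMPLE_RATE * duration_s)
    frames = bytearray()
    for i in range(n_samples):
        sample = volume * math.sin(2 * math.pi * frequency_hz * i / SAMPLE_RATE)
        frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "w") as wav:
        wav.setnchannels(1)           # mono
        wav.setsampwidth(2)           # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))


def blink_light(side):
    """Placeholder for a visual cue; a real robot would drive its LEDs here."""
    print(f"[light] blinking {side} indicator")


def announce_turn(side, frequency_hz=880, with_light=True):
    """Announce an upcoming turn with an acoustic cue, optionally paired with light."""
    cue_path = f"turn_{side}.wav"
    write_tone(cue_path, frequency_hz)
    print(f"[audio] playing cue {cue_path} ({frequency_hz} Hz)")
    if with_light:
        blink_light(side)


if __name__ == "__main__":
    # The three cue conditions, in the spirit of the study:
    announce_turn("left", frequency_hz=880, with_light=False)  # acoustic only
    announce_turn("left", frequency_hz=880, with_light=True)   # acoustic + light
    announce_turn("left", frequency_hz=440, with_light=False)  # different frequency
```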