ARIA-VALUSPA: Artificial Retrieval of Information Assistants – Virtual Agents with Linguistic Understanding, Social Skills, and Personalised Aspects


Project start: 01.01.2015
Duration: 3 years
Funded by: EU (Horizon 2020)
Scientifically responsible: Prof. Dr. Elisabeth André
Involved researchers at our lab: Dr. Tobias Baur
Project partner: University of Nottingham

About the project

The ARIA-VALUSPA project aims to create a groundbreaking new framework that makes it easy to build Artificial Retrieval of Information Assistants (ARIAs): virtual agents capable of handling multimodal social interaction even in challenging and unexpected situations. During an interaction with a human, the system generates search queries from the dialogue and presents the corresponding information through virtual characters. These characters can sustain a conversation with a human over a longer period of time and react appropriately to the user's verbal and non-verbal behaviour while presenting the results of the search queries.

Video and audio signals are used to record and process both the verbal and the non-verbal components of human communication. Based on a rich, realistic model of emotion and personality, a dialogue management system decides how to react to user input, which may be speech, a head nod, or a smile. ARIAs use expressive speech synthesis to produce natural, emotional speech and an expressive 3D face to underline the chosen answers. Nodding to signal that the ARIA has understood what the user is saying, or responding to the user's smile, are just two of the many ways in which ARIAs display a wide range of emotional and social signals to enhance human-agent interaction.
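The interaction loop described above (multimodal input, a dialogue-management decision, multimodal output) can be illustrated with a minimal sketch. This is purely hypothetical pseudocode for exposition; the class and function names (`UserInput`, `decide_response`) are invented here and are not part of the actual ARIA-VALUSPA framework:

```python
from dataclasses import dataclass

# Hypothetical sketch of the perceive-decide-act loop of an ARIA.
# None of these names come from the real ARIA-VALUSPA software.

@dataclass
class UserInput:
    speech: str = ""        # recognised utterance, empty if none
    head_nod: bool = False  # non-verbal signal: head nod detected
    smile: bool = False     # non-verbal signal: smile detected

def decide_response(inp: UserInput) -> dict:
    """Map multimodal user input to a multimodal agent response:
    spoken text plus non-verbal feedback (backchannel nod, facial
    expression rendered on the 3D face)."""
    response = {"speech": "", "nod": False, "expression": "neutral"}
    if inp.speech:
        # A real ARIA would turn the utterance into a search query
        # here and verbalise the retrieved information.
        response["speech"] = f"Here is what I found about '{inp.speech}'."
        response["nod"] = True  # backchannel: signal understanding
    if inp.smile:
        response["expression"] = "smile"  # mirror the user's smile
    return response
```

For example, `decide_response(UserInput(speech="weather", smile=True))` yields a spoken answer together with a nod and a smiling expression, illustrating how verbal and non-verbal channels are combined in a single response.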