What is the best way to understand the strategies of self-learning AI agents?

The Artificial Intelligence Journal recently accepted a scientific article written by our colleagues Tobias Huber, Katharina Weitz, and Prof. Dr. Elisabeth André from the Chair of Human-Centered Artificial Intelligence, together with researcher Ofra Amir from the Technion Israel Institute of Technology. The journal is one of the most prestigious venues for research on artificial intelligence. Our article, "Local and global explanations of agent behavior: integrating strategy summaries with saliency maps," focuses on Explainable Artificial Intelligence.
This research area investigates how the internal workings of modern AI systems can be made understandable to users. In our research, we focus on agents trained with reinforcement learning algorithms. These agents learn from rewards they receive after interacting with their environment. The reward that such a reinforcement learning agent expects from an action may only arrive some time after the action was taken. This delayed feedback makes the behavior of reinforcement learning agents particularly difficult to comprehend.
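To illustrate the delayed-reward problem, the following sketch (not taken from our paper; the environment, constants, and code are purely hypothetical) trains a tabular Q-learning agent on a tiny chain world in which the only reward appears at the final state, so the value of early actions is learned only indirectly over many updates.

```python
import random

N_STATES = 5          # states 0..4; reward is given only on reaching state 4
ACTIONS = [0, 1]      # 0 = stay, 1 = move right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move right only for action 1; reward 1.0 only when the goal is reached."""
    next_state = min(state + action, N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: the delayed reward reaches early states only
        # through the bootstrapped value of later states.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Values for "move right" shrink with distance to the reward, showing how
# weakly early actions are tied to the eventual payoff.
print({s: round(Q[(s, 1)], 3) for s in range(N_STATES)})
```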
In our paper, we investigate for the first time how combining global information about a reinforcement learning agent's strategy with local explanations of its individual decisions affects users. Our user study found that the global information about the agent's strategy was more useful to users. The results also suggest ways to make explanations of individual decisions more effective when combined with global information in the future.
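As an illustration of what a local explanation of a single decision can look like, the sketch below computes a simple gradient-based saliency map for one action of a small policy network. This is not the saliency method used in our paper; it assumes PyTorch is available, and the network and observation are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical small policy network over a 4-dimensional observation
# with two possible actions.
policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

obs = torch.tensor([0.1, -0.3, 0.8, 0.05], requires_grad=True)
logits = policy(obs)
chosen = logits.argmax()

# Gradient of the chosen action's score with respect to the observation;
# its magnitude serves as a saliency map over the input features.
logits[chosen].backward()
saliency = obs.grad.abs()
print(saliency)
```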
