Tobias Huber M.Sc.

Research assistant
Chair for Human-Centered Artificial Intelligence
Phone: (+49)(0)821 – 598 2336
Room: 2015 (N)
Address: Universitätsstraße 6a, 86159 Augsburg

Research interests

My research focuses on the explainability of Artificial Intelligence and Reinforcement Learning in particular.

I find it fascinating how Reinforcement Learning algorithms can independently develop strategies based only on observations and rewards. In some cases, agents even develop strategies that humans have not yet considered (e.g., the chess computer AlphaZero). However, since only the agents' goal is defined, it is often not clear what exactly the learned strategies look like. This is exacerbated by the use of modern machine learning techniques, which achieve considerable success but are also very opaque.

The goal of my research is to develop new algorithms that make the behaviour of intelligent agents explainable to users and thus facilitate cooperation between humans and computers.
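To illustrate the learning-from-reward-only setting described above, here is a minimal tabular Q-learning sketch on a hypothetical toy corridor environment (this example is for illustration only and is not taken from any of the publications below). The agent sees only its state and a reward signal, yet ends up with a learned strategy, and the resulting Q-table is exactly the kind of artifact that explainability methods try to make transparent:

```python
import random

def train_q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a toy corridor: the agent starts in state 0,
    action 1 steps right, action 0 steps left, and the only reward (1.0)
    is given for reaching the rightmost state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q-values: q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action choice (ties broken randomly)
            if rng.random() < epsilon or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = q[s].index(max(q[s]))
            s_next = s + 1 if a == 1 else max(0, s - 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # standard Q-learning update toward reward plus discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

q = train_q_learning()
# the learned "strategy": the greedy action per non-terminal state (1 = go right)
policy = [q[s].index(max(q[s])) for s in range(len(q) - 1)]
```

Note that nothing in the code spells out the strategy "always go right"; it emerges purely from the reward signal, which is why, in larger settings with deep networks instead of a table, extracting a human-readable explanation becomes a research problem in its own right.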



Talks

Explainable Deep Reinforcement Learning. Invited talk at the Machine Learning Reading Club of the Computational Science Lab (CSL), University of Hohenheim, 02.02.2021. Slides.


Insights into final projects

You can watch and even play student projects from the game design lectures here.



Supervised theses

  • Design and Implementation of a Virtual Reality Environment for Testing the Implicit Learning of Sequential Motor Movements (Master, 2021, Link)
  • Using Reinforcement Learning to facilitate Implicit Learning in a VR Sports Simulation (Bachelor, 2021)
  • Understanding Subliminal Persuasive Body Language in Political Speeches via Explainable Artificial Intelligence (Bachelor, 2021)
  • Combining Reward Decomposition and Strategy Summarization for Explainable Reinforcement Learning (Bachelor, 2021)
  • Exploring an Explainable Reinforcement Learning Design for a Self-Learning American Football Simulation (Master, 2020)
  • Implementation and Comparison of Occlusion-based Explainable Artificial Intelligence Methods (Bachelor, 2020, Link)
  • Implementation and Comparison of Different Saliency Map Algorithms for Deep Reinforcement Learning (Master, 2019)

Publications


Silvan Mertes, Christina Karle, Tobias Huber, Katharina Weitz, Ruben Schlagowski and Elisabeth André. in press. Alterfactual explanations: the relevance of irrelevance for explaining AI systems. DOI: 10.48550/arXiv.2207.09374
PDF | BibTeX | RIS | DOI

Tobias Huber, Benedikt Limmer and Elisabeth André. 2022. Benchmarking perturbation-based saliency maps for explaining Atari agents. DOI: 10.3389/frai.2022.903875
PDF | BibTeX | RIS | DOI

Pooja Prajod, Dominik Schiller, Tobias Huber and Elisabeth André. 2022. Do deep neural networks forget facial action units? - Exploring the effects of transfer learning in health related facial expression recognition. DOI: 10.1007/978-3-030-93080-6_16
PDF | BibTeX | RIS | DOI

Silvan Mertes, Tobias Huber, Katharina Weitz, Alexander Heimerl and Elisabeth André. 2022. GANterfactual - counterfactual explanations for medical non-experts using generative adversarial learning. DOI: 10.3389/frai.2022.825565
PDF | BibTeX | RIS | DOI


Tobias Huber, Silvan Mertes, Stanislava Rangelova, Simon Flutura and Elisabeth André. 2021. Dynamic difficulty adjustment in virtual reality exergames through experience-driven procedural content generation. DOI: 10.1109/ssci50451.2021.9660086
PDF | BibTeX | RIS | DOI

Tobias Huber, Katharina Weitz, Elisabeth André and Ofra Amir. 2021. Local and global explanations of agent behavior: integrating strategy summaries with saliency maps. DOI: 10.1016/j.artint.2021.103571
PDF | BibTeX | RIS | DOI


Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias Huber and Elisabeth André. 2020. "Let me explain!": exploring the potential of virtual agents in explainable AI interaction design. DOI: 10.1007/s12193-020-00332-0
PDF | BibTeX | RIS | DOI

Simon Flutura, Andreas Seiderer, Tobias Huber, Katharina Weitz, Ilhan Aslan, Ruben Schlagowski, Elisabeth André and Joachim Rathmann. 2020. Interactive machine learning and explainability in mobile classification of forest-aesthetics. DOI: 10.1145/3411170.3411225
PDF | BibTeX | RIS | DOI

Dominik Schiller, Tobias Huber, Michael Dietz and Elisabeth André. 2020. Relevance-based data masking: a model-agnostic transfer learning approach for facial expression recognition. DOI: 10.3389/fcomp.2020.00006
PDF | BibTeX | RIS | DOI

Klaus Weber, Lukas Tinnes, Tobias Huber, Alexander Heimerl, Eva Pohlen, Marc-Leon Reinecker and Elisabeth André. 2020. Towards demystifying subliminal persuasiveness: using XAI-techniques to highlight persuasive markers of public speeches. DOI: 10.1007/978-3-030-51924-7_7
PDF | BibTeX | RIS | DOI


Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias Huber and Elisabeth André. 2019. "Do you trust me?" Increasing user-trust by integrating virtual agents in explainable AI interaction design. DOI: 10.1145/3308532.3329441
PDF | BibTeX | RIS | DOI

Tobias Huber, Dominik Schiller and Elisabeth André. 2019. Enhancing explainability of deep reinforcement learning through selective layer-wise relevance propagation. DOI: 10.1007/978-3-030-30179-8_16
PDF | BibTeX | RIS | DOI

Stanislava Rangelova, Simon Flutura, Tobias Huber, Daniel Motus and Elisabeth André. 2019. Exploration of physiological signals using different locomotion techniques in a VR adventure game. DOI: 10.1007/978-3-030-23560-4_44
PDF | BibTeX | RIS | DOI

Dominik Schiller, Tobias Huber, Florian Lingenfelser, Michael Dietz, Andreas Seiderer and Elisabeth André. 2019. Relevance-based feature masking: improving neural network based whale classification through explainable artificial intelligence. DOI: 10.21437/interspeech.2019-2707
PDF | BibTeX | RIS | DOI


Tobias Huber. 2018. Deep Reinforcement Learning: Foundations, Approximation Property, and Implementation of Multimodal Explanations.
PDF | BibTeX | RIS