AI Meets Human Data Colloquium

Learning from Language, Vision, and Interaction

The AI Meets Human Data Colloquium at the University of Augsburg takes place for the first time in the winter semester 2025/26.

It is hosted jointly by the Chair for Computational Linguistics (Prof. Annemarie Friedrich), the Chair for Human-Centered Artificial Intelligence (Prof. Dr. Elisabeth André) and the Chair for Machine Learning and Computer Vision (Prof. Dr. Rainer Lienhart).

We cordially invite all members of the University of Augsburg and other interested guests to join our lectures and meet our distinguished external speakers.


Talks

Abstract
Signed languages are fascinating. They are visual-gestural, they are three-dimensional, and they have multiple simultaneous channels of production (left hand, right hand, face, mouth, posture…). They show us that natural languages are not limited to speech and writing. When it comes to addressing them computationally, we have barely scratched the surface.
In my talk I will give an introduction to the current state of computational sign linguistics: what language technologies for sign languages look like and the challenges they face, what goes into creating sign language datasets, why deaf involvement is so essential and what happens when it is missing, and why German Sign Language has sixteen signs for the month of May.
Bio
Marc Schulder is a research associate at the Institute of German Sign Language and Communication of the Deaf (IDGS) at the University of Hamburg. His research interests centre on the creation of resources and technologies for signed languages that will aid deaf communities and sign linguistics research. After completing his PhD in computational linguistics in 2019, he joined IDGS to work in the DGS-Korpus project (2009–2027), home of the largest corpus of natural dialogues in German Sign Language (DGS). He is also currently part of the EU project VISTA-SL (2025–2027), an education platform for hearing and deaf second language learners, and supports the Priority Program ViCom (2022–2028) as open data consultant. Previously he was also involved with the EU project EASIER (2021–2023) on machine translation between European signed and spoken languages, and collaborated with the GeSi project (2023–2025) to investigate head nods in signed and spoken languages. Marc is hearing and an L2 learner of DGS.

Abstract

In this talk, I will present an overview of our long-standing research on digital twinning for complex indoor environments. I will trace the development of our indoor mapping systems—from early research prototypes to robust commercial solutions—and show how these technologies enable accurate, scalable, and efficient digitization of real-world spaces. Building on this foundation, I will introduce our recent advances in semantic segmentation, data completion, and 3D visual grounding. These methods transform raw, unstructured 3D scans into semantically enriched, interactive digital twins that support a wide range of downstream tasks. I will further highlight the role of digital twins in 6G radio-frequency propagation simulation, discuss how radar sensing can enhance digital twin fidelity, and outline how haptic sensing solutions can complement visual data by providing material-level information.


Bio

Eckehard Steinbach is a Professor at the Technical University of Munich (TUM), where he leads the Chair of Media Technology. His research spans visual and haptic information processing, machine learning, and robot perception. Over the past two decades, his group has made significant contributions to large-scale indoor mapping and digital twinning, with core research results successfully transferred into commercial solutions. He is also a co-founder and Chief Scientist of Olive Robotics GmbH. Prof. Steinbach is a Fellow of the IEEE.


Abstract
Emotions are often considered reactions to events that are appraised by the people who live through them. At the same time, emotions themselves constitute events. This yields two perspectives on events and emotions. One is specific to language processing and builds on semantic role labeling: emotion role labeling, the task of detecting who is described as feeling an emotion, which emotion it is, and why. The other requires the interpretation of events themselves. Here, I discuss the computational use of appraisal theories, which explain the parameters that lead to a particular emotion. Use cases include the interpretation of arguments and of multimodal social media posts.


Bio

Roman Klinger is a professor at the Faculty for Information Systems and Applied Computer Science (WIAI) at the University of Bamberg. He studied computer science with a minor in psychology, holds a Ph.D. in computer science from TU Dortmund University (2011), and received the venia legendi in computer science in Stuttgart (2020). Before moving to Bamberg, he worked at the Institute for Natural Language Processing in Stuttgart, at the University of Bielefeld, at the Fraunhofer Institute for Algorithms and Scientific Computing, and at the University of Massachusetts Amherst. Roman Klinger's vision is to enable computers to understand and generate text regarding both propositional and non-propositional information. This finds application in interdisciplinary research, including biomedical text mining, digital humanities, modelling psychological concepts (like emotions) in language, and social media mining. These topics often pose novel challenges to existing machine learning methods, so he and his group also contribute to the fields of probabilistic and deep machine learning.


Organizers

Prof. Dr. Annemarie Friedrich
Chair Holder
Chair of Computational Linguistics
  • Phone: +49 821 598 4628
  • E-Mail:
  • Room 1022 (Building BCM)

Prof. Dr. Elisabeth André
Chair Holder
Chair of Human-Centered Artificial Intelligence
  • Phone: +49 821 598 2340
  • E-Mail:
  • Room 2035 (Building N)

Prof. Dr. Rainer Lienhart
Chair Holder
Chair of Machine Learning and Computer Vision
  • Phone: +49 821 598 5703
  • E-Mail:
  • Room 1013 (Building N)
