Every April, we take part in Informatics Day (Tag der Informatik). The event is organised by the Institute of Informatics and provides teachers and students from Year 10 upwards with information about study programmes in Informatics at the University of Augsburg.
In 2019, we gave a taster lecture, ‘Earlier detection of illness with the help of artificial intelligence’, and two demonstrations, ‘Denoising with neural networks’ and ‘A graphical visualisation of machine learning’.
Check back nearer the time to find out about presentations by EIHW researchers in 2021.
Impressions from previous years
Challenges & Workshops
INTERSPEECH 2020 Computational Paralinguistics Challenge (ComParE)
Elderly Emotion, Breathing & Masks
The INTERSPEECH 2020 Computational Paralinguistics ChallengE (ComParE) is an open challenge dealing with states and traits of speakers as manifested in their speech signal’s properties. There have so far been eleven consecutive Challenges at INTERSPEECH since 2009 (cf. the repository), but a multiplicity of highly relevant paralinguistic phenomena remains uncovered. We therefore introduce three new tasks: the Elderly Emotion Sub-Challenge, the Breathing Sub-Challenge, and the Mask Sub-Challenge (see the data descriptions). The data for all tasks are provided by the organisers.
The INTERSPEECH 2020 Computational Paralinguistics Challenge (ComParE) shall help bridge the gap between excellent research on paralinguistic information in spoken language and the low compatibility of results. The results of the Challenge will be presented at INTERSPEECH 2020 in Shanghai, China. Prizes will be awarded to the Sub-Challenge winners.
ICMI 2018 Eating Analysis & Tracking Challenge
The ICMI 2018 Eating Analysis & Tracking Challenge is an open research competition on machine learning for audio-visual tracking of human subjects recorded while eating different types of food and speaking at the same time.
The challenge features three sub-challenges:
- Food-type Sub-Challenge, Recognition of one of 6 food types (or no food)
- Food-likability Sub-Challenge, Recognition of the subjects’ food likability rating
- Chew and Speak Sub-Challenge, Recognition of the subjects’ eating difficulty
The Audio/Visual Emotion Challenge and Workshop (AVEC 2018)
The Audio/Visual Emotion Challenge and Workshop (AVEC 2018) “Bipolar Disorder and Cross-cultural Affect” is a satellite event of ACM MM 2018 and the eighth competition aimed at comparison of multimedia processing and machine learning methods for automatic audio, visual, and audio-visual health and emotion sensing, with all participants competing under strictly the same conditions.
The goal of the challenge is to provide a common benchmark test set for multimodal information processing and to bring together the audio, visual and audio-visual affect recognition communities, to compare the relative merits of the approaches to automatic health and emotion analysis under well-defined conditions. Another motivation is the need to advance health and emotion recognition systems to be able to deal with fully naturalistic behaviour in large volumes of un-segmented, non-prototypical and non-preselected data, as this is exactly the type of data that both multimedia and human-machine/human-robot communication interfaces have to face in the real world.
We are calling for teams to participate in three sub-challenges:
- Bipolar Disorder Sub-Challenge, Patients suffering from bipolar disorder – as defined by the DSM-5 – need to be classified into remission, hypo-mania, and mania, from audio-visual recordings of structured interviews (BD corpus); performance is measured by the unweighted average recall (UAR) over the three classes.
- Cross-cultural Emotion Sub-Challenge, Dimensions of emotion need to be predicted time-continuously in a cross-cultural setup (German => Hungarian) from audio-visual data of dyadic interactions (SEWA corpus); performance is the concordance correlation coefficient (CCC) averaged over the dimensions.
- Gold-standard Emotion Sub-Challenge, Individual annotations of dimensional emotions need to be processed to create a single time series of emotion labels termed the “gold-standard”. Performance (CCC) is measured with a baseline system trained and evaluated on multimodal data using the generated gold-standard (RECOLA corpus).
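For readers unfamiliar with the two evaluation metrics named above, a minimal sketch of both is given below, implemented with NumPy; the function names are illustrative, not part of any official challenge toolkit. UAR averages per-class recalls so that every class counts equally regardless of its frequency, and CCC rewards predictions that agree with the reference in correlation, mean, and scale.

```python
import numpy as np

def unweighted_average_recall(y_true, y_pred, classes):
    """UAR: mean of per-class recalls, so every class counts equally."""
    recalls = []
    for c in classes:
        mask = y_true == c                      # samples whose reference label is c
        recalls.append(np.mean(y_pred[mask] == c))
    return float(np.mean(recalls))

def concordance_correlation_coefficient(x, y):
    """CCC: covariance-based agreement, penalising shifts in mean and scale."""
    x_mean, y_mean = x.mean(), y.mean()
    covariance = np.mean((x - x_mean) * (y - y_mean))
    return float(2 * covariance / (x.var() + y.var() + (x_mean - y_mean) ** 2))
```

Unlike plain accuracy, UAR is robust to the class imbalance typical of clinical data such as the BD corpus; unlike Pearson correlation, CCC drops below 1 when predictions are systematically biased or mis-scaled, which matters for time-continuous emotion traces.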
Affective Social Multimedia Computing 2018 - An ACM MM 2018 Satellite Workshop
Affective social multimedia computing is an emergent research topic for both the affective computing and multimedia research communities. Social multimedia is fundamentally changing how we communicate, interact, and collaborate with other people in our daily lives. Compared with well-organised broadcast news and professionally made videos such as commercials, TV shows, and movies, social multimedia poses great challenges to research communities.
Social multimedia contains much affective information. Effective extraction of affective information from social multimedia can greatly help social multimedia computing (e.g., processing, indexing, retrieval, and understanding). Although much progress has been made in traditional multimedia research on multimedia content analysis, indexing, and retrieval based on subjective concepts such as emotion, aesthetics, and preference, affective social multimedia computing is a new research area.
Affective social multimedia computing aims to process affective information from social multimedia. For massive and heterogeneous social media data, the research requires a multidisciplinary understanding of content and perceptual cues from social multimedia. From the multimedia perspective, the research relies on theoretical and technological findings in affective computing, machine learning, pattern recognition, signal/multimedia processing, computer vision, speech processing, and behavioural and social psychology.
Affective analysis of social multimedia is attracting growing attention from industry and businesses that provide social networking sites and content-sharing services, and that distribute and host the media. This workshop focuses on the analysis of affective signals in social multimedia (e.g., Twitter, WeChat, Weibo, YouTube, and Facebook).
The workshop will address, but is not limited to, the following topics:
- Affective human-machine interaction or human-human interaction
- Affective/Emotional content analysis of images, videos, music, metadata (text, symbols, etc.)
- Affective indexing, ranking, and retrieval on big social media data
- Affective computing in social multimedia by multimodal integration (facial expression, gesture, posture, speech, text/language)
- Emotional implicit tagging and interactive systems
- User interests and behavior modeling in social multimedia
- Video and image summarization based on affect
- Affective analysis of social media and harvesting the affective responses of crowds
- Affective generation in social multimedia, expressive text-to-speech and expressive language translation
- Applications of affective social multimedia computing
Interspeech ComParE 2018 - Computational Paralinguistics Challenge
The Interspeech 2018 Computational Paralinguistics Challenge (ComParE) is an open challenge dealing with states and traits of speakers as manifested in their speech signal’s acoustic properties. There have so far been nine consecutive Challenges at INTERSPEECH since 2009 (cf. the challenge series’ repository at http://www.compare.openaudio.eu), but a multiplicity of highly relevant paralinguistic phenomena remains uncovered. Thus, in this year’s 10th anniversary edition, we introduce four new tasks.
The following sub-challenges are addressed:
- Atypical Affect Sub-Challenge, emotion of disabled speakers is to be recognised.
- Self-Assessed Affect Sub-Challenge, self-assessed affect shall be determined.
- Crying Sub-Challenge, mood-related types of infant vocalisation have to be classified.
- Heart Beats Sub-Challenge, types of heart beat sounds need to be distinguished.