AI and the Future of Radiology
Lecturer: Prof. Daniel Rückert
Date: 8.12.2021, 16:00-17:00
Location: Hybrid, if you wish to join in person please contact Mrs. Nadine Witek
Abstract: Artificial Intelligence (AI) is changing many fields across science and across our society. In this talk, we will discuss how AI is changing and will change medicine and healthcare, in particular in the field of radiology. I will focus on how AI can support the early detection of diseases in medical imaging as well as help with improved diagnosis and personalised therapies. I will also describe how deep learning can be used for the reconstruction of medical images from undersampled data, image super-resolution, image segmentation and image classification in the context of cardiac, fetal and neuroimaging. Furthermore, we will discuss how AI solutions can be privacy-preserving while also providing trustworthy and explainable solutions for clinicians. Finally, I will discuss future developments and challenges for AI in radiology and medicine more generally.
Bio: Daniel Rueckert is Alexander von Humboldt Professor for AI in Medicine and Healthcare at the Technical University of Munich. He is also Professor of Visual Information Processing in the Department of Computing at Imperial College London, where he served as Head of the Department of Computing. He gained an MSc from Technical University Berlin in 1993 and a PhD from Imperial College in 1997, followed by a post-doc at King’s College London. In 1999 he joined Imperial College as a Lecturer, becoming Senior Lecturer in 2003 and full Professor in 2005. He has published more than 500 journal and conference articles with over 60,000 citations (h-index 113). He has graduated over 50 PhD students and supervised over 40 post-docs. Professor Rueckert has been awarded an ERC Synergy Grant (2013) and an ERC Advanced Grant (2020). He is an associate editor of IEEE Transactions on Medical Imaging, a member of the editorial boards of Medical Image Analysis, Image & Vision Computing and the MICCAI/Elsevier Book Series, and a referee for a number of international medical imaging journals and conferences. He has served as a member of organising and programme committees at numerous conferences, e.g. he has been General Co-chair of MMBIA 2006 and FIMH 2013 as well as Programme Co-chair of MICCAI 2009, ISBI 2012 and WBIR 2012. In 2014 he was elected a Fellow of the MICCAI Society, and in 2015 he was elected a Fellow of the Royal Academy of Engineering and a Fellow of the IEEE. More recently he has been elected a Fellow of the Academy of Medical Sciences (2019) and a Fellow of the American Institute for Medical and Biological Engineering (2021).
Thesis Presentations November 2021
Date: 30.11.2021, 15:00
15:00 - 15:30
Deep Learning-Based Speech Emotion Recognition: A Cross-Cultural Analysis for Children with Autistic Spectrum Condition
Date: 16.11.2021, 14:00
14:00 - 14:15
Deep CNN-based Encoder-Decoder Networks for Skin Cancer Segmentation
14:20 - 14:35
Automatic Skin Lesion Segmentation and Classification Utilising Deep Attention-Based Neural Networks
14:40 - 14:55
A U-Net-Like Architecture for Automatic Polyp Segmentation
Statistics and Machine Learning for Speech Enhancement
Lecturer: Prof. Dr.-Ing. Timo Gerkmann
Date: 26. Oktober 2021, 14:00
Abstract: Speech Signal Processing is an exciting research field with many applications such as hearing devices, telephony and smart speakers. While in noisy environments the performance of these devices may be limited, leveraging modern Machine Learning techniques has recently shown impressive improvements in performance for the estimation of clean speech signals from noisy microphone signals. Yet, in order to build real-time, robust and interpretable algorithms, those machine learning techniques need to be combined with domain knowledge in signal processing, statistics and acoustics. In this talk, we will present recent research results from our group that follow this perspective by exploiting end-to-end learning, multichannel configurations and deep generative models.
Bio: Timo Gerkmann studied Electrical Engineering and Information Sciences at the universities of Bremen and Bochum, Germany. He received his Dipl.-Ing. degree in 2004 and his Dr.-Ing. degree in 2010, both in Electrical Engineering and Information Sciences, from the Ruhr-Universität Bochum, Bochum, Germany. In 2005, he spent six months with Siemens Corporate Research in Princeton, NJ, USA. From 2010 to 2011 Dr. Gerkmann was a postdoctoral researcher at the Sound and Image Processing Lab at the Royal Institute of Technology (KTH), Stockholm, Sweden. From 2011 to 2015 he was a professor for Speech Signal Processing at the Universität Oldenburg, Oldenburg, Germany. From 2015 to 2016 he was a Principal Scientist for Audio & Acoustics at Technicolor Research & Innovation in Hanover, Germany. Since 2016 he has been a professor for Signal Processing at the University of Hamburg, Germany. His research interests are statistical signal processing and machine learning algorithms for speech and audio applied to communication devices, hearing instruments, audio-visual media, and human-machine interfaces. Timo Gerkmann serves as an elected member of the IEEE Signal Processing Society Technical Committee on Audio and Acoustic Signal Processing and as an Associate Editor of the IEEE/ACM Transactions on Audio, Speech and Language Processing.
Bachelor Thesis Presentations October 2021
26. Oktober 2021, 13:30: Christian Grashei: Automatic Skin Lesion Segmentation and Classification Utilising Deep CNN-Based Transfer Learning
13. Oktober 2021, 13:30: Yuliia Oksymets: Deep Learning-Based Localisation of Adherent Cells on Microscope Images
Nondeterministic Temporal Regression for Apparent Affect Recognition
Lecturer: Mani Kumar
Time: 19. Oktober 2021, 14:00
Abstract: Recognising apparent affect or emotion from audiovisual signals in naturalistic conditions remains an open problem. Considering the inherently ambiguous nature of the affect label annotation process, we hypothesise that the deterministic temporal regression models used in existing methods may have limited generalisation capacity for this task. To evaluate this hypothesis, we explore two closely related nondeterministic temporal regression approaches: stochasticity-aware and uncertainty-aware.
In the former approach, we presume that the relation between audiovisual signals and their apparent emotion labels is governed by some underlying stochastic process. By treating each training sequence as an instance drawn from the underlying stochastic process, we build on a neural latent variable model, Neural Processes, for learning distributions of temporal functions. The experimental results on in-the-wild audiovisual datasets show that stochasticity-aware regression models can generalise much better than the deterministic regression models based on RNNs and self-attention.
For uncertainty-aware temporal regression of apparent emotions from video data, we quantify epistemic (model) and aleatoric (data) uncertainties using temporal Monte Carlo dropout and predictive distribution modelling techniques respectively. By evaluating the quantified emotion prediction uncertainties on apparent personality recognition task, we show that inclusion of fused epistemic and aleatoric emotion uncertainties can significantly improve the downstream task generalisation performance.
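The uncertainty-aware idea above separates two sources of uncertainty: epistemic (what the model does not know, estimated by keeping dropout active at test time and sampling several stochastic forward passes) and aleatoric (noise in the data, estimated by having the model predict its own variance). The following toy sketch illustrates only this Monte Carlo dropout mechanic with a random stand-in network; it is not the speaker's model, and all names and shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression "network": fixed random weights standing in for a trained model.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 2))  # two outputs per input: predicted mean and log-variance

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept active at test time."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > p_drop   # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted dropout scaling
    mean, log_var = (h @ W2).T
    return mean, np.exp(log_var)          # exp() makes the aleatoric variance positive

def mc_dropout_predict(x, n_samples=100):
    """Epistemic uncertainty = variance of the predicted means across passes.
    Aleatoric uncertainty = average of the per-pass predicted variances."""
    means, alea = zip(*(stochastic_forward(x) for _ in range(n_samples)))
    means, alea = np.stack(means), np.stack(alea)
    return means.mean(0), means.var(0), alea.mean(0)

x = rng.normal(size=(4, 8))               # a mini-batch of 4 feature vectors
mu, epistemic, aleatoric = mc_dropout_predict(x)
```

In the talk's setting, the fused epistemic and aleatoric estimates (here `epistemic + aleatoric`) would accompany each emotion prediction and feed the downstream personality recognition task.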
Bio: Mani Kumar is currently a PhD student in the Computer Vision Laboratory at the University of Nottingham, under the supervision of Prof. Michel Valstar. He received his Bachelor's degree in Electronics and Communication Engineering from IIIT-RGUKT RK Valley, India. His main research interests include audiovisual affect recognition with a focus on labelled data-efficient representation learning and non-deterministic regression models.
Digital Mental Health: Efficacy, reach, implementation models, mechanisms of change and future developments
Lecturer: Prof. Harald Baumeister
Time: 12. Oktober 2021, 14:00
Professor Baumeister is the Head of the Department of Clinical Psychology and Psychotherapy, Institute for Psychology and Education, Ulm University.
Thesis Presentations July 2021
Time: 13 July 2021, 09:00AM-11:05AM
9:00 Luis Palomilu: Federated Learning with Differential Privacy for Web-based Audio Analysis (B.Sc.)
9:25 Petar Petrov: Beyond First-Order Optimisation for Deep Learning-Based Audio Tasks (B.Sc.)
9:50 Moritz Berghofer: Adapting Deep Learning Classifiers for Online Personalisation - Robust Gesture and Activity Recognition on Consumer Devices (M.Sc.)
10:15 Fabian Deuser: A Framework for Self-Supervised Learning Utilising Transformers for Audio Signal Processing (M.Sc.)
10:40 Nguyen Vo Cong Khoa: Robust Deep Embedding Clustering for Language Identification of Crowd-sourced Audio Recordings (B.Sc.)
Seminar Talks SS21
Time: 14 July 2021, 09:00AM - 02:00PM
09:00 Sebastian Anton Vater
09:15 Daniel Bovan
09:30 Fabian Brain
09:45 Michael Kleiner
10:00 Karina Maria Sindermann
10:30 Tobias Quirin Artz
10:45 Markus Beirit
11:00 Cosima Birkmaier
11:15 Lorenz Konstantin Glück
11:30 Felix Grabowski
11:45 Patrick Hopf
12:00 Duc Viet Phan
12:30 Daniel Lukas Rothenpieler
12:45 Stefanie Veronika Schaller
13:00 Onurcan Ünsal
13:15 Thomas Sebastian Wagner
13:30 Markus Kuspa
13:45 Simone Pompe
Automated Anomaly Detection for Medical Image Analysis
Speaker: Dr. Bernhard Kainz
Time: 14:30, 01 July 2021
An ultimate goal of our research is to develop computational methods for improving image-based detection and diagnosis of disease. This is most effectively done before symptoms arise, ideally starting from the earliest stages of life. In medicine, this approach is known as health screening. Almost everybody gets involved in screening programmes, most commonly when expecting parents are invited for fetal ultrasound examinations. In contrast to disease-focused medical imaging applications, screening requires methods that can detect any pathology as anomaly, i.e., deviation from the norm, without being explicitly trained on the uncountable number of possibilities while being robust towards a high volume of patients and inexperienced operators. Supervised learning of every possible pathology is unrealistic for many primary care applications like health screening. Image anomaly detection methods that learn normal appearance from only healthy data have shown promising results recently. In this talk, I will explore existing image reconstruction and image embedding-based methods and a novel alternative founded on cross-distributional image editing, a new self-supervised method to tackle automated pathological anomaly detection.
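The core idea described above, learning normal appearance from healthy data only and scoring deviations from it, can be illustrated with a minimal reconstruction-based sketch. This toy example uses a PCA subspace as the "normal appearance" model and reconstruction error as the anomaly score; it is a simplification for illustration only, not the speaker's method (which explores image embeddings and cross-distributional image editing), and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: "healthy" samples live near a low-dimensional subspace.
basis = rng.normal(size=(5, 64))
healthy = rng.normal(size=(200, 5)) @ basis

def fit_normal_model(data, n_components=5):
    """Learn normal appearance only: the mean and principal subspace of healthy data."""
    mu = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mu, full_matrices=False)
    return mu, vt[:n_components]

def anomaly_score(x, mu, components):
    """Reconstruction error: large when x deviates from the learned norm."""
    centred = x - mu
    recon = (centred @ components.T) @ components
    return np.linalg.norm(centred - recon, axis=-1)

mu, comps = fit_normal_model(healthy)
normal_sample = rng.normal(size=(1, 5)) @ basis          # matches the learned norm
anomalous_sample = normal_sample + rng.normal(scale=3.0, size=(1, 64))  # deviation
score_normal = anomaly_score(normal_sample, mu, comps)[0]
score_anomalous = anomaly_score(anomalous_sample, mu, comps)[0]
```

The appeal for screening is that no pathology labels are needed at training time: anything the normal model cannot reconstruct is flagged, whatever its cause.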
Dr Bernhard Kainz is a Senior Lecturer in the Department of Computing at Imperial College London, where he is also head of the human-in-the-loop computing group and one of the four academics leading the Biomedical Image Analysis (BioMedIA) collaboratory. Bernhard has collaborated intensively with King's College London’s School of Biomedical Engineering & Imaging Sciences, St. Thomas Hospital, and the Department of Bioengineering at Imperial College London. He is an academic lead for the AI-enabled Imaging and the Affordable Imaging themes in the CDT in Smart Medical Imaging and is also involved in the UKRI Centre for Doctoral Training in Artificial Intelligence for Healthcare. Bernhard is a scientific advisor for ThinkSono Ltd, and his research is about interactive algorithms in healthcare, especially medical imaging. He has been working on self-driving medical image acquisition that can guide human operators in real-time during diagnostics.
EmoMatchSpanishDB: Study of Speech Emotion Recognition Machine Learning Models in a New Spanish
Speaker: Antonio Barba, PhD
Time: 14:00, 29 June 2021
We have created a public elicited-emotion database composed of fifty non-actors expressing Ekman’s six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) and a neutral tone in the Spanish language. This presentation describes how the database was created and the crowdsourcing perception test performed to statistically validate the emotion of each collected audio sample. Moreover, I will show a baseline comparative study, in terms of accuracy and F1, between different machine learning techniques that make use of prosodic and spectral audio features. I expect this database will be useful to the research community for gaining new insights in this area of study.
Associate professor in the Science, Computing and Technology department at the Universidad Europea de Madrid (UEM). I received my PhD in the branch of multidisciplinary engineering and I am a member of the research group “Data Science Laboratory”.
My research area covers environmental acoustics, machine learning and affective computing.
I have published papers on environmental impact assessment and given several talks at national congresses.
Challenges and new directions of preventing mental illness (in the offspring of parents with depression)
Speaker: Dr. Johanna Löchner
Time: 11:00, 07 May 2021
Depression is one of the most common disorders worldwide, causing great personal burden and costs at the societal level. Children of parents suffering from depression are one of the largest risk groups for mental illness. However, there is a lack of research and services for prevention, especially for this high-risk group. The few prevention programs that do exist and have been evaluated generally show small to moderate effects that diminish over time. Therefore, it is questionable how to conduct and implement effective prevention programs for this high-risk group.
A parallel randomized controlled trial evaluated the effectiveness of the German version of the Family Group Cognitive Behavioral Intervention (FGCB): Families with i) a depressed parent and ii) a healthy child aged 8-17 years (mean age = 11.63 years; 53% female) were randomly assigned (block-wise; stratified by child age and parental depression) to the 12-session intervention (EG; n = 50) or no intervention (CG; usual care; n = 50). We hypothesized that the CG children would show greater increases in self-reported symptoms of depression and internalizing and externalizing disorders over time than the EG. In addition, potential mechanisms of change were examined (e.g., emotion regulation, attributional style, knowledge of depression, and parenting style). We found significant intervention effects on self-reported internalizing and externalizing symptoms but not on depressive symptoms or parent-reported psychopathology. Although uptake of the intervention was high, parents and children reported being stressed by the number of hours and the content invested. Thus, a key question is how to balance behavior change with personal investment. Digital offers for measurement and psychotherapeutic interventions could be a bridge here - also in prevention.
Dr. Johanna Löchner is a clinical psychologist and licensed psychotherapist and is currently the head of the Early Intervention Group at the German Youth Institute (DJI), focusing on research into prevention programs for families with children aged 0-3, funded by the family ministry. Before this, she was employed as a post-doctoral research fellow at the Department of Psychology and Psychotherapy at LMU (2018-2020) and the Clinic for Child and Adolescent Psychiatry (LMU) (2014-2020). She finalized her dissertation on risk factors and prevention in the children of parents with depression in 2018. Her research focuses on the transmission of mental disorders and the prevention of depression in children of mentally ill parents as well as in adolescents and young adults, both face-to-face and with digital solutions.
Psychotherapeutic work: Since 2012 she has been working as a psychotherapist in the university hospitals of LMU and TU Munich and in the AVM outpatient center, with adults and children with different psychiatric disorders (Cognitive Behavioral Therapy; licensed psychotherapist/Approbation 2019).
Learning-Based Quality of Service Prediction in Cellular Vehicle Communication
Guest Speaker: Josef Schmid
Time: 19. April 2021, 14:00
At the moment nearly all automotive manufacturers, as well as many newcomers such as Google and Tesla, are working in the area of automated driving. Since today’s automated driving solutions are based on onboard sensor technologies such as radar, laser or camera systems, they are limited to an observation range of about 250 m in front of the vehicle. If the driving task needs to be transferred from automated mode back to the driver, drivers will need some time to react. The vehicle therefore needs an extended observation area, which can be achieved through communication. A common approach for such communication is to use a mobile network connection. However, due to temporary gaps in radio coverage, mobile network links such as LTE or 5G are not as stable as needed. To improve the quality of service of the mobile network, a key objective is to analyse the behaviour of the network in specific driving scenarios. The presentation introduces a method for recording, collecting and analysing such communication. In addition, two different approaches to predicting the state of the mobile connection are shown: the first is a geo-based solution using connectivity maps, the second uses different machine learning regression models to achieve this goal.
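The geo-based approach mentioned above can be pictured as a connectivity map: past drive-test measurements are aggregated per map cell, and the expected quality of service at a position is looked up from that cell. The sketch below is only a toy illustration of this idea with synthetic data; the grid resolution, fallback value and field names are assumptions, not details from the talk.

```python
import numpy as np
from collections import defaultdict

# Toy drive-test log: (latitude, longitude, measured downlink throughput in Mbit/s).
rng = np.random.default_rng(2)
log = [(48.0 + rng.random() * 0.1, 11.0 + rng.random() * 0.1, 20 + 30 * rng.random())
       for _ in range(1000)]

CELL = 0.01  # grid resolution in degrees (illustrative choice)

def to_cell(lat, lon):
    """Quantise a position onto the connectivity-map grid."""
    return (round(lat / CELL), round(lon / CELL))

def build_connectivity_map(log):
    """Average the past throughput measurements that fall into each grid cell."""
    cells = defaultdict(list)
    for lat, lon, mbps in log:
        cells[to_cell(lat, lon)].append(mbps)
    return {cell: sum(v) / len(v) for cell, v in cells.items()}

def predict(cmap, lat, lon, fallback=10.0):
    """Geo-based QoS prediction: look up the cell, fall back if never visited."""
    return cmap.get(to_cell(lat, lon), fallback)

cmap = build_connectivity_map(log)
estimate = predict(cmap, *log[0][:2])  # prediction at a previously visited position
```

The machine learning alternative discussed in the talk would replace the per-cell average with a regression model over richer features (e.g. signal measurements and context), trading the map's simplicity for better generalisation to unvisited areas.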
Josef Schmid received his B.Eng. in 2014 and his M.Sc. in Applied Research in Engineering Sciences in March 2016 at the OTH Amberg-Weiden. During his master’s studies, he started working as a research associate at the Faculty of Electrical, Information and Media at the OTH Amberg-Weiden. Since May 2016 his research focus has been mobile-network-based vehicle communication for cooperative highly automated driving. His main research interests are vehicle-to-X communication (V2X) as well as quality of service for mobile networks and machine learning methods.
Student presentations, February
Audiovisual Data-Driven Android App for Emotion Recognition (Master’s thesis)
Speaker: Qiang Chang
Time: 10:00, 9th February 2021
Implementation of an Android Application for the Classification of Snoring Data Using Neural Networks (Bachelor’s thesis)
Speaker: Igor Tkatschenko
Time: 10:00, 9th February 2021
Seminar presentations, 10.02.2021, online
09:30 Marius Pleyer
09:45 Stefan Crummenauer
10:00 Fabian Brain
10:15 Qiang Chang
10:30 Benjamin Jin
10:45 Bernhard Scherer
11:00 Frederic Schulz
11:30 Michael Ihrler
11:45 Francois Lux
12:00 Daniel Schubert
12:15 Reinhard Seidl
12:30 Lena Holland
12:45 Sarah Sporck
Thesis Presentations, January
Deep Learning Annotation Optimisation for Emotion Recognition (Master’s thesis)
Speaker: Lea Schumann
Time: 11:00, 15th January 2021
Author-centric Machine Reviewing of Papers for Deep Learning Utilising Natural Language Feature (Master’s thesis)
Speaker: Philip Müller
Time: 11:00, 15th January 2021
Automated Detection and Classification of Airborne Pollen Grains Using Deep Learning (Master’s thesis)
Speaker: Jakob Schäfer
Time: 11:00, 15th January 2021
Towards End-to-End Intrusion Detection Utilising Convolutional Recurrent Neural Networks (Bachelor’s thesis)
Speaker: Tobias Hallmen
Time: 15. Januar 2021, 11:00
Learning with known operators reduces maximum error bounds
Guest Speaker: Prof. Andreas Maier
Time: 12. Januar 2021, 14:00
We describe an approach for incorporating prior knowledge into machine learning algorithms. We aim at applications in physics and signal processing in which we know that certain operations must be embedded into the algorithm. Any operation that allows computation of a gradient or sub-gradient towards its inputs is suited for our framework. We derive a maximal error bound for deep nets that demonstrates that inclusion of prior knowledge results in its reduction. Furthermore, we show experimentally that known operators reduce the number of free parameters. We apply this approach to various tasks ranging from computed tomography image reconstruction over vessel segmentation to the derivation of previously unknown imaging algorithms. As such, the concept is widely applicable for many researchers in physics, imaging and signal processing. We assume that our analysis will support further investigation of known operators in other fields of physics, imaging and signal processing.
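The abstract's central idea, embedding a known, differentiable operation into a learning pipeline so that only the genuinely unknown part is trained, can be sketched in a few lines. Below, a fixed smoothing operator plays the role of the known operation and a single linear layer is the part left to learn; this is a toy illustration of the concept, not the authors' framework, and the operator, dimensions and learning rate are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def known_op(z):
    """A known, differentiable operation fixed by domain knowledge
    (here a 3-tap moving average; its operator matrix is symmetric)."""
    return np.convolve(z, np.ones(3) / 3, mode="same")

# The unknown part of the pipeline: one linear layer we still have to learn.
W_true = rng.normal(size=(6, 6))
X = rng.normal(size=(200, 6))
Y = np.array([known_op(x @ W_true) for x in X])  # targets from the true pipeline

W = np.zeros((6, 6))
lr = 0.2
for _ in range(300):
    grad = np.zeros_like(W)
    for x, y in zip(X, Y):
        err = known_op(x @ W) - y
        # Chain rule through the known operator: its Jacobian transpose is
        # again the smoothing convolution (the kernel is symmetric).
        grad += np.outer(x, known_op(err))
    W -= lr * grad / len(X)

pred = np.array([known_op(x @ W) for x in X])
```

Because the known operator is fixed rather than learned, the trainable parameter count drops from two matrices to one, mirroring the abstract's point that known operators reduce free parameters and tighten error bounds.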
Prof. Dr. Andreas Maier was born on 26 November 1980 in Erlangen. He studied Computer Science, graduated in 2005, and received his PhD in 2009. From 2005 to 2009 he was working at the Pattern Recognition Lab at the Computer Science Department of the University of Erlangen-Nuremberg. His major research subject was medical signal processing in speech data. In this period, he developed the first online speech intelligibility assessment tool - PEAKS - which has been used to analyze over 4,000 patients and control subjects so far.
From 2009 to 2010, he worked on flat-panel C-arm CT as a post-doctoral fellow at the Radiological Sciences Laboratory in the Department of Radiology at Stanford University. From 2011 to 2012 he was with Siemens Healthcare as an innovation project manager, responsible for reconstruction topics in the Angiography and X-ray business unit.
In 2012, he returned to the University of Erlangen-Nuremberg as head of the Medical Reconstruction Group at the Pattern Recognition Lab. In 2015 he became professor and head of the Pattern Recognition Lab. Since 2016, he has been a member of the steering committee of the European Time Machine Consortium. In 2018, he was awarded an ERC Synergy Grant "4D nanoscope". His current research interests focus on medical imaging, image and audio processing, digital humanities, and interpretable machine learning and the use of known operators.