Dr. Nicholas Cummins

Research Associate / Habilitation Candidate
Chair of Embedded Intelligence for Health Care and Wellbeing
Phone: +49 (0) 821 598-2907
E-Mail:
Room: 317 (F)
Address: Universitätsstraße 18, 86159 Augsburg

About me

I was awarded my PhD in Electrical Engineering from UNSW Australia in February 2016 for my thesis ‘Automatic assessment of depression from speech: paralinguistic analysis, modelling and machine learning’.


My current research interests include multisensory signal analysis, affective computing, and computer audition. I am fascinated by the application of machine learning techniques to improve our understanding of different health conditions. I am particularly interested in applying these techniques to mental health disorders.


I am actively involved in the RADAR-CNS, DE-ENIGMA, TAPAS and sustAGE Horizon 2020 projects, in which my roles include contributing to the management of the technical work packages. I enjoy working towards solving real-world problems in health and wellbeing as part of these inter-disciplinary teams.


I have been lecturing since autumn 2017, writing and delivering courses in speech pathology, deep learning, and intelligent signal analysis in medicine. I am also an external advisor on the National Natural Science Foundation of China (NSFC) funded project "Diagnosis of Depression by Speech Signals" (grant No. 31860285), led by Dr. Xiaoyong Lu, Northwest Normal University, Lanzhou, China.


I have (co-)authored over 70 conference and journal papers, which have attracted more than 1000 citations (h-index: 18). I am a frequent reviewer for IEEE, ACM and ISCA journals and conferences, and I serve on program and organisational committees. I am also a member of the ACM, ISCA, IEEE and the IET.

Selected publications

N. Cummins, S. Scherer, J. Krajewski, S. Schnieder, J. Epps, and T. F. Quatieri, "A review of depression and suicide risk assessment using speech analysis," Speech Communication, vol. 71, pp. 10–49, 2015.

N. Cummins, J. Joshi, A. Dhall, V. Sethu, R. Goecke, and J. Epps, "Diagnosis of depression by behavioural signals: a multimodal approach," in Proceedings of the 3rd ACM International Workshop on Audio/Visual Emotion Challenge (AVEC '13), 2013, pp. 11–20.

N. Cummins, S. Amiriparian, G. Hagerer, A. Batliner, S. Steidl, and B. W. Schuller, "An image-based deep spectrum feature representation for the recognition of emotional speech," in Proceedings of the 25th ACM International Conference on Multimedia (MM '17), 2017, pp. 478–484.

Z. Zhang, N. Cummins, and B. Schuller, "Advanced Data Exploitation in Speech Analysis: An Overview," IEEE Signal Processing Magazine, vol. 34, no. 4, pp. 107–129, 2017.

F. Ringeval, B. Schuller, M. Valstar, J. Gratch, R. Cowie, S. Scherer, S. Mozgai, N. Cummins, M. Schmitt, and M. Pantic, "AVEC 2017: Real-life Depression, and Affect Recognition Workshop and Challenge," in Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge (AVEC '17), 2017, pp. 3–9.
