Dr Emmanouil Benetos

Reader (Associate Professor) in Machine Listening & Director of Research
Email: emmanouil.benetos@qmul.ac.uk
Telephone: +44 20 7882 6206
Room Number: Engineering, Eng 403
Website: https://webspace.eecs.qmul.ac.uk/emmanouil.benetos/
Office Hours: Wednesday 15:00-16:00
Profile
Emmanouil Benetos is Reader (US equivalent: Associate Professor) in Machine Listening and Director of Research at the School of Electronic Engineering and Computer Science of Queen Mary University of London. Within Queen Mary, he is a member of the Centre for Digital Music, the Centre for Intelligent Sensing, and the Digital Environment Research Institute, and co-leads the School's Machine Listening Lab.
His main research topic is computational audio analysis, also referred to as machine listening or computer audition, applied to music, urban, everyday and nature sounds. He has been a Royal Academy of Engineering / Leverhulme Trust Research Fellow in resource-efficient machine listening, a Turing Fellow at the Alan Turing Institute, and a Royal Academy of Engineering Research Fellow, and has served as principal investigator and co-investigator on several funded research projects at the intersection of machine learning and audio. He is also Deputy Director of the UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM).
In academic service, he is currently Secretary of the International Society for Music Information Retrieval (ISMIR), member and chair of the education subcommittee of the IEEE Technical Committee on Audio and Acoustic Signal Processing (AASP TC), member of the EURASIP Acoustic, Speech and Music Signal Processing Technical Area Committee (ASMSP TAC), associate editor for the IEEE/ACM Transactions on Audio, Speech, and Language Processing, and associate editor for the EURASIP Journal on Audio, Speech, and Music Processing.
Teaching
Data Mining (Postgraduate)
Data relevant to decision-making is accumulating at an incredible rate due to a host of technological advances. Electronic data capture has become inexpensive and ubiquitous as a by-product of innovations such as the Internet, e-commerce, electronic banking, point-of-sale devices, bar-code readers, and electronic patient records. Data mining is a rapidly growing field concerned with developing techniques that help decision-makers make intelligent use of these repositories. The field has evolved from the disciplines of statistics and artificial intelligence. This module combines practical exploration of data mining techniques with an examination of the underlying algorithms, including their limitations. Students taking this module should have an elementary understanding of probability concepts and some experience of programming.
Music Informatics (Postgraduate/Undergraduate)
This module introduces students to state-of-the-art methods for the analysis of music data, with a focus on music audio. It presents in-depth studies of general approaches to the low-level analysis of audio signals, followed by specialised methods for the high-level analysis of music signals, including the extraction of information related to the rhythm, melody, harmony, form and instrumentation of recorded music. The module concludes with an examination of the most important methods for extracting high-level musical content, sound source separation, and the analysis of multimodal music data.
Research
Supervision
PhD Students (primary and joint supervisees)
- Yuhan Liu (co-supervised with Lin Wang). Topic: Structured source-level representations for controllable and personalized music generation. Funded by the China Scholarship Council
- Shahar Elisha. Topic: Style classification of podcasts using audio. Funded by Spotify Ltd
- Christos Plachouras (co-supervised with Johan Pauwels). Topic: Deep learning for low-resource music. Funded by the UKRI CDT in AI and Music (EP/S022694/1)
- Antonella Torrisi. Topic: Computational analysis of chick vocalisations: from categorisation to live feedback. Funded by a QMUL Principal's studentship
- Aditya Bhattacharjee. Topic: Self-supervised learning in audio fingerprinting. Funded by the UKRI CDT in AI and Music (EP/S022694/1)
- Yinghao Ma. Topic: Self-supervision in machine listening. Funded by the UKRI CDT in AI and Music (EP/S022694/1)
- Jinhua Liang. Topic: Everyday sound recognition with limited annotations. Funded by EPSRC DTP award EP/T518086/1
- Jiawen Huang. Topic: Lyrics alignment and transcription for polyphonic music. Funded by the UKRI CDT in AI and Music (EP/S022694/1)
- Inês Nolasco (co-supervised with Huy Phan and Dan Stowell). Topic: Automatic acoustic identification of individual animals in the wild. Funded by EPSRC DTP award EP/N50953X/1
- Shubhr Singh (co-supervised with Huy Phan and Dan Stowell). Topic: Novel mathematical methods for audio-based deep learning. Funded by the UKRI CDT in AI and Music (EP/S022694/1)
- Lele Liu. Topic: Automatic music score transcription with deep neural networks. Funded by the China Scholarship Council
PhD Students (second supervisees)
- Pablo Tablas de Paula Topic: Machine learning and digital waveguides for musical instrument synthesis and analysis
- Weixiong Chen Topic: Sparse architectures with semantic alignment for music understanding
- Ivan Shanin Topic: Modeling melodic jazz improvisation
- Yu Cao Topic: Generative modeling with few-shot learning
- Julien Guinot Topic: Improved self-supervised learning and human-in-the-loop for musical audio: towards expert, navigable, and interpretable representations of music
- Chin-Yun Yu Topic: Analysing and controlling extreme vocal expression using differentiable DSP and neural networks
- Christopher Mitcheltree Topic: Representation learning for audio effect and synthesizer modulations
- Yisu Zong Topic: Machine learning for physical models of sound synthesis
- Huan Zhang Topic: Computational modelling of expressive piano performance
- Andrew Edwards Topic: Computational models for jazz piano: transcription, analysis, and generative modeling
- Xiaowan Yi Topic: Composition-aware music recommendation system for music production
Research Assistants
- Jackson Loth (June 2025 - May 2026). Project: Maestro - AI Musical Analysis Platform
- Weixiong Chen (Dec. 2025 - March 2026). Project: Large language models for multimodal music understanding and ethical audio generation