Research from a cross-divisional collaboration between the Acoustics Laboratory and Professor Mark Gales's group has won prizes in an international challenge to use machine learning to classify heart sound recordings.
Teams were asked to design an algorithm that can use electronic stethoscope recordings to predict whether the patient has a heart murmur, and further whether the patient's final clinical diagnosis was "normal" or "abnormal".
The George B. Moody PhysioNet Challenge is an annual, international competition in which teams tackle clinically interesting questions that are either unsolved or not well solved. The 2022 challenge invited participants to identify murmurs and clinical outcomes using heart sound recordings collected from multiple auscultation locations.
A team from the Department, comprising Andrew McDonald (Postdoctoral Researcher), Mark Gales (Professor of Information Engineering) and Anurag Agarwal (Professor of Acoustics and Biomedical Technology), entered their approach into the George B. Moody PhysioNet Challenge 2022 and won the top prize in both tasks.
Their entry was developed from research that aims to improve access to early diagnosis and treatment of heart disease, carried out during the PhDs of Edmund Kay and Andrew McDonald. The research was presented by Andrew at the annual Computing in Cardiology conference in Tampere, Finland, in September.
The stethoscope is a common medical tool used by doctors to listen to the chest (a practice known as auscultation). Auscultation is a quick and non-invasive way to screen for cardiac abnormalities by listening for abnormal heart sounds (murmurs) or rhythms. However, it requires significant training and skill to perform accurately. Automated analysis of heart sound recordings is a promising way to improve the accuracy and accessibility of auscultation, making it a tool that any healthcare professional could use more widely in the community. The George B. Moody PhysioNet Challenge 2022 tasked teams with detecting and classifying murmurs in a new paediatric dataset, collected as part of two screening programmes in Brazil.
Each team had to design an algorithm that uses electronic stethoscope recordings to predict whether a patient has a heart murmur, and whether the patient's final clinical diagnosis was "normal" or "abnormal". The algorithms were then evaluated on a hidden test set at the end of the challenge, to assess how well they would generalise to new data. 40 teams from universities and companies across the world competed officially.
The Department's Acoustics Laboratory has been researching automated heart sound analysis since 2016, when they entered that year's PhysioNet Challenge, also on heart sound analysis, and achieved third place. Building on this research, they were awarded an MRC DPFS grant to develop an algorithm that uses stethoscope recordings to detect valvular heart disease in patients. This grant funded an NHS clinical study to collect patient data from multiple NHS trusts across the UK (Papworth, Birmingham, King's College London, Imperial College London, Oxford). The team have recently finished data collection, with over 1250 patients recruited.
They hope to publish their approach as part of a special journal issue on the challenge later in the year. The algorithm code will also be published under a source-available licence for academic use.