25 Sep 22. Featured Paper
Validation study of machine-learning chest radiograph software in primary and emergency medicine.
van Beek EJR, Ahn JS, Kim MJ, Murchison JT
AIM To evaluate the performance of a machine learning-based algorithm for chest radiographs (CXRs), applied to a consecutive cohort of historical clinical cases, against a reference standard set by expert chest radiologists.
MATERIALS AND METHODS The study comprised 1,960 consecutive CXRs from primary care referrals and the emergency department (992 and 968 cases respectively), obtained in 2015 at a UK hospital. Two chest radiologists, each with >20 years of experience, read all studies in consensus to serve as the reference standard. A chest artificial intelligence (AI) algorithm, Lunit INSIGHT CXR, was run on the CXRs, and its results were correlated with those of the expert readers. The area under the receiver operating characteristic curve (AUROC) was calculated for normal CXRs and for 10 common findings: atelectasis, fibrosis, calcification, consolidation, lung nodules, cardiomegaly, mediastinal widening, pleural effusion, pneumothorax, and pneumoperitoneum.
RESULTS The ground-truth annotation identified 398 primary care and 578 emergency department cases containing pathology. The AI algorithm achieved AUROCs of 0.881–0.999 in the emergency department dataset and 0.881–0.998 in the primary care dataset. The AUROC for each finding did not differ significantly between the primary care and emergency department datasets, except for pleural effusion (0.954 versus 0.988, p<0.001).
CONCLUSIONS The AI algorithm can accurately and consistently differentiate normal CXRs from those with major thoracic abnormalities in both acute and non-acute settings, and can serve as a triage tool.
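The per-finding AUROC evaluation described above can be sketched as follows. This is a minimal illustration using scikit-learn, assuming the AI tool outputs a continuous abnormality score per finding that is compared against the binary consensus read; the labels and scores below are hypothetical placeholders, not data from the study.

```python
# Illustrative AUROC calculation for one finding (e.g. pleural effusion).
# Assumption: the AI outputs a continuous abnormality score per finding,
# and the radiologists' consensus read provides the binary reference label.
from sklearn.metrics import roc_auc_score

# Reference-standard labels from the consensus read (1 = finding present)
reference = [0, 0, 1, 1, 0, 1, 0, 1]
# Hypothetical AI abnormality scores for the same cases
ai_scores = [0.05, 0.20, 0.90, 0.75, 0.10, 0.25, 0.30, 0.85]

auroc = roc_auc_score(reference, ai_scores)
print(f"AUROC: {auroc:.3f}")  # → AUROC: 0.938
```

In the study, one such AUROC was computed per finding and per dataset (primary care versus emergency department), and the paired values were then compared statistically.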
- Chest X-ray
- Artificial intelligence
- Machine learning
- Lunit INSIGHT CXR