The CheXzero algorithm learned from hundreds of thousands of chest X-ray images and their accompanying clinical reports, and can diagnose diseases from the scans at a clinician's level, according to Harvard Medical School researchers.
Researchers trained the model on a public dataset containing more than 377,000 chest X-ray images and 227,000 corresponding clinical reports. The algorithm learned to link specific image types with their textual descriptions on its own, eliminating the need to structure and manually annotate the data.
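This image–report pairing is typically trained with a contrastive objective: matched image and text embeddings are pulled together while mismatched pairs in the same batch are pushed apart. The sketch below is illustrative only, not CheXzero's actual implementation; the function name, embedding shapes, and temperature value are assumptions.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Illustrative sketch of a CLIP-style objective; not the CheXzero code.
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(logits))      # matching pairs lie on the diagonal

    def xent(l):
        # numerically stable softmax cross-entropy against the diagonal
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2
```

Because the loss only needs paired images and reports, no manual labels are required; correctly matched pairs yield a lower loss than shuffled ones.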
Scientists tested CheXzero’s performance on separate datasets from two institutions, one located in another country. The researchers wanted to ensure the algorithm could map images to their descriptions even if the reports used different terminology.
According to the researchers, the AI model outperformed comparable models at diagnosing pneumonia, lung injuries, and lung collapse. In terms of accuracy, CheXzero was nearly on a par with radiologists, they added.
“For the first time, the algorithm learned from unstructured text and was able to match doctors in effectiveness, and it also demonstrated the ability to predict multiple diseases from patients' X-ray images with a high degree of accuracy,” said study co-author Ekin Tiu.
The researchers have made the project's source code publicly available to other research teams.
“We hope that the algorithm can be applied to CT scans, MRIs, and echocardiograms to teach it to detect a broader range of diseases in other parts of the body,” said project leader Pranav Rajpurkar.
In his view, diagnostic AI models that require minimal supervision will help expand access to medical care in countries with a shortage of specialists.
In September, American scientists stated that they would develop an artificial intelligence algorithm for detecting diseases by voice.
In March 2021, researchers used AI to decipher the X-ray images of three-hundred-year-old letters.
