Researchers from Facebook and NYU Langone Health have created AI models that scan X-rays to predict how the condition of a COVID-19 patient will develop.
The team says their system can predict whether a patient needs more intensive resources more than four days ahead of time. They believe that hospitals can use it to meet the demand for resources and avoid sending patients at risk home too early.
Their approach differs from most previous attempts to predict COVID-19 deterioration by applying machine learning to X-rays. Those systems typically rely on supervised training with labels for fixed time windows. The method has shown promise, but its potential is limited by the time-intensive process of manually labeling data.
These limitations led the researchers to use self-supervised learning instead.
They first pre-trained their system on two public X-ray data sets, using a self-supervised learning technique called Momentum Contrast (MoCO). This allowed them to use a large amount of non-COVID X-ray data to train their neural network to extract information from the images.
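The study describes MoCo in full; as a rough illustration only (a minimal NumPy sketch with toy linear "encoders", not the authors' implementation), the core of the technique is a momentum-updated key encoder plus a contrastive (InfoNCE) loss that pulls two views of the same image together while pushing away a queue of negatives:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy feature dimension (the real encoder is a deep network)

# Toy "encoders": one linear map each. In MoCo, the key encoder's weights
# are a momentum-smoothed copy of the query encoder's weights rather than
# being trained by backpropagation.
W_q = rng.normal(size=(DIM, DIM))
W_k = W_q.copy()

def momentum_update(W_q, W_k, m=0.999):
    # Key encoder slowly trails the query encoder: W_k <- m*W_k + (1-m)*W_q
    return m * W_k + (1.0 - m) * W_q

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    """Contrastive loss: pull the query toward its positive key and away
    from a queue of negative keys saved from earlier batches."""
    q, k_pos, queue = l2norm(q), l2norm(k_pos), l2norm(queue)
    logits = np.concatenate([[q @ k_pos], queue @ q]) / temperature
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                      # positive key sits at index 0

x = rng.normal(size=DIM)            # one unlabeled X-ray, flattened
q = W_q @ x                         # query features
k = W_k @ x                         # key features (an augmented view in practice)
queue = rng.normal(size=(16, DIM))  # negatives queued from previous batches
loss = info_nce_loss(q, k, queue)
W_k = momentum_update(W_q, W_k)
```

Because no labels appear anywhere in this loop, any large collection of non-COVID X-rays can drive the pretraining.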
Predicting COVID-19 deterioration
They used the pre-trained model to build classifiers that predict whether a COVID-19 patient's condition is likely to worsen, fine-tuning it on an extended version of the NYU COVID-19 dataset. This smaller dataset of roughly 27,000 X-rays from about 5,000 patients was labeled to indicate whether the patient's condition deteriorated within 24, 48, 72, or 96 hours of the scan.
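One way to picture the fine-tuning stage (a hypothetical sketch on synthetic data, not the study's code) is a set of per-horizon classification heads trained on frozen features from the pre-trained encoder, one head per labeled time window:

```python
import numpy as np

rng = np.random.default_rng(1)
N, DIM = 200, 8
HORIZONS = [24, 48, 72, 96]  # hours, matching the NYU labels

# Stand-ins for frozen features from the pre-trained encoder, plus a toy
# "time to deterioration" so the multi-label targets are nested the way
# the real labels are (worsening within 24h implies worsening within 96h).
feats = rng.normal(size=(N, DIM))
t_event = np.clip(50 + 30 * feats[:, 0], 0, None)
labels = np.stack([(t_event <= h).astype(float) for h in HORIZONS], axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One logistic head per horizon, trained by plain gradient descent on the
# frozen features (the encoder itself is not updated in this sketch).
W = np.zeros((DIM, len(HORIZONS)))
b = np.zeros(len(HORIZONS))
for _ in range(500):
    p = sigmoid(feats @ W + b)
    W -= 0.5 * feats.T @ (p - labels) / N
    b -= 0.5 * (p - labels).mean(axis=0)

preds = sigmoid(feats @ W + b)
acc = ((preds > 0.5) == labels).mean()
```

The nesting of the labels is what lets a single scan yield a risk estimate at every horizon at once.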
The team built one classifier that predicts patient deterioration from a single X-ray. Another makes its predictions from a series of X-rays, aggregating the image features with a Transformer model. A third estimates how much supplemental oxygen a patient will need from a single X-ray.
They say using a series of X-rays is especially valuable because it is more accurate for long-term predictions and accounts for how the infection develops over time.
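To see how a Transformer can combine a series of scans, here is a loose single-head self-attention sketch in NumPy (the weight matrices and pooling are illustrative assumptions, not the study's architecture):

```python
import numpy as np

rng = np.random.default_rng(2)
DIM, T = 8, 5  # per-image feature size, number of X-rays in the series

# Per-image feature vectors in chronological order, standing in for the
# pre-trained encoder's outputs for each scan.
series = rng.normal(size=(T, DIM))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Single-head self-attention: every scan attends to every other scan,
    so the output rows reflect change across the whole series."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    A = softmax(Q @ K.T / np.sqrt(X.shape[1]))  # rows sum to 1
    return A @ V

W_q, W_k, W_v = (rng.normal(size=(DIM, DIM)) for _ in range(3))
attended = self_attention(series, W_q, W_k, W_v)
pooled = attended.mean(axis=0)  # one vector summarizing the series
# A classification head on `pooled` would then score deterioration risk.
```

This is what lets the multi-image model capture trends, such as a worsening infection, that no single scan shows.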
Their study showed the models were effective at predicting ICU needs, mortality, and adverse events overall at longer horizons (up to 96 hours):
Our multi-image model performance surpasses the performance of all single-image models. Compared to radiologists, our multi-image prediction model was comparable in its ability to predict deterioration in patients and stronger in its ability to predict mortality.
The team has open-sourced the pre-trained models so that other researchers and hospitals can fine-tune them on their own COVID patient data using a single GPU.
You can read the study paper on the preprint server arXiv.org.
Published on January 15, 2021 – 17:00 UTC