Robots are coming for your job (if you are a radiologist)

Reading time: 4 minutes

Morgan McSweeney

What can humans do better than robots? For most of history, the answer to that question has been: everything. However, the balance of power is rapidly shifting away from warm, fleshy humans toward cold, calculating processing power. Did you know that a few years ago, an artificial intelligence (AI) method called DeepStack beat humans at poker? In a study of 44,000 rounds of poker, DeepStack competed against human professional players, and it won with statistical significance. Here, winning with “statistical significance” means that the researchers calculated there was less than a 5% chance that a winning margin that large could be explained by luck alone; in other words, the AI algorithm’s edge over expert human poker players is very unlikely to be a fluke. The next time you sit down to a game of cards with your friends, you might want to check to be sure that none of them are plugged into the wall.
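
If you have not run into the term before, here is a toy sketch in Python of what that kind of check looks like. The numbers are made up purely for illustration (this is not the actual DeepStack analysis): we simulate per-hand winnings with a small real edge, then ask how likely a margin that large would be under pure luck.

```python
# Toy illustration (not the actual DeepStack analysis) of what "statistically
# significant" means here: simulate per-hand winnings with a small real edge,
# then ask how likely a margin that large would be if the true edge were zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
winnings = rng.normal(loc=0.05, scale=1.0, size=44_000)  # small true edge per hand

t_stat, p_value = stats.ttest_1samp(winnings, popmean=0.0)
print(f"mean winnings per hand: {winnings.mean():.3f}")
print(f"p-value vs. 'pure luck': {p_value:.2g}")  # below 0.05 -> "statistically significant"
```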

This is the future that we live in, and AI algorithms are rapidly taking over the #1 spot that humans have held for a long time. AI methods are very good at analyzing images to detect trends, categorize patterns, and produce quantitative outputs. That skill set is a near-perfect description of what is fundamental to radiology: the interpretation of X-rays, MRIs, and other forms of medical imaging.

A recent study tested a new AI method against the interpretations of 101 radiologists. The study used a dataset of 2,652 breast cancer exams, and each exam had follow-up data to determine whether the patient actually had breast cancer (this outcome was hidden from the AI method). The research question was simple: is the new AI method any worse than radiologists at looking at the exam images and determining whether each patient did or did not have breast cancer? They found that, statistically, the AI method was not worse than the radiologists. It would seem that we are not many years away from being able to dramatically lower the cost of radiographic interpretation, replacing expensive, highly trained humans with expensive, highly trained AI. The difference, however, is that the cost of developing an AI method only has to be paid once. Human workers, on the other hand, often prefer to be paid as long as they are still providing their services.

However, it is not as simple as it might at first seem. AI methods can be very sensitive to the type of data used to initially train the model. A common problem in method development is called overfitting, where the AI model reads too much into quirks of the training dataset. For example, it might just happen, as a matter of pure chance, that the breast cancer-positive images in the training data share a common pattern of noisy, pixelated distortion. If an AI method latches onto that noise as a good predictor of “cancer”, it will perform well on the training data but will ultimately fail when applied to a new dataset that lacks the same random patterns of noise.
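
To make that concrete, here is a minimal sketch in Python. The synthetic data are invented for illustration and have nothing to do with the study above: the “cancer” images in the training batch happen to carry extra pixel noise, a classifier learns that shortcut, and its accuracy collapses on a new batch without the artifact.

```python
# Minimal sketch of overfitting to a spurious artifact (illustrative only).
# In this synthetic batch, the "cancer" images happen to be noisier than the
# healthy ones, purely by construction. A classifier trained on that batch
# looks great on held-out data from the SAME batch, then falls apart on a
# new batch where the noise artifact is gone.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_IMAGES, N_PIXELS = 1000, 64

def make_batch(n, spurious_noise):
    labels = rng.integers(0, 2, size=n)            # 0 = healthy, 1 = cancer
    images = rng.normal(size=(n, N_PIXELS))
    images[:, 0] += 0.5 * labels                   # the weak, real signal
    if spurious_noise:                             # the accidental artifact
        images[labels == 1] += rng.normal(scale=2.0,
                                          size=(int(labels.sum()), N_PIXELS))
    return images, labels

# The training batch carries the artifact; the new batch does not.
X, y = make_batch(N_IMAGES, spurious_noise=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
X_new, y_new = make_batch(N_IMAGES, spurious_noise=False)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy on held-out data from the noisy batch:", model.score(X_val, y_val))
print("accuracy on a new, artifact-free batch:        ", model.score(X_new, y_new))
```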

To address this issue, humans can carefully select medical scans that are free of artifacts, ensuring a curated standard of quality for the images used to develop the AI model. However, this re-introduces humans into the process of model implementation, and the goal is to minimize human involvement as much as possible. An AI method would undoubtedly still be dramatically faster at sorting through many layers of imaging data, even if humans had to briefly review each image set to make sure it is of good quality. Nonetheless, the need for human involvement to ensure accuracy is a key rate-limiting factor.

Recent work has focused on ways to enable so-called “unsupervised” segmentation of biomedical imaging data. Some of these efforts make use of a strategy called “adversarial networks”, in which two AI methods are programmed to compete against each other as they build a coherent model. For example, as one half of the model (the “analyzer”) starts to build rules for categorizing whether the data it receives is or is not a tumor, the other half of the model (the “generator”) creates fake images and tries to fool the analyzer into thinking they are real. Over time, the analyzer gets better at telling true images from false ones, and the generator gets better at creating fake tumor images. Combined, these adversarial algorithms can optimize their image-discrimination rules with less hand-holding from humans and better generalizability.
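
For the curious, here is a bare-bones sketch of that adversarial loop, written for toy one-dimensional data rather than the actual segmentation models from the literature. The names follow the paragraph above, though in the AI literature the “analyzer” is usually called a discriminator.

```python
# Minimal sketch (illustrative, not from any paper cited here) of the
# adversarial idea: a "generator" invents fake samples while an "analyzer"
# (a discriminator) learns to tell them apart from real ones. Toy 1-D data
# stands in for tumor images.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data: a normal distribution the generator must learn to imitate
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
analyzer = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # logit: real vs. fake

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
a_opt = torch.optim.Adam(analyzer.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Analyzer step: learn to separate real samples from generated ones
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    a_loss = loss_fn(analyzer(real), torch.ones(64, 1)) + \
             loss_fn(analyzer(fake), torch.zeros(64, 1))
    a_opt.zero_grad()
    a_loss.backward()
    a_opt.step()

    # Generator step: try to make the analyzer call the fakes "real"
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(analyzer(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real data's mean (~4.0)
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```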

Where will this technology take us? It seems that we are on a trajectory toward AI methods that are at least as accurate as human radiologists. Presumably, once the up-front development costs have been paid, these AI methods will be dramatically less expensive than paying humans to interpret every medical scan. Between CT and MRI scans alone, millions of medical images are generated per year, so this could add up to considerable savings.

Work Discussed:

Gubern-Merida, A., Rodriguez-Ruiz, A., Gennaro, G., Andersson, I., Lång, K., Chevalier, M., . . . Sechopoulos, I. (2019). Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists. Journal of the National Cancer Institute. doi: 10.1093/jnci/djy222

Image Credits

How is Artificial Intelligence Shaping our Future? Rajesh Sahu
