Machine learning and its endgame, true artificial intelligence, attract trepidation and excitement in nearly equal measure. When we apply these technologies to stubborn problems and unanswered questions in biomedical and technological research, however, the positive implications come into sharp relief.

Research engineers at Duke University, including professor Roarke Horstmeyer, recently leveraged machine learning in pursuit of an exciting and medically consequential breakthrough. The group wanted to build a microscope that adapts intelligently and in real time to achieve the best diagnostic results. Even more importantly, they needed this apparatus to outperform trained doctors.

Here’s how they did it and what it means for diagnostic medicine.

A Proof of Concept With Lots of Potential

To hear the Duke University engineers tell it, the problem under scrutiny here is hundreds of years old.

A run-of-the-mill microscope illuminates a sample with the same amount of light coming from all directions. Scientists have gradually refined these devices over centuries, with a combination of angles, light patterns, colors, focusing elements, and other settings optimized and tuned specifically to human eyes.

This is a limitation, but also an opportunity. With the right hardware and software, a microscope could be made to choose its optimal optical settings based on ambient conditions and the nature of the sample.

Here’s a breakdown:

  • Hardware: The microscope features a specially designed bowl covered in LEDs that can emit several different lighting patterns and colors. Previous models used a single source of white light, from below the slide, to provide even illumination to the entire sample.
  • Software: Machine-learning algorithms automatically adjust the number, color, arrangement, and brightness of the LEDs. This provides the best possible magnification and diagnostic environment based on the known parameters of the disease or condition being screened for.

The unique functionality of the microscope is best understood through an example.

As a sample problem, the engineers wanted to find and classify malaria within multiple blood samples of varying thicknesses. A thin blood smear requires different lighting conditions than a thick one to accomplish the same detection and diagnostic task.

This intelligent microscope adjusts its focus and lighting settings on the fly, eliminating manual intervention. This shortens processing time and even boosts accuracy.

Research Methodology and Implications

So how does a microscope go about “learning” how to optimize a diagnostic environment and better identify pathogens and other health issues?

For the sample malaria problem, the engineers “trained” a neural network by having it process several hundred existing blood samples with malaria present. As a result of this process, the neural network learned to:

  1. Identify and rank which features of a particular blood sample were the most useful signifiers of disease, and
  2. Determine which LED lights to illuminate beneath the sample to best call attention to those signifiers.
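The two training objectives above can be understood as a single joint optimization: treat the per-LED brightnesses as learnable parameters alongside the classifier's own weights, and update both by gradient descent so the lighting pattern itself evolves to highlight the most diagnostic features. The sketch below is a simplified, hypothetical stand-in for that idea, using a NumPy logistic-regression classifier on synthetic data; it is not the Duke group's actual network, dataset, or LED geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each sample is imaged once per LED, so a "sample"
# is a stack of per-LED pixel measurements. Infected samples get a weak
# extra signal on LEDs 5-7, standing in for high-angle illumination.
n_samples, n_leds, n_pixels = 200, 8, 16
X = rng.normal(0.0, 1.0, (n_samples, n_leds, n_pixels))
y = rng.integers(0, 2, n_samples)          # 1 = parasite present
X[y == 1, 5:, :] += 0.5

led_w = np.full(n_leds, 1.0 / n_leds)      # learnable LED brightnesses
clf_w = np.zeros(n_pixels)                 # learnable classifier weights

def forward(X, led_w, clf_w):
    # Image formation: combine per-LED images with the learned brightnesses.
    img = np.tensordot(X, led_w, axes=([1], [0]))
    logits = img @ clf_w
    return img, 1.0 / (1.0 + np.exp(-logits))

lr = 0.1
for _ in range(300):
    img, p = forward(X, led_w, clf_w)
    err = p - y                            # dLoss/dlogits for cross-entropy
    clf_w -= lr * (img.T @ err) / n_samples
    # The gradient w.r.t. LED weights flows through the image-formation
    # step, so the lighting pattern is learned jointly with the classifier.
    led_w -= lr * np.einsum("n,nlp,p->l", err, X, clf_w) / n_samples

_, p = forward(X, led_w, clf_w)
acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

On this toy data, the learned `led_w` ends up emphasizing the LEDs that carry the disease signal (here LEDs 5-7), mirroring how the real system converged on a high-angle ring pattern.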

The neural network ended up recommending a ring-shaped lighting pattern that illuminated the samples from a high angle.

Interestingly, the images produced with this method are grainier than those made by a conventional microscope. Even so, the engineers’ scope and neural network correctly identified malaria in thin blood samples with a 90% success rate. For thicker blood samples, the results were even more impressive at 99%.

Importantly, the group’s results were confirmed by another lab when Horstmeyer’s group passed along their neural-network-derived lighting pattern for peer review. Even trained physicians have only around a 75% success rate in diagnosing the presence of the malaria parasite using more familiar tools.

It’s not uncommon for visual artifacts to make parasite detection and identification difficult with standard equipment in even well-outfitted health care environments. That’s just one of the reasons why applying machine learning to diagnostic instruments is so meaningful for the future of the diagnostic medicine community.

Another improvement is the time required. According to Horstmeyer, doctors must often look at “a thousand cells” or more before they find even one telltale sign of the malaria parasite. Because of the level of magnification required, physicians can only look at up to a dozen cells at any one time.

What Other Goals Are on the Horizon?

Duke University engineers hope to streamline the diagnostic process further and make it more accessible for health care environments with fewer resources, including trained physicians. For example, some types of samples require more time to prepare for the microscope than others, even if processing each sample takes roughly the same amount of time.

Malaria is not the only pathogen that a microscope must be able to pluck from a lineup of blood cells. With that in mind, Horstmeyer is already working in earnest on the next iteration of this microscope. His team hopes to develop an algorithm that can deliver the same kind of real-time lighting intelligence, high speed, and accuracy, no matter the sample or the suspected diagnosis.

Image acquisition is one of the most important processes in diagnostic medicine. It’s also one that, as Horstmeyer puts it, could benefit from some brains. “Computers can see things humans can’t,” he says.

Machine learning provides a way for some of our most critical diagnostic instruments to optimize their performance for “eyes” and “brains” that are more sophisticated and capable than our own — at least at tasks like these. It’s an impressive and consequential step forward for both medicine and machine learning.