Article written by Zac Unger
Bottom: Professor Cuadros and his telemedicine class.
Photos by Elena Zhukova.
When our own health is the subject of discussion, most of us prefer that medical diagnoses be handled by doctors rather than computers. But what if we could combine the massive data-crunching power of a computer with the intuition and hands-on skill of a well-trained clinician? Dr. Jorge Cuadros, Assistant Clinical Professor at Berkeley Optometry, is doing exactly that as he fights to stem the tide of diabetic retinopathy, a widespread and rapidly proliferating condition that is one of the greatest worldwide threats to eyesight. Some estimates place its prevalence at about 100 million cases worldwide. In cutting-edge collaborations with Google, the California Health Care Foundation, and Kaggle, a crowdsourcing website for data and statistics competitions, Cuadros and his colleagues are harnessing the power of Big Data to vastly expand the number of patients reached and the amount of vision preserved.
Diabetic retinopathy is a scourge that can affect anyone with elevated levels of sugar in their blood, though it primarily strikes people who have had diabetes for many years. In the United States it is the leading cause of blindness for people between the ages of 20 and 64. In the early stages of the disease, the small blood vessels of the retina are damaged; as the condition progresses unchecked, often unnoticed even by the patient, the vessels of the eye weaken and become leaky. In the final stage, the lack of blood flow and oxygen causes the retina to grow new vessels that can bleed or cloud over the retina. Patients experience spotty, dark, blurred, and generally reduced vision. Eventually, sufferers may be left in total darkness.
Fortunately, treatments for diabetic retinopathy are readily available and often successful. Laser treatments, injections, and even surgery are possible if the disease is caught in time. The single most important factor is ongoing management of blood sugar, especially in the early days of the disease. But treatment can only follow detection; most patients can barely find the time to visit their primary care physician, much less an optometrist, in order to screen for a condition that often has no symptoms. While a family doctor might suggest screening for diabetic retinopathy, most general practitioners don’t have the specialized skills to do it themselves as part of a regular visit.
As far back as 1994, Dr. Cuadros began attempting to solve this problem using telemedicine to diagnose eye diseases from afar. In the early 2000s, Cuadros and Wyatt Tellis, a colleague from UCSF, developed a non-proprietary web-based application called EyePACS (Eye Picture Archive Communication System), in which primary care clinics install retinal cameras, send the pictures out electronically, and have patients diagnosed by off-site optometrists. Starting with a single clinic in Fresno, the program quickly expanded across the country, and today it is used in hundreds of clinics across 41 states. By spring of 2017, the EyePACS network had performed over half a million retinal exams. But that success wasn’t nearly enough for Cuadros, especially when faced with estimates that over 6 million people will suffer from diabetic retinopathy by the year 2020. “With an expected rise in diabetes,” Cuadros says, “there are more people who need to be seen, and some say that we just don’t have enough practitioners to keep up.”
But sending retinal images off to be read by trained optometrists can take hours or days, by which time the patient is long gone and often difficult to contact. And that’s where Big Data comes in. If computers can plot the fastest route through terrible rush-hour traffic, recognize fraudulent credit card charges, or accurately predict the next book you’re going to love, why shouldn’t they be able to help doctors identify patterns of disease in a human eyeball?
“If you have an immediate read and an immediate diagnosis, that would identify patients while they’re still in the office,” says Dr. Cuadros. “You wouldn’t have to have a separate session, try to get them back in once it’s become an afterthought. Right then and there you could show them the picture and get them engaged with the need for treatment, help engage them in blood sugar control so they can save their vision.” One study showed that only about 20% of people diagnosed with serious diabetic retinopathy actually performed the appropriate follow-up or treatment with a specialist. Dr. Nwando Olayiwola, Associate Clinical Professor at UC San Francisco and Chief Clinical Transformation Officer of RubiconMD, has worked with Cuadros for years setting up cameras in clinics, and says “we have a lot of day workers and migrant farmers in California and if they take a day off work for a visit, that has a real impact. Getting them back a second or third time can be impossible or impose real socioeconomic hardships.”
Cuadros’ long experience with clinics provided him with over three million high-resolution images of retinas, both healthy ones and those with various degrees of diabetic retinopathy. Cuadros and others began to think about how computers could be trained to recognize diabetic retinopathy instantly, without the time-consuming step of sending the image out to a flesh-and-blood doctor. The concept was that if you fed enough images into a computer, an artificially intelligent algorithm would eventually train itself to be expert at recognizing the signs of diabetic retinopathy. The idea was exciting enough that it was explored by the non-profit California Health Care Foundation and, later, Google, whose mission statement—“to organize the world’s information and make it universally accessible and useful”—seems particularly valuable when turned towards preventing blindness rather than, say, figuring out whether the Red Sox hit into more double plays at home or on the road.
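The self-training idea described above can be illustrated with a toy sketch: show a model labeled examples and let it adjust its own parameters to fit them. The real systems learn from raw pixels with deep neural networks; here, invented per-image feature counts (the names and numbers are made up for illustration) stand in for the images, and a simple logistic-regression loop stands in for the network.

```python
import math

# Invented training data: (microaneurysm count, hemorrhage count) per image,
# labeled 1 if retinopathy is present, 0 if the retina is healthy.
training_set = [
    ((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
    ((6, 2), 1), ((4, 5), 1), ((7, 3), 1),
]

w = [0.0, 0.0]  # learned weights, one per feature
b = 0.0         # learned bias
lr = 0.1        # learning rate

for _ in range(2000):  # repeated gradient-descent passes over the data
    for (x1, x2), label in training_set:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - label          # how far the prediction missed
        w[0] -= lr * err * x1    # nudge parameters to reduce the error
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    """Classify a new image's (invented) feature counts: 1 = retinopathy."""
    p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
    return 1 if p >= 0.5 else 0

print(predict(0, 1), predict(5, 4))  # healthy-looking vs. diseased-looking
```

No one ever tells the loop what a microaneurysm means; the weights simply drift toward whatever separates the labeled examples, which is the same principle, at vastly larger scale, behind training on millions of graded retinal photographs.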
But as with any data-driven system, the quality of the results coming out is only as good as the data going in. A system that misdiagnosed serious disease could potentially be worse for a patient than not getting screened at all. Fortunately, Cuadros had a dataset that was not only vast, but one that had also been evaluated by highly trained specialists. The hunt was on for an artificial intelligence that could diagnose diabetic retinopathy as reliably as a specialty clinician. And, Silicon Valley being what it is, that search quickly took the form of an open-source competition, with $100,000 in prize money for whoever could develop the best algorithm.
Sponsored by the California Health Care Foundation, the competition was hosted by the website Kaggle.com. Using images from the EyePACS database, competitors were supplied with 50,000 images to train the algorithm and another 50,000 to test the results. All of these images had already been graded by multiple skilled doctors and placed into one of five categories depending on severity of the disease. The competition guidelines warned that “you will encounter noise in both the images and labels. Images may contain artifacts, be out of focus, underexposed, or overexposed.” The algorithm couldn’t just function when all conditions were perfect; like a real doctor it needed to deal with all the vagaries presented by real patients.
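Scoring a contest like this means measuring how closely an algorithm's five-level grades track the experts' grades, while penalizing a bad miss (calling severe disease healthy) far more than an off-by-one disagreement. The Kaggle competition used the quadratic weighted kappa statistic for exactly this purpose; a minimal sketch, with invented grades, might look like:

```python
def quadratic_weighted_kappa(expert, algo, n_classes=5):
    """Agreement between two graders on an ordered scale (0 = no DR ...
    4 = proliferative DR), penalizing large disagreements quadratically."""
    # Observed confusion matrix of expert grade vs. algorithm grade
    O = [[0] * n_classes for _ in range(n_classes)]
    for e, a in zip(expert, algo):
        O[e][a] += 1
    n = len(expert)
    # Marginal histograms of each grader's labels
    hist_e = [sum(row) for row in O]
    hist_a = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2   # quadratic penalty
            E = hist_e[i] * hist_a[j] / n             # chance agreement
            num += w * O[i][j]
            den += w * E
    return 1.0 - num / den  # 1.0 = perfect agreement, 0.0 = chance

# Invented grades for ten images: the algorithm misses twice, by one level.
expert_grades = [0, 0, 1, 2, 2, 3, 4, 0, 1, 2]
algo_grades   = [0, 0, 1, 2, 3, 3, 4, 0, 0, 2]
print(round(quadratic_weighted_kappa(expert_grades, algo_grades), 3))
```

Because disagreements are weighted by the square of their distance, an algorithm that confuses grade 0 with grade 4 is punished sixteen times harder than one that confuses grade 2 with grade 3, which matches the clinical stakes.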
In the end, over 650 teams tried their luck. And within six months, the leading algorithms were as good at grading diabetic retinopathy as the experts. “I was floored,” says Cuadros. “There were so many algorithms that performed so well.” The 2015 Kaggle competition was just the beginning, and a few years later Cuadros supplied his EyePACS data to Google, so their machine-learning specialists could improve on the work done in the open-source Kaggle competition. One Google team began with 128,000 anonymized retinal images and pared the set down to about 10,000 that had been graded by eight retina specialists. Another run at the goal started with 1.6 million images. In the end, the results were clear: the algorithm performed as well as or better than the doctors, detecting over 97% of diabetic retinopathy severe enough to be referred for treatment. And, of course, the algorithm made its diagnosis in a matter of seconds, rather than the days a clinician would need.
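The "detecting over 97%" figure is a sensitivity: for screening, the five severity grades are typically collapsed into a binary referable/non-referable decision (commonly, moderate disease or worse, i.e. grade 2 and up, though that threshold is an assumption here), and the algorithm is judged on how many expert-referred cases it flags. A sketch with invented grades:

```python
REFERABLE_THRESHOLD = 2  # grade 2 (moderate DR) or worse triggers a referral

def sensitivity_specificity(expert, algo, threshold=REFERABLE_THRESHOLD):
    """Binary screening metrics from two sets of five-level grades."""
    expert_ref = [g >= threshold for g in expert]
    algo_ref = [g >= threshold for g in algo]
    tp = sum(e and a for e, a in zip(expert_ref, algo_ref))        # caught
    fn = sum(e and not a for e, a in zip(expert_ref, algo_ref))    # missed
    tn = sum(not e and not a for e, a in zip(expert_ref, algo_ref))
    fp = sum(not e and a for e, a in zip(expert_ref, algo_ref))    # false alarm
    sens = tp / (tp + fn)  # share of referable cases the algorithm flagged
    spec = tn / (tn + fp)  # share of non-referable cases correctly passed
    return sens, spec

# Invented grades for ten images; the algorithm misses one referable case.
expert = [0, 0, 1, 2, 3, 4, 2, 0, 1, 3]
algo   = [0, 1, 1, 2, 3, 4, 1, 0, 0, 3]
sens, spec = sensitivity_specificity(expert, algo)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

The two numbers pull against each other: lowering the threshold catches more disease but sends more healthy patients to specialists, which is why screening systems report both.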
One fascinating side note is that while doctors have a specific set of landmarks they look for to diagnose diabetic retinopathy, none of those factors were ever “taught” to the computer. Instead, the machines processed the images and their related severity grades and “learned” how to make diagnoses, perhaps using the same road signs as doctors or perhaps using an entirely different set of factors unknown to clinicians.
Discuss any aspect of artificial intelligence for long enough, and sooner or later the conversation will turn to one question: When will the robots replace us? Dr. Cuadros is quick to dismiss this worry, describing this new computerized process as “a good tool for quality assurance of humans, a good first pass so we can help to immediately triage patients.” A computerized screening for diabetic retinopathy is just that: an initial screening for one disease. It’s not treatment, it’s not patient education, and it’s not a comprehensive eye exam that could catch multiple conditions. Computerized glaucoma detection, for example, might produce too many false positives and negatives to be of any real use. “So we still encourage everyone to get an eye exam,” Cuadros says. “But the problem is that we know that people still don’t. And that’s not going to change with algorithms.” As far as being replaced, Dr. Olayiwola isn’t concerned that clinics are about to become obsolete; she believes that “high quality automation could be an important modality for many populations that struggle with diabetes” but for whom the burden of too many visits to specialists would be prohibitive. “There is tremendous potential for patients with barriers to mobility, financial resources and transportation,” she says. “It could be incredibly valuable to have the computer recognize patterns and perform quality control without the expense of seeing an eye doctor every time, and reserving the visits to the eye doctor for those that are most essential.”
For as much promise as artificial intelligence shows for screening patients in the United States, the potential in the developing world is even greater. Dr. Cuadros and his EyePACS colleagues have large screening projects in Mexico, Colombia, and Armenia; they’ve also done work in Guyana and Djibouti. One additional advantage of this global reach is that Dr. Cuadros is able to incorporate a much more diverse set of eyes into his database. “People come in all different colors and configurations,” he says. “In the past, some databases were very European and monochromatic. If you train your algorithm on just one color, then it’s not going to perform well across the board with everybody.”
Whether the algorithm is set loose in the United States or abroad, Dr. Cuadros is adamant that the focus always needs to be on the patient, not the technology. “Our goal, always, is that artificial intelligence will be used to help engage people in their care.” To that end, Dr. Cuadros provides primary care clinicians with resources and educational materials to help them interpret results for their patients and refer them on to specialists for treatment. In addition, the algorithm is available free of charge, so that any clinician who wants a new way to help patients can use it.
Using artificial intelligence to grade diabetic retinopathy is still in the experimental stages. Dr. Cuadros is waiting for approval from the FDA, but feels confident that “within one to five years we should find it prevalent out there in the field.” In addition, the technology shows promise for monitoring the progression of other eye diseases and also predicting which patients might be at risk of developing pathologies to begin with.
But no matter how far the technology spreads, Cuadros is clear on one thing: “we’re not just interested in the red light, green light scenario and getting a diagnosis and having that be the end of it. We need to always guide this in a way that’s going to be patient centric.” Artificial intelligence has great promise, but it doesn’t work if it ignores the lived experience of the people who use it. “In the end it all depends on how we as humans, as a society, make technology work for our patients.”